xss filter is processing blocked requests
Posted: Tue Jun 15, 2010 12:17 am
by al_9x
fx 3.6.3, ns 1.9.9.90, new profile, forbid iframes, apply to trusted
load the following page from a local web server
Code:
<iframe src="http://url.invalid/?s=%3Cmeta"></iframe>
you get the xss alert
Re: xss filter is processing blocked requests
Posted: Tue Jun 15, 2010 8:54 pm
by Giorgio Maone
Known issue. IFrame blocking currently happens after XSS checks for various implementation reasons.
Re: xss filter is processing blocked requests
Posted: Tue Jun 15, 2010 9:28 pm
by al_9x
Giorgio Maone wrote:Known issue. IFrame blocking currently happens after XSS checks for various implementation reasons.
What are your plans for this? XSS alerts are troublesome enough as it is; you can't really tell what happened, why, or what to do about it, so it would be nice not to get them for non-existent requests.
One question about sanitization in general. Blocking a suspicious request seems reasonable, but couldn't modifying it and letting it proceed have unpredictable consequences?
Re: xss filter is processing blocked requests
Posted: Tue Jun 15, 2010 9:38 pm
by Giorgio Maone
al_9x wrote:
What are your plans for this?
No plan at this moment. The implementation issues which led to putting things in this order are still present in the Firefox resource loading pipeline.
al_9x wrote:Blocking a suspicious request seems reasonable, but couldn't modifying it and letting it proceed have unpredictable consequences?
Nope, or at least no unwanted consequences have been observed/reported so far because:
- Only GET (idempotent) requests are "sanitized"
- POST requests (the ones which may cause a change in the server-side state, e.g. a transaction being executed) are turned into GET requests and their payload is entirely removed, so they can't cause any side effect either (see the sketch below)
Compared to this, Microsoft's and Google's approaches have proven considerably more problematic in their consequences, with Microsoft's XSS filter introducing new XSS vulnerabilities of its own and Google's allowing an attacker to disable JavaScript on a web page at will.
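A rough sketch of this policy, in hypothetical Python rather than NoScript's actual code (function names and structure are illustrative assumptions):
Code:
def defang_suspicious_request(method, url, body):
    """Illustrative sketch of the GET/POST policy described above."""
    if method == "POST":
        # A POST may change server-side state, so its payload is dropped
        # entirely and the request is downgraded to GET: it can no longer
        # complete any transaction on the server.
        method, body = "GET", None
    # The remaining GET (idempotent by HTTP convention) is sanitized
    # rather than blocked; see the character-level sketch later in the thread.
    return method, sanitize_url(url), body

def sanitize_url(url):
    # Placeholder for the character-level sanitization discussed below.
    return url.replace("<", " ").replace('"', " ")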
Re: xss filter is processing blocked requests
Posted: Tue Jun 15, 2010 10:09 pm
by al_9x
Giorgio Maone wrote:al_9x wrote:
What are your plans for this?
No plan at this moment. The implementation issues which led to putting things in this order are still present in the Firefox resource loading pipeline.
During xss sanitization is it not possible to check that a request is coming from an iframe that will be blocked?
Giorgio Maone wrote:al_9x wrote:Blocking a suspicious request seems reasonable, but couldn't modifying it and letting it proceed have unpredictable consequences?
Nope, or at least no unwanted consequences have been observed/reported so far because:
- Only GET (idempotent) requests are "sanitized"
Idempotence is supposed (but you can't really rely on it) to hold for the original request. However, after you modify it and send unexpected data, even a single request can have unintended consequences. After all, feeding unexpected/garbage data to software is a testing technique: it's meant to reveal bugs and crash things, and I don't think revealing bugs is a good idea on a live app.
Re: xss filter is processing blocked requests
Posted: Tue Jun 15, 2010 10:23 pm
by Giorgio Maone
al_9x wrote:Idempotence is supposed (but you can't really rely on it) to hold for the original request.
Idempotence is implied by the use of the GET method, at least in the intentions of the HTTP protocol, independently of the content of the original/modified request, because this method should never be used to modify server state (POST, PUT and DELETE serve that purpose).
As you said, you can't really rely on GET always being used consistently with HTTP, but nevertheless the kind of sanitization performed by NoScript, i.e. replacing key characters such as <, (, " and ' with whitespace (ASCII 32) and turning some well-known identifiers (such as "document") to uppercase, is very unlikely to have serious side effects on its own, even in the buggy case where GET is used with a transactional meaning.
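A minimal sketch of that character-level sanitization, again in hypothetical Python (the exact character and identifier lists here are assumptions based on the description above):
Code:
import re

# Characters whose presence in a query string can open a script/markup context
RISKY_CHARS = "<('\""
# Well-known script identifiers, uppercased so they no longer match the
# case-sensitive JavaScript globals (e.g. "document" -> "DOCUMENT")
RISKY_IDENTIFIERS = ("document", "location")

def sanitize_query(query):
    """Defang a suspicious GET query string instead of blocking the request."""
    for ch in RISKY_CHARS:
        query = query.replace(ch, " ")  # whitespace, ASCII 32
    for ident in RISKY_IDENTIFIERS:
        query = re.sub(ident, ident.upper(), query, flags=re.IGNORECASE)
    return query

print(sanitize_query('s=<meta onload="document.cookie">'))
# prints: s= meta onload= DOCUMENT.cookie >

The sample input is just an expanded form of the s=%3Cmeta query from the first post, URL-decoded.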
al_9x wrote:
During xss sanitization is it not possible to check that a request is coming from an iframe that will be blocked?
Not sure about it (IIRC, in the phase where XSS checks need to take place you often can't tell whether the request is really going to be loaded in a frame or in a new window), but I'll check.
Re: xss filter is processing blocked requests
Posted: Tue Jun 15, 2010 11:06 pm
by al_9x
There is more to this than idempotence. Even if the request is non-transactional in its original (expected) form, once it's modified and unexpected data is fed in, any piece of code in the software stack processing it could react unexpectedly.
You know software is typically written for the expected case, with error handling and testing as an afterthought. Therefore feeding unexpected data to a live app cannot be asserted to be safe.
Furthermore, beyond the safety of it, are the sanitized requests semantically (to the app) equivalent to the original? Doesn't seem possible. If they are not, then even when perfectly safe, they will not perform their intended task, becoming a noop.
So a sanitized request is at best a noop and at worst a bug; what's the point of issuing it, then? Am I missing something?
Re: xss filter is processing blocked requests
Posted: Wed Jun 16, 2010 7:34 pm
by Giorgio Maone
al_9x wrote:So a sanitized request is at best a noop and at worst a bug; what's the point of issuing it, then? Am I missing something?
Nope, in many cases it just does what it is meant to do, maybe with slightly less precision (think of a search engine request for "location=name", which gets turned into a search for "LOCATION NAME" and will probably return a very similar result set).
Furthermore, this design lets you examine the landing page and the source request, rather than blocking you outright and triggering your "allow everything, just let me pass" reflex.
Re: xss filter is processing blocked requests
Posted: Wed Jun 16, 2010 8:22 pm
by al_9x
You've picked a convenient example: a search engine, designed to be tolerant, to ignore strange characters and concentrate on words. I, however, am referring to applications (e.g. banking, purchasing, billing) where the data exchanged is highly structured and encoded, making them far more sensitive to any modification. In such applications, sanitization cannot be asserted to be safe, nor can a sanitized request be expected to perform its original task. It is therefore, at best, useless.