With ABE finally being released, I've skimmed a little of the info on it and one thing caught my eye: allowing web sites to define a rules.abe file that can be automatically read and used by NoScript. One of the key problems with web content is that you cannot blindly trust even "trusted" web sites because they could be compromised. So if the web site has been compromised such that there are now attacks in the web pages, how is it valid for NoScript to assume that the rules.abe file is still trustworthy?
Isn't a more secure solution to have a NoScript controlled database of rules and simply let web sites _refer_ to their rules by index/name? For example, some_auction_site.com has a file, HTTP header, or meta tag that says "use the 'some_auction_site US payments' NoScript ABE rules" which triggers NoScript to retrieve and use those rules for that page. In other words, since we cannot trust the web sites themselves because they could be compromised, isn't placing our trust into a NoScript controlled database safer?
-Foam
ABE for Web Authors
- Giorgio Maone
- Site Admin
- Posts: 9524
- Joined: Wed Mar 18, 2009 11:22 pm
- Location: Palermo - Italy
Re: ABE for Web Authors
@Foam Head:
Don't worry, there's no way for rules.abe to be exploited maliciously to any effective advantage: if an attacker can overwrite it, he can already do anything ABE could do, and much more.
BTW, notice that site-specific rules defined in rules.abe files apply exclusively to the web site hosting them, so in the worst-case scenario the attacker can prevent ABE users from opening the site, and nothing worse.
Even in that case, all the user needs to do is open NoScript Options|ABE and disable the offending ruleset, which is listed there.
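For illustration, a minimal site-provided ruleset might look like this (the hostname is made up, and the syntax follows the documented ABE rule format):

```
# Hypothetical rules.abe served from https://www.example.com/rules.abe.
# Per the above, it can only constrain requests targeting www.example.com.
Site www.example.com
Accept POST from SELF
Accept GET
Deny
```

Even if an attacker replaced this file, the worst he could do is deny requests to www.example.com itself.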
Re: ABE for Web Authors
Giorgio Maone wrote: BTW, notice that site-specific rules defined in rules.abe files apply exclusively to the web site hosting them, therefore in the worst scenario the attacker can prevent ABE users from opening the site and nothing worse.

Ah, righto. Restricting foo.com's rules.abe so it cannot set rules for bar.com is what I was concerned about.
Also, at what level do these rules apply? If I have http://www.some.web.site.com, does it load http://www.some.web.site.com/rules.abe, web.site.com/rules.abe, or site.com/rules.abe? And for whichever one is loaded, can it set rules for higher/lower sub-domains? I can see reasons why you'd want to restrict everything to the base URL's full hostname, but I can also see reasons for having a generic rules.abe at the 2nd- or 3rd-level sub-domain.
Lastly, I saw that "first matching rule ends searching" in the ABE Rules PDF, but I didn't see a reference to which is processed first: system.abe, user.abe, or the site specific rules.abe. I assume it's system, user, then rules so you can maintain proper security, right?
EDIT: Don't answer the last one -- it's answered here

Thanks,
-Foam
- Giorgio Maone
- Site Admin
Re: ABE for Web Authors
Foam Head wrote: Also, at what level do these rules apply? If I have http://www.some.web.site.com, does it load http://www.some.web.site.com/rules.abe, web.site.com/rules.abe, or site.com/rules.abe?

If you've got http://www.some.web.site.com, it currently does not load anything.
Fetching ABE rules is potentially time consuming, and they're served over HTTPS only, for obvious security reasons: if you open an HTTP URL, you're not guaranteed to have something listening on SSL as well (and a site served over plain HTTP is not worth protecting against CSRF anyway, IMO). Therefore rules are fetched only the first time you open an HTTPS URL cross-site, and exclusively from the same root as the URL you're opening.
So if, for instance, the site http://evil.com includes https://www.some.web.site.com/logout.do as an image, the rules from https://www.some.web.site.com/rules.abe are fetched, parsed and enforced before the "image" can be loaded and log you out; they're then cached for 24 hours or until you restart your browser.
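The fetch-and-cache behaviour described above can be sketched roughly like this (a deliberate simplification for illustration, not NoScript's actual code; the function names and the `fetch` callback are invented):

```python
import time
from urllib.parse import urlsplit

CACHE_TTL = 24 * 60 * 60  # cached rulesets live 24 hours (or until restart)
_cache = {}               # hostname -> (fetch_time, ruleset)

def rules_for_request(origin_url, target_url, fetch):
    """Return the ruleset to enforce before a sub-request is allowed."""
    origin, target = urlsplit(origin_url), urlsplit(target_url)
    cached = _cache.get(target.hostname)
    # Only a cross-site request to an HTTPS target triggers a fetch;
    # once cached, the ruleset applies to plain HTTP targets as well.
    if target.scheme != "https" or origin.hostname == target.hostname:
        return cached[1] if cached else None
    if cached and time.time() - cached[0] < CACHE_TTL:
        return cached[1]
    # Rules come exclusively from the root of the host being requested.
    ruleset = fetch("https://%s/rules.abe" % target.hostname)
    _cache[target.hostname] = (time.time(), ruleset)
    return ruleset
```

In the logout.do example, the first cross-site hit would fetch https://www.some.web.site.com/rules.abe and enforce it before later requests, HTTPS or plain HTTP, reach that host.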
Foam Head wrote: And for whichever one was loaded, can it set rules for higher/lower sub-domains? I can see reasons why you'd want to restrict everything to the base url's full hostname

It's restricted to the base URL's full host name, indeed (and once fetched, it applies to plain HTTP URLs as well).
If you want to share the same ruleset across multiple sites, you can soft-link it at the filesystem level, but I deemed falling back to the top domain automatically or through redirections potentially dangerous, albeit convenient.
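The soft-link approach can be sketched as follows (paths, hostnames and ruleset content are all invented; in practice the link would sit in each vhost's document root, and the ruleset body would follow the ABE rule syntax):

```python
import os
import tempfile

# Two hypothetical vhost document roots that should share one ruleset.
root = tempfile.mkdtemp()
www = os.path.join(root, "www.example.com")
shop = os.path.join(root, "shop.example.com")
os.makedirs(www)
os.makedirs(shop)

# The "real" ruleset lives under www.example.com ...
ruleset = "Site www.example.com\nAccept POST from SELF\nAccept GET\nDeny\n"
with open(os.path.join(www, "rules.abe"), "w") as f:
    f.write(ruleset)

# ... and shop.example.com serves it through a soft link, so both
# hosts stay in sync from a single file.
os.symlink(os.path.join(www, "rules.abe"),
           os.path.join(shop, "rules.abe"))
```

Because both paths resolve to the same file, editing www.example.com's copy updates what shop.example.com serves as well (this requires symlink support, i.e. a POSIX filesystem).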
Re: ABE for Web Authors
Giorgio Maone wrote: If you've got http://www.some.web.site.com, it currently does not load anything.
Since fetching ABE rules is potentially time consuming, and they're served on HTTPS only for obvious security reasons, and if you open a HTTP URL you're not guaranteed to have also something listening on SSL (and a site served through HTTP is not worth protecting against CSRF IMO anyway), rules are fetched only first time you open a HTTPS URL cross-site, and exclusively from the same root as the URL you're opening.
So if, for instance, site http://evil.com includes https://www.some.web.site.com/logout.do as an image, rules from https://www.some.web.site.com/rules.abe are fetched, parsed and enforced before the "image" can be loaded and log you out, then they're cached for 24 hours or until you restart your browser.

This is a great explanation of how ABE works. To be honest, after reading all of http://noscript.net/abe I didn't even come close to understanding this. IMHO this kind of description should be added to http://noscript.net/abe ASAP.
Also, while I see the logic behind limiting this to HTTPS requests at the start, is it going to become an option in the future? Smaller web sites (including the forums here at informaction.com) don't use SSL for anything, but are still worth protecting, IMHO.
Lastly, why impose the 24-hour/restart cache limitations on the rules.abe file? Shouldn't rules.abe be treated just like any other HTTP resource, i.e. stored in the browser's cache and re-checked for changes per the HTTP cache-related headers?
Giorgio Maone wrote:
Foam Head wrote: And for whichever one was loaded, can it set rules for higher/lower sub-domains? I can see reasons why you'd want to restrict everything to the base url's full hostname
It's restricted to the base URL's full host name, indeed (and once fetched, it applies to plain HTTP URLs as well).
If you want to share the same ruleset across multiple site, you can soft-link it at the filesystem level, but I deemed falling back to the top domain automatically or through redirections potentially dangerous, albeit convenient.

Now that I understand how ABE actually retrieves the rules.abe files, I see that having lower-level rules.abe files would only further complicate and delay their retrieval.
Thanks for the responses and keep up the good work!

-Foam