combination of Sandbox and Anonymize actions?

Topic review

Re: combination of Sandbox and Anonymize actions?

by GµårÐïåñ » Fri Aug 24, 2012 10:34 pm

tlu wrote:I certainly will :) Thanks for your reply!
You are very welcome, always.

Re: combination of Sandbox and Anonymize actions?

by tlu » Fri Aug 24, 2012 11:13 am

GµårÐïåñ wrote: Just keep an eye out for it.
I certainly will :) Thanks for your reply!

Re: combination of Sandbox and Anonymize actions?

by GµårÐïåñ » Thu Aug 23, 2012 9:38 pm

@tlu, unfortunately both Thrawn and I have been really busy, especially me. While we have been working on the interface and getting the ideas going, we are still setting up the dev environment and deciding which approach to take, so that we can also preserve integration with NS in the future. So we are working on it, but we don't have an outside-testable version ready yet; when we do, we will post a thread about it and provide it to everyone who wants to test it. Just keep an eye out for it.

Re: combination of Sandbox and Anonymize actions?

by tlu » Thu Aug 23, 2012 12:50 pm

Thrawn wrote:Is Anonymize+Sandbox on the radar to be implemented? I'd love to support it in SABER.
Thrawn, just out of curiosity: Have you made any progress with SABER? Is there an alpha/beta version to test? What you were planning to implement sounds very interesting, indeed!

Re: combination of Sandbox and Anonymize actions?

by Thrawn » Fri Jun 22, 2012 9:27 am

Giorgio Maone wrote:The fact they cannot currently be combined is a bug in the implementation (not even in the grammar) and will eventually be fixed, even though there are currently many other priorities.
How would that look? The ABE Rules PDF indicates that each predicate contains one Action, and that processing stops as soon as one rule matches, so I'm not sure how it would allow two actions to be applied. Or does it mean that all predicates for a rule should in theory be applied, regardless of how many match?
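Just to make the question concrete, here are the two readings I can imagine, sketched against a made-up .example.com site. I have no idea whether either shape is what the fixed grammar would actually accept; the combined "Sandbox Anonymize" predicate in particular is pure guesswork on my part.

    # Hypothetical sketches only -- current ABE cannot combine these actions.
    # Reading 1: a single predicate naming both actions.
    Site .example.com
    Accept from SELF
    Sandbox Anonymize

    # Reading 2: two single-action predicates, which would require matching
    # to continue past the first hit so that both actions end up applied.
    Site .example.com
    Accept from SELF
    Anonymize
    Sandbox

Either way, the "first match wins" semantics described in the PDF would need some adjustment.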

Re: combination of Sandbox and Anonymize actions?

by Thrawn » Tue Jun 19, 2012 10:20 am

Is Anonymize+Sandbox on the radar to be implemented? I'd love to support it in SABER. As well as the attacks that Giorgio mentioned, a policy of Anon+Sandbox could defend against:
  • CSRF/XSS originating from (unwisely) whitelisted sites.
  • XSS 0-days. Yes, I know Giorgio works his tail off to fix these, but I'd rather he didn't have to. Besides, 'default deny', instead of an arms race, is what makes NoScript so good in the first place.
  • XSS attacks on poorly-coded sites that require XSS filter exceptions.
It would be rather like running RequestPolicy, except that it wouldn't block static content like images (including web bugs...) or stylesheets, so fewer sites would break.

Re: combination of Sandbox and Anonymize actions?

by Tom T. » Fri Dec 09, 2011 10:46 am

Giorgio Maone wrote:The two attacks he outlined are CSRF using a GET request (which in an ideal world would be a non-issue, since GET requests are not supposed to change the status of a web application, but unfortunately incompetence is the rule) ...
Ahh, thank you, Giorgio. I knew that NS (Advanced > XSS) can "Turn cross-site POST requests into (supposedly "idempotent" -- IIRC, that word used to be there) data-less GET requests". But IIUC, you are saying that site coders are so ignorant nowadays that they have, *in essence*, eliminated the distinction between POST and GET. Sad, indeed... :cry:

In a future release, when the ABE bug is fixed as noted, would you be able to include a default System Rule that protects even novices from this class of attack, without any configuration? Or would that break many pages, cause false positives, etc., thus requiring user-defined rules? If the former, I respectfully suggest adding that to the TODO as an RFE.

If not, the ABE FAQ could perhaps offer a generic template for moderate-level users to copy/paste as needed for their own sites... just one more thought for the many on your list. :)
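Something along these lines, perhaps (just my own sketch, not an official template; mysite.example is a placeholder, and anyone copying it should test it against their own site first):

    # Per-site sketch: accept requests that originate from the site itself,
    # and strip cookies/authentication data from everything else.
    Site .mysite.example
    Accept from SELF
    Anonymize

Accept, Anonymize and SELF are existing keywords in the ABE rules document; whether a one-size-fits-all version of this could ship as a default System Rule without breaking legitimate cross-site logins is exactly the question above.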

MacOtaku wrote:Alright then; I shan't belabour the point any longer. Thanks everyone for your time and efforts, especially Giorgio and Tom. I'll keep checking the release notes, and in the meantime, I'll read the documentation Tom suggested again, since it's probably changed in the last few years.
You're very welcome, and the documentation most certainly has changed over time. It will continue to do so, and getting on the latest development build channel will provide info much faster, in almost real time, though very briefly. Still, what you see there may prompt you to research the new feature, fix, etc.
MacOtaku wrote:Btw (O/T), on the spam filter false positive: I cleared Fx's recent history (cookies included) mid-writing, i.e., between logging in and submitting, because another site was exhibiting an annoying glitch. I didn't immediately remember that I'd done so before I clicked Preview, and so was initially a little surprised to be presented with a post form with a username box and a captcha. I clicked the new captcha button a couple of times, because I wasn't sure whether to include the punctuation in the first two. After I saw the "Oops" page, I realized what happened, and tried to post my message again after logging in, and when that failed, I edited my post (significantly, I thought, but perhaps it was still too similar) and tried again. I don't know whether this would be of any use, but I thought I should provide more details about what happened.
No need to shrink that, and any glitch in the forum software should be reported. Since you were posting anyway, it's hard to see including that as going O/T. If a third party interrupted your main topic to say, "I had this login issue", yes, they should instead start a new thread for that. But I'm glad you included it. :)

My guess is that the best thing to do after the repeated failures would be to clear *everything* (cache, cookies, history), or just close the browser and start all over again. I just tried very briefly to reproduce that, by composing (and saving in a text doc, lol), then clearing all, then going to another open tab at this forum and hitting Reload. Indeed, I was given the reCaptcha treatment. But instead, I logged in, and had no trouble coming back to this partially-composed message, previewing, completing, and submitting. However, I did not go through all of the steps and iterations that you did. So I suspect that one or both of the first two recommendations would have worked -- not that it will ever happen again. :D
MacOtaku wrote:One final note: Installing Fx on supportees' computers, setting it as their default browser, installing NoScript, and adding a few HTTPS-only and ABE rules to insulate certain highly-targeted sites, together, have saved me about as much Windows clean-up time as getting people to use non-admin accounts and teaching them about the importance of unique & distinct passwords. Your efforts go a long way. Thanks again.
:) Thank you for those kind words. It encourages us to continue to donate our time to help here. And while I always hesitate to bother Giorgio unless/until certain that his response is needed (as here, e.g.), I don't think he ever gets tired of receiving words of appreciation. 8-) I'll tap him on the shoulder (Web-ly speaking, of course) and I'm sure your real-world experiences with NoScript will brighten his day.

(and please tell your family, friends, co-workers, employees, supervisors, random strangers, enemies, etc. about NoScript. :D )

Re: combination of Sandbox and Anonymize actions?

by Giorgio Maone » Thu Dec 08, 2011 10:26 am

Tom T. wrote: So, NS's XSS protection will not defeat the described attack, especially with third-party scripting denied in all but extraordinary cases?
(not counting SiteX.com + X-static.com; akamai.net, and other "benign" third parties.)
The two attacks he outlined are CSRF using a GET request (which in an ideal world would be a non-issue, since GET requests are not supposed to change the status of a web application, but unfortunately incompetence is the rule) and exploiting a client-side JavaScript logic flaw through data passed in the hash (which is even less likely, but still possible).
Both are out of the scope of any XSS filter, because they're not cross-site scripting attacks, and are conducted against trusted web sites.

Re: combination of Sandbox and Anonymize actions?

by Tom T. » Thu Dec 08, 2011 10:03 am

Giorgio Maone wrote:Please notice that ABE's Anonymize and Sandbox were designed to allow those who can bear the burden to protect themselves against the classes of attack which you outlined in your pastebin piece.
The fact they cannot currently be combined is a bug in the implementation (not even in the grammar) and will eventually be fixed, even though there are currently many other priorities.
Thank you for reporting.
So, NS's XSS protection will not defeat the described attack, especially with third-party scripting denied in all but extraordinary cases?
(not counting SiteX.com + X-static.com; akamai.net, and other "benign" third parties.)

Re: combination of Sandbox and Anonymize actions?

by MacOtaku » Wed Dec 07, 2011 2:43 am

Alright then; I shan't belabour the point any longer. Thanks everyone for your time and efforts, especially Giorgio and Tom. I'll keep checking the release notes, and in the meantime, I'll read the documentation Tom suggested again, since it's probably changed in the last few years.

[Btw (O/T), on the spam filter false positive: I cleared Fx's recent history (cookies included) mid-writing, i.e., between logging in and submitting, because another site was exhibiting an annoying glitch. I didn't immediately remember that I'd done so before I clicked Preview, and so was initially a little surprised to be presented with a post form with a username box and a captcha. I clicked the new captcha button a couple of times, because I wasn't sure whether to include the punctuation in the first two. After I saw the "Oops" page, I realized what happened, and tried to post my message again after logging in, and when that failed, I edited my post (significantly, I thought, but perhaps it was still too similar) and tried again. I don't know whether this would be of any use, but I thought I should provide more details about what happened.]

One final note: Installing Fx on supportees' computers, setting it as their default browser, installing NoScript, and adding a few HTTPS-only and ABE rules to insulate certain highly-targeted sites, together, have saved me about as much Windows clean-up time as getting people to use non-admin accounts and teaching them about the importance of unique & distinct passwords. Your efforts go a long way. Thanks again.

Re: combination of Sandbox and Anonymize actions?

by Giorgio Maone » Tue Dec 06, 2011 2:13 pm

Please notice that ABE's Anonymize and Sandbox were designed to allow those who can bear the burden to protect themselves against the classes of attack which you outlined in your pastebin piece.
The fact they cannot currently be combined is a bug in the implementation (not even in the grammar) and will eventually be fixed, even though there are currently many other priorities.
Thank you for reporting.

Re: combination of Sandbox and Anonymize actions?

by Tom T. » Tue Dec 06, 2011 12:10 pm

MacOtaku wrote:I tried several times to post a reply, but it kept getting whacked by a false positive in the spam filter. I put my reply on Pastebin, at: Re: combination of Sandbox and Anonymize actions?
I'm not sure why, so I copy/pasted your post here, as raw data.
MacOtaku wrote: I guess the specific examples I chose were less than ideal. I selected them for familiarity, and because I am not aware of their currently having vulnerabilities of the class I brought up, which were meant hypothetically, not literally.

I think the general idea — to provide a means to allow only “safe” requests to passive resources without identifying headers — is still valuable. I know NS has heuristic XSS protection, based on preemptively sanitizing suspicious-looking text patterns, to prevent the consequences of the input-sanitization failures common to poorly-written sites. However, heuristics can fail, and there are other classes of exploits besides XSS.

I'll try a new example, but I'll stick to generalities this time, because I think it would be irresponsible for me to point out specific cases where this is currently possible: Let's say I'm logged into site X, which is vulnerable to CSRF. I also visit site Y, into which someone has inserted an exploit of the CSRF vulnerability in site X. (This could resemble an ordinary, benign request; it could just be a question of query string.) If the request to site X from site Y were to come with the Cookies (or Authorization) used by site X to authenticate my session, then the attack could succeed; but, if the third-party request to site X were to be Anonymized, and hence lack the authenticating Cookies (or Authorization), then the attack would (almost certainly) fail.

As another example: maybe site X also has JS which is permitted (necessarily, for the site to work), but which contains a logic error that is exploitable through a request containing a fragment identifier with Ajax parameters of the same sort as are used in the normal operation of site X. (This again could resemble an ordinary, benign request; it could just be a matter of different parameter values.) If, on site Y, a frame is injected containing a page on site X with an exploitative fragment identifier, then, since JS is permitted on site X, the exploit will cause the script on site X to perform actions specified by the attack payload, such as generating malicious requests. If third-party requests to site X from site Y were to be Sandboxed, then the attack could not succeed, as the maliciously-inserted frame would contain a page on site X which (due to having been loaded by a third-party request) could not run JS.

In both these examples (even if both were attempted against the same site at once), if third-party requests to Site X were both Anonymized and Sandboxed, then both attacks would fail. Being able to so restrict third-party requests would block some classes of attack vectors altogether, without creating the need to allow sites to load images and other heavier-than-text assets from content distribution networks on a site-by-site basis.
I still believe that NoScript's XSS protection alone would defeat the attacks you described, but I think it's time for Giorgio himself to address your specific scenarios in more detail. I'll ask him to respond at his earliest convenience.

In the meantime, please study http://noscript.net/features#xss

There is more to it than mere heuristics (although that is included).
Whenever a certain site tries to inject JavaScript code inside a different trusted (whitelisted and JavaScript enabled) site, NoScript filters the malicious request neutralizing its dangerous load.
That's not heuristic; it's merely detecting an improper origin trying to inject code into your current page.

The heuristics come into play when one of your *trusted* sites tries to inject code into another trusted site, and you can fine-tune it:
Furthermore, NoScript's sophisticated InjectionChecker engine checks also all the requests started from whitelisted origins for suspicious patterns landing on different trusted sites: if a potential XSS attack is detected, even if coming from a trusted source, Anti-XSS filters are promptly triggered.

This feature can be tweaked by changing the value of the noscript.injectionCheck about:config preference as follows:

0 - never check
1 - check cross-site requests from temporary allowed sites
2 - check every cross-site request (default)
3 - check every request
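If you want to pin that choice rather than toggling it by hand in about:config, the standard Firefox way would be a user.js line in your profile (just a sketch; the pref name is the one quoted above, and the value is whichever of the four levels you prefer):

    // user.js sketch: set NoScript's InjectionChecker scope explicitly.
    // 2 = check every cross-site request (the default); 3 = check every request.
    user_pref("noscript.injectionCheck", 2);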
And the same page notes:
NoScript also protects against most XSS Type 2 (Persistent) attacks: in fact, the exploited vulnerabilities usually impose space constraints, therefore the attacker is often forced to rely on the inclusion of external scripts or IFrames from origins which are already blocked by default.
But still, I will ask Giorgio to fill in whatever I've missed here.

Re: combination of Sandbox and Anonymize actions?

by MacOtaku » Tue Dec 06, 2011 7:40 am

I tried several times to post a reply, but it kept getting whacked by a false positive in the spam filter. I put my reply on Pastebin, at: Re: combination of Sandbox and Anonymize actions?

Re: combination of Sandbox and Anonymize actions?

by Tom T. » Mon Dec 05, 2011 10:36 pm

He who does not know wrote:Tim, please understand that I am extremely grateful for your support and the hard work of everyone here, the very least I can do is state a clear thank you to all of you! Especially Giorgio Maone, thank you!
And thank you for your kind words. :)
He who does not know wrote: Or I accept the security/privacy risks involved with visiting (whitelisting) domains hosting privacy/security compromising scripts on the main level domain, while using the default 2nd level domain (TA-2LD) feature.
I have to accept that all domains I visit are whitelisted throughout the whole browsing session, and not just during the visit of said domain. And those domains are still whitelisted while visiting completely different 2nd level domains.
Have you considered a slightly more-restrictive choice?
NS Options > General > Uncheck the "TA top-level sites by default", and check "Allow sites opened through bookmarks."

The idea is that we bookmark only the sites we're likely to revisit some number of times (and usually, have some level of trust, or at least more than a random site that TLD allows). So when you browse to a new site, or click a link to get to a new site, the "auto-allow top-level" does NOT apply. They stay default-denied, which gives you a chance to look it over before deciding whether you need script there, and whether it deserves your trust.
He who does not know wrote: There is no way to make NS whitelist (by default) a certain domain only during the visit to that domain and revoke the temporary allow after leaving it.
This feature *may* have been requested before, but I like the idea: "Revoke temporary permissions upon closing connection to site". (Last tab closes, if you have multiple open.)
He who does not know wrote: The XSS feature of NS protects against the majority of XSS risks that could arise from this blatant whitelisting.
Practically *all* of them. And when someone does invent a new way to exploit this (or any other exploit), Giorgio drops all else (hopefully not his baby :o ), and rushes to add protection against it to NoScript. Which is why auto-updating NS is important, or, even better, getting on the latest development build channel. You'll be among the first to be protected, and feedback from those who test release-candidate builds is very much appreciated.
He who does not know wrote: Using Google was a flawed example. The example I wanted to use was a website that hosts, on its own main domain, the scripts or privacy-violating content that it serves to run on 3rd-party websites.
Could you please name a few sites, so I can see for myself what it is you're facing, and what you're trying to accomplish? NS is very capable of being fine-tuned.
He who does not know wrote: I am doing it the wrong way and I currently like it :).
"Wrong" is a judgment or opinion. If you're happy, we're happy. Just truing to show easier ways to accomplish this, and possibly increase your safety and flexibility (e. g., to view 3rd-party Flash video).
He who does not know wrote: RequestPolicy is definitely an option that also uses specific rules for specific domains (whitelisting).
I've been using both for years -- because Giorgio recommended it to me. :) I tend to listen to his advice. :ugeek:
He who does not know wrote: Let me repeat. I am extremely grateful for this free product and I enjoy the security it gives me every single day. To quote Vash the Stampede: Love and peace! Thank you!
And the same nice wishes to you.
- Tom :)

Re: combination of Sandbox and Anonymize actions?

by He who does not know » Mon Dec 05, 2011 11:31 am

Tim, please understand that I am extremely grateful for your support and the hard work of everyone here, the very least I can do is state a clear thank you to all of you! Especially Giorgio Maone, thank you!

I had a hunch about those facts from the get-go, but thanks to you I now have them more clearly stated.

Either I do not allow scripts anywhere and manually whitelist all the domains that actually need whitelisting during my browsing. It is of course open to argument how many are actually needed; that depends on browsing habits, including which features are necessary and how many new domains are visited each day.

Or I accept the security/privacy risks involved with visiting (whitelisting) domains hosting privacy/security compromising scripts on the main level domain, while using the default 2nd level domain (TA-2LD) feature.
I have to accept that all domains I visit are whitelisted throughout the whole browsing session, and not just during the visit of said domain. And those domains are still whitelisted while visiting completely different 2nd level domains.
There is no way to make NS whitelist (by default) a certain domain only during the visit to that domain and revoke the temporary allow after leaving it. The XSS feature of NS protects against the majority of XSS risks that could arise from this blatant whitelisting.

What I would like to achieve could be done by using ABE combined with the default TA-2LD, at the cost of losing the ability to view 3rd-party Flash.

Using Google was a flawed example. The example I wanted to use was a website that hosts, on its own main domain, the scripts or privacy-violating content that it serves to run on 3rd-party websites.

What I would like to do is pretty much what NS is about NOT doing in the first place, possibly because it is not sane: having a general filter applied to all websites, removing only what I do not want (anonymize, sandbox with exceptions), rather than specific rules for specific websites. To put it in a clear statement: I am doing it the wrong way and I currently like it :).

RequestPolicy is definitely an option that also uses specific rules for specific domains (whitelisting).

I will not try to waste your or anyone else's time by asking to have this feature added, since it seems like a very uncommon approach. I will not try to waste your or anyone else's time arguing that it is a sane method. Your time is much better spent helping people who want to use NS in the way it is constructed for.

Let me repeat. I am extremely grateful for this free product and I enjoy the security it gives me every single day.

To quote Vash the Stampede: Love and peace!

Thank you!
