combination of Sandox and Anonymize actions?

Discussions about the Application Boundaries Enforcer (ABE) module
MacOtaku
Posts: 5
Joined: Wed May 19, 2010 2:44 am

combination of Sandox and Anonymize actions?

Post by MacOtaku »

Could an action be provided to both sandbox and anonymize a request? That is: strip the Authorization and Cookie headers, turn methods other than GET, HEAD, and OPTIONS into GET, remove upload data, and then send the modified request with JavaScript and other active content (e.g., plugin embeddings) disabled in the landing page. [Paraphrasing the ABE Rules Syntax and Capabilities document.]
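In HTTP terms, the combined action would transform a third-party request roughly as follows (an illustrative sketch based only on the description above; tracker.example is a placeholder host, and the exact header set shown is an assumption):

Code: Select all

# Before: a credentialed third-party POST (placeholder request)
POST /track HTTP/1.1
Host: tracker.example
Cookie: id=12345
Authorization: Basic dXNlcjpwYXNz
...upload data...

# After Anonymize + Sandbox: method forced to GET, Cookie/Authorization
# and upload data stripped; the landing page would then render with
# JavaScript and plugin content disabled
GET /track HTTP/1.1
Host: tracker.example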
Last edited by Tom T. on Mon Oct 15, 2012 6:15 am, edited 1 time in total.
Reason: restore link to full size
Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
He who does not know

Re: combination of Sandox and Anonymize actions?

Post by He who does not know »

I believe I have managed to do just that.

By putting the Anonymize rule in the SYSTEM ruleset and the Sandbox rule in the USER ruleset (presumably each ruleset is evaluated separately, so both actions get a chance to apply to the same request).

SYSTEM:

Code: Select all

Site *
Accept from SELF++
Anonymize

USER:

Code: Select all

Site ALL
Accept from SELF++
Sandbox

I use this combined with the SELF++ rule and "temporarily allow 2nd level domain", so that scripts run only on, and data is anonymized out of, the current domain, even though other domains that might be connected to the current one have been temporarily allowed during the browsing session; that is, third-party domains that have been whitelisted by visiting them.

Seems to work so far, but do take note of my username :)

Am I doing it wrong? Or is this explanation just confusing?

Cheers!
Mozilla/5.0 (Ubuntu; X11; Linux x86_64; rv:8.0) Gecko/20100101 Firefox/8.0
MacOtaku
Posts: 5
Joined: Wed May 19, 2010 2:44 am

Re: combination of Sandox and Anonymize actions?

Post by MacOtaku »

Interesting approach. I'll have to see whether I can make that work.

I still think this would be good as an action in the ABE grammar. It would allow protection against two different sets of attack classes for the same site at the same time, without needing to resort to an arcane work-around.

It would be useful for sites which both contain their own active content and allow user-generated content. It would allow for, e.g., ensuring that member-posted external assets in a forum can only be plain text, images, etc., and that they would not carry any superfluous information of the sort that can facilitate things like tracking browsing habits. So if, for example, someone manages to post an attack script on a site with poorly-written (or absent) content sanitization, the script wouldn't load (Sandbox). At the same time, if someone posts an external image, loaded from a server they control, on the same site and on other sites, they wouldn't be able to use cookies to track everyone who views the post, and thereby correlate the online identities of members across multiple sites and build profiles of screen names (and IP addresses) to target (Anonymize).
Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:8.0.1) Gecko/20100101 Firefox/8.0.1
He who does not know

Re: combination of Sandox and Anonymize actions?

Post by He who does not know »

I absolutely agree, that is the very reason I tried that solution.

It also provides the general protection of allowing only the "native" scripts on all websites, making XSS very challenging, while keeping the main website as functional as possible.

I ended up having to abandon this approach, though, since several of my favorite sites sadly use 3rd-party providers of Flash video hosting. Had I not used ABE, I could have used rules in NoScript to allow it, but with my limited knowledge I have not yet found a workaround for that particular issue.

Currently I have to settle for just using Anonymize, which does provide quite decent protection, though.

In your scenario, however, you would never end up surfing to that individual's unreliable server within the same session, would you? So if you were to just use "allow 2nd level domain" and Anonymize, scripts should never run from the external server anyway?

I mainly tried to use this solution as a way to stop tracking scripts from being run on other sites after happening to visit the tracking server during a browser session (Google, for example).
Mozilla/5.0 (Ubuntu; X11; Linux x86_64; rv:8.0) Gecko/20100101 Firefox/8.0
Tom T.
Field Marshal
Posts: 3620
Joined: Fri Mar 20, 2009 6:58 am

Re: combination of Sandox and Anonymize actions?

Post by Tom T. »

He who does not know wrote: I mainly tried to use this solution as a way to stop tracking scripts from being run on other sites after happening to visit the tracking server during a browser session (Google, for example).
I must be missing something here. If that was your purpose, why couldn't you just mark the tracking script Untrusted after happening to visit its server? It should show in NS Menu.

Which script? Google-analytics.com? For many popular tracking scripts, including G-A, NoScript will run Surrogate Script by default anyway. Please visit that article when you have a chance.

Sorry if I'm missing some element here.
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.24) Gecko/20111103 Firefox/3.6.24
He who does not know

Re: combination of Sandox and Anonymize actions?

Post by He who does not know »

Tom T. wrote: I must be missing something here. If that was your purpose, why couldn't you just mark the tracking script Untrusted after happening to visit its server? It should show in NS Menu.

Which script? Google-analytics.com? For many popular tracking scripts, including G-A, NoScript will run Surrogate Script by default anyway. Please visit that article when you have a chance.

Sorry if I'm missing some element here.
Thank you for answering.

You are absolutely right; however, I am simply too lazy to add hundreds of tracking/privacy/security-violating sites to the Untrusted list. I just used Google as an example.
I would much rather block them all, while avoiding the risk that happening to visit one of them breaches privacy/security for the rest of the browsing session.
Trying to walk the line between effort and security.

Surrogates are not run on whitelisted sites, right?

Sorry for being unclear.
Last edited by Tom T. on Sun Dec 04, 2011 12:19 pm, edited 1 time in total.
Reason: add missing quote tag
Mozilla/5.0 (Ubuntu; X11; Linux x86_64; rv:8.0) Gecko/20100101 Firefox/8.0
Tom T.
Field Marshal
Posts: 3620
Joined: Fri Mar 20, 2009 6:58 am

Re: combination of Sandox and Anonymize actions?

Post by Tom T. »

(I edited your post to add the missing quote tag at the beginning of my words.)
He who does not know wrote: You are absolutely right; however, I am simply too lazy to add hundreds of tracking/privacy/security-violating sites to the Untrusted list. I just used Google as an example. I would much rather block them all, while avoiding the risk that happening to visit one of them breaches privacy/security for the rest of the browsing session.
This is a common misconception. NoScript is "whitelist-based". What this means is that the minute you install it, every script on the planet is blocked by default.
(Except for the sites in the default whitelist, some of which are required, and some of which are for the convenience of novice users who will be upset if their Yahoo mail or Gmail doesn't work right away, etc. You can, and should, remove any that don't apply to you. NoScript Options > Whitelist.)

There is no need to mark a script as Untrusted to block it. It will not run unless you specifically mark it as "Allow" or "Temporarily Allow".
"Allow" adds it to your permanent whitelist. Temp-allow lasts until the browser is closed, so if you leave the site you temp-allowed, it's safer to open NS menu and click "Revoke temporary permissions" first.

The main purpose of Untrusted is to keep those pesky sites from constantly cluttering your menu of scripts allowed and blocked. When marked as Untrusted, the script name shows only if you open the menu and point to Untrusted, assuming that it's trying to run at that site.

Does this reassure you?

All of this is in the NoScript FAQ, which is in the highest box on the home page (Board Index) of this site -- with the note "Please read before asking." ;)
Perhaps you could browse the FAQ at your leisure, especially 1.5 and 1.11.

Or read the NoScript Quick Start Guide, which is highlighted at the top of the forum, "NoScript Support".
Trying to walk the line between effort and security.
Using a machine gun to kill a fly. :) It's much less effort than it looks like, and ABE can be reserved until after the basics become easy and natural.
Surrogates are not run on whitelisted sites, right?
It's not the site you're on; it's the site trying to run the script for which there is a surrogate.
Example:
I have a site I like, friend.com. It's in my whitelist -- all scripting from friend.com is allowed.
But friend.com tries to run google-analytics.com, which I don't care to do. So long as you don't allow or temp-allow google-analytics.com, your friendly NoScript will *automatically* run the harmless surrogate in place of the privacy-invading data-miner. You don't have to do *anything.*

Is this an amazing tool, or what? :D

For the list of sites that have default surrogates, enter about:config in the Address bar, then enter
surr
in the Filter bar. That will bring up the list of script sources for which there are surrogates. If the name isn't apparent on the left side, look in the right side, under "Value". More in this sticky thread that is at the top of NoScript Support, Development, and General.
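For illustration, the surrogate entries come in pairs along these lines (the pref names and values below are from memory and may differ between NoScript versions, so treat them as assumptions to verify in your own about:config):

Code: Select all

# Illustrative sketch only -- check the actual entries in about:config
noscript.surrogate.ga.sources       .google-analytics.com
noscript.surrogate.ga.replacement   (a tiny stub script that defines the
                                     objects pages expect, e.g. _gaq)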
Sorry for being unclear.
No problem. If you still have questions after reading the above sources, please use the forum Search feature to see if anyone else has already asked it. If not, please feel free to ask anytime. 8-)

Edited to add:
3rd party domains that have been whitelisted by visiting them.
Visiting a site does not whitelist it. Again, *all* sites (with the exceptions in the default whitelist) are blocked. Only you can whitelist a site.
Is this now clear?
Mozilla/5.0 (Windows NT 5.1; rv:8.0.1) Gecko/20100101 Firefox/8.0.1
He who does not know

Re: combination of Sandox and Anonymize actions?

Post by He who does not know »

Tom T. wrote: This is a common misconception. NoScript is "whitelist-based". What this means is that the minute you install it, every script on the planet is blocked by default.
(Except for the sites in the default whitelist, some of which are required, and some of which are for the convenience of novice users who will be upset if their Yahoo mail or Gmail doesn't work right away, etc. You can, and should, remove any that don't apply to you. NoScript Options > Whitelist.)

There is no need to mark a script as Untrusted to block it. It will not run unless you specifically mark it as "Allow" or "Temporarily Allow".
"Allow" adds it to your permanent whitelist. Temp-allow lasts until the browser is closed, so if you leave the site you temp-allowed, it's safer to open NS menu and click "Revoke temporary permissions" first.

The main purpose of Untrusted is to keep those pesky sites from constantly cluttering your menu of scripts allowed and blocked. When marked as Untrusted, the script name shows only if you open the menu and point to Untrusted, assuming that it's trying to run at that site.
I know that NS is whitelist-based, not blacklist-based. However, I may have been unclear about the fact that I am using NS with "temporarily allow 2nd-level top domain by default" (I mentioned it in the first post, possibly not clearly enough).

Hence, when I surf to an unreliable site that uses tracking/privacy/exploit scripts on the same domain as the website, I am exposed to those scripts for my whole browsing session, no matter which domain I visit, as long as it makes requests to that domain.
I find it too tedious to revoke temporary permissions after leaving every domain, or to shut down the browser after every domain.
Using a machine gun to kill a fly. :) It's much less effort than it looks like, and ABE can be reserved until after the basics become easy and natural.
This is why I want to use ABE: it overrides the too-permanent whitelisting NS does the way I have set it up.
Not allowing scripts at all by default is also too tedious.
Allowing scripts globally is not tedious at all, but it is slightly too insecure for me; if there is no other choice, I can accept it. I am currently using Firefox this way, together with ABE Anonymize, and it is working great, though perhaps a bit too insecure.

When I ran "temporarily allow 2nd-level top domain by default" with Sandbox and Anonymize in ABE, it sadly broke the ability to run Flash from 3rd-party domains (which is actually a feature of Sandbox).

If I run "temporarily allow 2nd-level top domain by default" without ABE, all works fine, until I happen to surf to a domain I don't want session-allowed on all websites, third-party or not. (Let's pretend Google has tracking scripts on google.com as well: I use Google to search, but I don't want Google tracking to run on downwithsomeoppressivegovernment.now.)

Thank you for all the help and answers. I will study the text you linked to, in order to see if I have missed anything.
Mozilla/5.0 (Ubuntu; X11; Linux x86_64; rv:8.0) Gecko/20100101 Firefox/8.0
MacOtaku
Posts: 5
Joined: Wed May 19, 2010 2:44 am

Re: combination of Sandox and Anonymize actions?

Post by MacOtaku »

[Oops: I just realized I typo-ed "Sandbox" in the subject of the original post.]

He who does not know and I seem to have distinct but overlapping sets of goals here. I keep NoScript in its default whitelist mode, but I'm not just looking to protect against third-party scripts on whitelisted second-party sites (where the first party is me and the second party is the top-level site in a tab), which whitelist mode would mostly accomplish anyway (see * at bottom).

Since he already used Google, I'll use Amazon as one example of what I'd like to accomplish: I want all the features of Amazon.c{a,om,o.uk} to work correctly while I'm shopping on Amazon. I also want to see the books referenced on sites I read, because they are relevant to the subject and recommended by the site authors (and while these authors have financial incentives to recommend some book, they have no incentive to recommend one title over another, so they recommend books they read and liked, which helps me find good books). I also want all the features of the sites I'm viewing to work. I do not, however, want Amazon to ever know which sites I visit, even while logged into Amazon, except when I follow an affiliate link from a site, which I do in order to support the site author.

So, I want JavaScript on site A to work on site A, and JavaScript on site B to work on site B; I want site B to load non-active content from site A, and want to follow links from B to A, without site A getting any cookies or other unnecessary data. (I've taken care of the "Referer" with another extension, and can rewrite GET requests to omit unneeded parameters, if need be, with another extension.) I think I can accomplish most of this, on a per-site basis, with something like:

Code: Select all

# Anonymize requests to Amazon from other sites:
Site .amazon.ca .amazon.com .amazon.co.uk .images-amazon.com .ssl-images-amazon.com
Accept from SELF++ .amazon.ca .amazon.com .amazon.co.uk
Deny INCLUSION(SCRIPT, OBJ, OBJSUB)
Sandbox INCLUSION(SUBDOC)
Anonymize
Maybe I'm misunderstanding, but consider that Sandbox line, which is meant to allow a frame (with text and an image, including a link) to load from Amazon while Amazon is a third party, yet prevent that third-party Amazon frame from loading JavaScript from Amazon. It has the consequence of allowing cookies from Amazon to be sent to Amazon with the request for the frame. I don't want the request for the frame to carry cookies, as that would facilitate correlating the request for the frame with my Amazon account. I only want Amazon to know about another site I visit in one case: when I follow an affiliate link in order to buy a book and give the recommending site author credit for pointing it out to me, so the site author gets some help with the cost of their site (in return for doing me the favour of recommending a good book on the subject I'm reading about).

I have similar rules for search engines, social networks, video-sharing sites, and other sites with pervasive third-party content, which I want to work only on my terms. Ideally, I want to be able to make this work by default, so I don't need to create new rules for every site for which I want to allow active content and cookies only when they are the second party. (I do understand, of course, that in cases like Amazon, where one company uses multiple domain names, I may have to write a rule, but there's still the main issue of which action to use when writing such rules.)

Privacy is not the only goal here, but I thought that would be the easiest example use-case to explain. The general idea is: I want sites, by default, only to be able to make upload-data-less, Authorization-less, Cookie-less third-party GET, HEAD, and OPTIONS requests for non-active content, even when upload data, Authorization, Cookies, active content, and POST (PUT, DELETE, etc.) requests would be allowed if the third-party site were instead the second party (i.e., the top-level site). This may, for example, mitigate the exploitability of vulnerabilities in scripts which are allowed to run on a given site: however poorly the site's scripting may be written, I still want the site to work (and unfortunately many poorly-written sites break altogether without JS enabled), and I don't generally expect most sites to attempt to exploit themselves. For the relatively few sites which have active content that I want to run on third-party sites, I can make exceptions. (A sketch of what this default might look like follows.)
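As a hedged sketch, that default might read as a single ABE rule like the one below, if the two actions could be combined on one matching request (per Giorgio's reply at the end of this thread, the grammar already permits this; only the implementation doesn't yet honor it):

Code: Select all

# Hypothetical default: third-party requests to every site are stripped
# of credentials (Anonymize) and denied active content (Sandbox), while
# first-party traffic is untouched thanks to Accept from SELF++
Site ALL
Accept from SELF++
Sandbox
Anonymize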

* Another issue this would be helpful for, which whitelist mode doesn't entirely protect against, is preventing a malicious script which finds its way onto one trusted site from affecting an account on another trusted site. For example, if a successful exploit against a vulnerability in Facebook, for which I have JS enabled (so the site will work), manages to get a malicious script onto FB, I don't want that script to be able to exploit a vulnerability which might exist on (say) Amazon, even if I happen to be logged into Amazon while reading my FB news feed. (Again, I could probably accomplish this on a site-by-site basis, but I'd like to make this work in the default case. Also, one other complication in this example: I do want to be able to follow links from FB to Amazon.) If the request from FB doesn't come with authentication (Anonymize), and can't cause any scripts to be loaded from Amazon (Sandbox), then the exploit of the hypothetical FB vulnerability shouldn't be able to lead to an exploit of the hypothetical Amazon vulnerability. Would it be all that difficult to provide a means to combine the restrictions of Sandbox and Anonymize, so we keep all our bases covered? I think it would be good to be able to extend the whitelist pattern from per-site to per-way-of-interaction with a given site.
Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:8.0.1) Gecko/20100101 Firefox/8.0.1
Tom T.
Field Marshal
Posts: 3620
Joined: Fri Mar 20, 2009 6:58 am

Re: combination of Sandox and Anonymize actions?

Post by Tom T. »

He who does not know wrote: I know that NS is whitelist-based, not blacklist-based. However, I may have been unclear about the fact that I am using NS with "temporarily allow 2nd-level top domain by default" (I mentioned it in the first post, possibly not clearly enough).
Not at all. I wasn't so much wading into the ABE discussion per se. What caught my eye was your stated goal, and on seeing all the ABE stuff, I admit I kind of glazed over it and went to, "Why is he even talking about ABE for a simple scripting issue?" So yes, I missed the 2LD line.
He who does not know wrote: Hence, when I surf to an unreliable site that uses tracking/privacy/exploit scripts on the same domain as the website, I am exposed to those scripts for my whole browsing session, no matter which domain I visit, as long as it makes requests to that domain.
Not exactly -- more in a minute. But there are good reasons not to use the TA-2LD, and it's less effort than you might think to do it the safer way.
I find it too tedious to revoke temporary permissions after leaving every domain, or to shut down the browser after every domain.
Yep. BUT -- google-analytics.com is NOT a sub-domain of google.com. If it were analytics.google.com, it would be.

Real example:
I use Yahoo mail. For best security, I do *not* allow the 2LD yahoo.com, either by whitelisting, or your way, by checking the box on the General tab.

Yahoo.com includes the following *actual* sub-domains. (Third-level domain names.)
mail.yahoo.com
finance.yahoo.com
news.yahoo.com
Etc.

I find that all I need to whitelist (once, then never again) is:
mail.yahoo.com
mail.yimg.com (also *not* a subdomain of yahoo, but a separate base 2nd-LD.)

For certain rarely-used features, I may temp-allow yahooapis.com -- *also* not the same domain as yahoo, but I'm sure you get that now.

Google works fine without scripting allowed. You lose the auto-suggest feature, but you also stop them pinpointing your location much more accurately than Firefox's controversial geolocation feature does, even with the latter disabled (about:config > geo.enabled; if true, double-click to toggle it to false).
But if you allow Google's scripting, they can nail you much more accurately, and both posters here seem very privacy-conscious, as am I.

You'll find that the list of sites at which you really *need* the scripting, regardless of whether the site would like to run it or how much you trust it, is much smaller than you think. My whitelist currently has twelve (12) entries, not counting about:(something). And the Untrusted list is about two lines long, that's all.
All else is blocked by default. How hard is this, once you set up your favorite trusted sites, versus working out complex ABE rules?
Tom T. wrote:Using a machine gun to kill a fly. :) It's much less effort than it looks like, and ABE can be reserved until after the basics become easy and natural.
He who does not know wrote: This is why I want to use ABE: it overrides the too-permanent whitelisting NS does the way I have set it up.
See above. Don't whitelist what doesn't need to be.
He who does not know wrote: Not allowing scripts at all by default is also too tedious.
If all else fails, read the instructions. ;) ;) ;) It is something that can be learned very quickly, after which there is no tedium.

I face tedium, because in doing support here, users ask me to visit hundreds of sites that I would never visit on my own, and I do need to figure out what's needed there. But this gets kind of intuitive after a while, so unless you plan to join the support team :) , or visit fifty *new* sites every day, it just isn't that hard. I'll bet it took me longer to write the NoScript Quick Start Guide, get feedback from the rest of the team, and post it than it would take you to read and absorb it. ;)

Giorgio put many hours into the FAQ, and many more into updating it as needed, and thousands of hours into building, maintaining, and enhancing a free tool. You can't spend half an hour or so some day browsing the FAQ?
He who does not know wrote: Allowing scripts globally is not tedious at all, but it is slightly too insecure for me;
Defeats most of the purpose of NS, but still offers protections you can't get in IE.
He who does not know wrote: if there is no other choice, I can accept it.
Don't.
He who does not know wrote: I am currently using Firefox this way, together with ABE Anonymize, and it is working great, though perhaps a bit too insecure.
If you're happy with your ABE settings, just add the very simple precautions above -- the ones which were the original basis for NoScript.
He who does not know wrote: When I ran "temporarily allow 2nd-level top domain by default" with Sandbox and Anonymize in ABE, it sadly broke the ability to run Flash from 3rd-party domains (which is actually a feature of Sandbox).
Get RequestPolicy. Use no default whitelist. Uncheck that 2LD option, as discussed. When SiteX.com offers a YT video, TA it in RequestPolicy; or, since RP allows you to specify both origin and destination (which will be coming in NS 3.x -- a few hundred more hours by Giorgio, and it will be out soon), you can permanently allow requests from SiteX to YT, or wherever the video is hosted. YouTube scripts are in the default whitelist, though not actually needed for basic service. (Advanced features may require them.) If the video doesn't play, open the NS menu, and if script from the video site is blocked, allow it. If you trust both, make that a permanent allow, and there's something else you'll never have to do again.
If I run "temporarily allow 2nd-level top domain by default" without ABE, all works fine, until I happen to surf to a domain I don't want session-allowed on all websites, third-party or not. (Let's pretend Google has tracking scripts on google.com as well: I use Google to search, but I don't want Google tracking to run on downwithsomeoppressivegovernment.now.)
See above. google-analytics.com is blocked by default (even at Google! :D ), and the safe surrogate runs by default. Why do you feel the burden of having to do something?
He who does not know wrote: Thank you for all the help and answers. I will study the text you linked to, in order to see if I have missed anything.
No offense is intended in saying that most of it was missed. ;) I think you'll rethink this whole approach after having the facts.

Would you pilot an airplane without ever taking a flying lesson, then once you got in the air (that part's easy; it's landing that's tricky), try to figure out the various controls, how they function, what is safe and what isn't?
MacOtaku wrote: * Another issue this would be helpful for, which whitelist mode doesn't entirely protect against, [[WRONG]] is preventing a malicious script which finds its way onto one trusted site from affecting an account on another trusted site. For example, if a successful exploit against a vulnerability in Facebook, for which I have JS enabled (so the site will work), manages to get a malicious script onto FB, I don't want that script to be able to exploit a vulnerability which might exist on (say) Amazon, even if I happen to be logged into Amazon while reading my FB news feed. (Again, I could probably accomplish this on a site-by-site basis, but I'd like to make this work in the default case.)
One hates to sound like a broken record, but our savior Signore Maone has already accomplished all of that for you, and more. (Another thousand or so hours of his time. I did the math once -- he works 36 hours/day.)

What you have described is a classic Cross-Site Scripting Attack, or XSS, and NoScript not only protects you at Amazon by default (is anyone getting tired of the word "default" in relation to NS protections yet?), it will actually protect you at Facebook as well -- by default.

So again sounding like a broken record, please read the XSS FAQ. Then come back here and let us know if you feel well-protected. :ugeek:

I'm not touching the ABE part. It's best to learn to walk before trying to fly. :D
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.24) Gecko/20111103 Firefox/3.6.24
He who does not know

Re: combination of Sandox and Anonymize actions?

Post by He who does not know »

Tom, please understand that I am extremely grateful for your support and the hard work of everyone here; the very least I can do is state a clear thank-you to all of you! Especially Giorgio Maone: thank you!

I had a hunch about those facts from the get-go, but thanks to you I now have them more clearly stated.

Either I do not allow scripts anywhere, and manually whitelist all the domains that actually need whitelisting during my browsing. It is of course open to argument how many are actually needed; that depends on browsing habits, including which features are necessary and how many new domains are visited each day.

Or I accept the security/privacy risks involved in visiting (whitelisting) domains that host privacy/security-compromising scripts on the main-level domain, while using the default 2nd-level domain (TA-2LD) feature.
I have to accept that all domains I visit are whitelisted throughout the whole browsing session, not just during the visit to said domain, and that those domains stay whitelisted while I visit completely different 2nd-level domains.
There is no way to make NS whitelist (by default) a certain domain only during the visit to that domain and revoke the temporary allow after leaving it. The XSS feature of NS protects against the majority of XSS risks that could arise from this blanket whitelisting.

What I would like to achieve could be achieved by using ABE combined with default TA-2LD, at the cost of losing the ability to view 3rd-party Flash.

Using Google was a flawed example. The example I wanted to use was a website that hosts, on its own domain, scripts or privacy-violating content meant to run on 3rd-party websites.

What I would like to do is pretty much what NS is about NOT doing in the first place, possibly because it is not sane: a general filter applied to all websites, removing only what I do not want (Anonymize and Sandbox, with exceptions), rather than specific rules for specific websites. To put it in a clear statement: I am doing it the wrong way and I currently like it :).

RequestPolicy is definitely an option, though it also uses specific rules for specific domains (whitelisting).

I will not waste your or anyone else's time by asking to have this feature added, since it seems like a very uncommon approach, nor by arguing that it is a sane method. Your time is much better spent helping people who want to use NS in the way it was constructed for.

Let me repeat. I am extremely grateful for this free product and I enjoy the security it gives me every single day.

To quote Vash the Stampede: Love and peace!

Thank you!
Mozilla/5.0 (Ubuntu; X11; Linux x86_64; rv:8.0) Gecko/20100101 Firefox/8.0
Tom T.
Field Marshal
Posts: 3620
Joined: Fri Mar 20, 2009 6:58 am

Re: combination of Sandox and Anonymize actions?

Post by Tom T. »

He who does not know wrote: Tom, please understand that I am extremely grateful for your support and the hard work of everyone here; the very least I can do is state a clear thank-you to all of you! Especially Giorgio Maone: thank you!
And thank you for your kind words. :)
Or I accept the security/privacy risks involved in visiting (whitelisting) domains that host privacy/security-compromising scripts on the main-level domain, while using the default 2nd-level domain (TA-2LD) feature.
I have to accept that all domains I visit are whitelisted throughout the whole browsing session, not just during the visit to said domain, and that those domains stay whitelisted while I visit completely different 2nd-level domains.
Have you considered a slightly more-restrictive choice?
NS Options > General > Uncheck the "TA top-level sites by default", and check "Allow sites opened through bookmarks."

The idea is that we bookmark only the sites we're likely to revisit some number of times (and in which we usually have some level of trust, or at least more than in a random site that TA-2LD allows). So when you browse to a new site, or click a link to get to a new site, the "auto-allow top-level" does NOT apply. They stay default-denied, which gives you a chance to look the site over before deciding whether you need scripts there, and whether it deserves your trust.
There is no way to make NS whitelist (by default) a certain domain only during the visit to that domain and revoke the temporary allow after leaving it.
This feature *may* have been requested before, but I like the idea: "Revoke temporary permissions upon closing connection to site". (Last tab closes, if you have multiple open.)
The XSS feature of NS protects against the majority of XSS risks that could arise from this blanket whitelisting.
Practically *all* of them. And when someone does invent a new way to exploit this (or any other exploit), Giorgio drops all else (hopefully not his baby :o ) and rushes to add protection against it to NoScript. Which is why auto-updating NS is important; or, even better, get on the latest development-build channel. You'll be among the first to be protected, and feedback from those who test release-candidate builds is very much appreciated.
Using Google was a flawed example. The example I wanted to use was a website that hosts, on its own domain, scripts or privacy-violating content meant to run on 3rd-party websites.
Could you please name a few sites, so I can see for myself what it is you're facing, and what you're trying to accomplish? NS is very capable of being fine-tuned.
I am doing it the wrong way and I currently like it :).
"Wrong" is a judgment or opinion. If you're happy, we're happy. Just truing to show easier ways to accomplish this, and possibly increase your safety and flexibility (e. g., to view 3rd-party Flash video).
RequestPolicy is definitely an option, though it also uses specific rules for specific domains (whitelisting).
I've been using both for years -- because Giorgio recommended it to me. :) I tend to listen to his advice. :ugeek:
Let me repeat. I am extremely grateful for this free product and I enjoy the security it gives me every single day.

To quote Vash the Stampede: Love and peace!

Thank you!
And the same nice wishes to you.
- Tom :)
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.24) Gecko/20111103 Firefox/3.6.24
MacOtaku
Posts: 5
Joined: Wed May 19, 2010 2:44 am

Re: combination of Sandox and Anonymize actions?

Post by MacOtaku »

I tried several times to post a reply, but it kept getting whacked by a false positive in the spam filter. I put my reply on Pastebin, at: Re: combination of Sandbox and Anonymize actions?
Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:8.0.1) Gecko/20100101 Firefox/8.0.1
Tom T.
Field Marshal
Posts: 3620
Joined: Fri Mar 20, 2009 6:58 am

Re: combination of Sandox and Anonymize actions?

Post by Tom T. »

MacOtaku wrote:I tried several times to post a reply, but it kept getting whacked by a false positive in the spam filter. I put my reply on Pastebin, at: Re: combination of Sandbox and Anonymize actions?
I'm not sure why, so I copy/pasted your post from there, as raw data.
MacOtaku wrote: I guess the specific examples I chose were less than ideal. I selected them for familiarity, and because I am not aware of their currently having vulnerabilities of the class I brought up, which were meant hypothetically, not literally.

I think the general idea, to provide a means to allow only "safe" requests to passive resources without identifying headers, is still valuable. I know NS has heuristic XSS protection, based on preemptively sanitizing suspicious-looking text patterns, to prevent the consequences of input sanitization failures common to poorly-written sites. However, heuristics can fail, and there are other classes of exploits besides XSS.

I'll try a new example, but I'll stick to generalities this time, because I think it would be irresponsible for me to point out specific cases where this is currently possible: Let's say I'm logged into site X, which is vulnerable to CSRF. I also visit site Y, into which someone has inserted an exploit of the CSRF vulnerability in site X. (This could resemble an ordinary, benign request; it could just be a question of query string.) If the request to site X from site Y were to come with the Cookies (or Authorization) used by site X to authenticate my session, then the attack could succeed; but, if the third-party request to site X were to be Anonymized, and hence lack the authenticating Cookies (or Authorization), then the attack would (almost certainly) fail.

As another example: maybe site X also has JS which is permitted (necessarily, for the site to work), but which contains a logic error that is exploitable through a request containing a fragment identifier with Ajax parameters of the same sort as are used in the normal operation of site X. (This again could resemble an ordinary, benign request; it could just be a matter of different parameter values.) If, on site Y, a frame is injected containing a page on site X with an exploitative fragment identifier, then, since JS is permitted on site X, the exploit will cause the script on site X to perform actions specified by the attack payload, such as generating malicious requests. If third-party requests to site X from site Y were to be Sandboxed, then the attack could not succeed, as the maliciously-inserted frame would contain a page on site X which (having been loaded by a third-party request) could not run JS.

In both these examples (even if both were attempted against the same site at once), if third-party requests to Site X were both Anonymized and Sandboxed, then both attacks would fail. Being able to so restrict third-party requests would block some classes of attack vectors altogether, without creating the need to allow sites to load images and other heavier-than-text assets from content distribution networks on a site-by-site basis.
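For concreteness, here is a minimal sketch of the kind of rule the CSRF example implies; .siteX.example is a placeholder, and only Anonymize is used, since that part works on its own today:

Code: Select all

# Third-party requests landing on site X arrive without Cookie or
# Authorization headers, so a forged cross-site request is not
# authenticated and the CSRF attempt fails
Site .siteX.example
Accept from SELF++
Anonymize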
I still believe that NoScript's XSS protection alone would defeat the attacks you described, but I believe it's time for Giorgio himself to address your specific scenarios in more detail. I'll ask him to respond at his earliest convenience.

In the meantime, please study http://noscript.net/features#xss

There is more to it than mere heuristics (although those are included).
Whenever a certain site tries to inject JavaScript code inside a different trusted (whitelisted and JavaScript enabled) site, NoScript filters the malicious request neutralizing its dangerous load.
That's not heuristic; it's merely detecting an improper origin trying to inject code into your current page.

The heuristics come into play when one of your *trusted* sites tries to inject code into another trusted site, and you can fine-tune them:
Furthermore, NoScript's sophisticated InjectionChecker engine checks also all the requests started from whitelisted origins for suspicious patterns landing on different trusted sites: if a potential XSS attack is detected, even if coming from a trusted source, Anti-XSS filters are promptly triggered.

This feature can be tweaked by changing the value of the noscript.injectionCheck about:config preference as follows:

0 - never check
1 - check cross-site requests from temporary allowed sites
2 - check every cross-site request (default)
3 - check every request
And
NoScript also protects against most XSS Type 2 (persistent) attacks: in fact, the exploited vulnerabilities usually impose space constraints, therefore the attacker is often forced to rely on the inclusion of external scripts or IFrames from origins which are already blocked by default.
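(As an aside, a hedged example of pinning that preference through the standard Firefox user.js mechanism, using the pref name quoted above:)

Code: Select all

// user.js -- set NoScript's InjectionChecker to "check every request"
user_pref("noscript.injectionCheck", 3);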
But still, I will ask Giorgio to fill in whatever I've missed here.
Mozilla/5.0 (Windows NT 5.1; rv:8.0.1) Gecko/20100101 Firefox/8.0.1
Giorgio Maone
Site Admin
Posts: 9454
Joined: Wed Mar 18, 2009 11:22 pm
Location: Palermo - Italy
Contact:

Re: combination of Sandox and Anonymize actions?

Post by Giorgio Maone »

Please notice that ABE's Anonymize and Sandbox were designed to allow those who can bear the burden to protect themselves against the classes of attack which you outlined in your pastebin piece.
The fact that they cannot currently be combined is a bug in the implementation (not even in the grammar) and will eventually be fixed, even though there are currently many other priorities.
Thank you for reporting.
Mozilla/5.0 (Windows NT 5.2; WOW64; rv:8.0) Gecko/20100101 Firefox/8.0