He who does not know wrote: I mainly tried to use the solution as a way to stop tracking scripts from being run on other sites after happening to visit the tracking server in a browser session (Google).
I must be missing something here. If that was your purpose, why couldn't you just mark the tracking script Untrusted after happening to visit its server? It should show in NS Menu.
Which script? google-analytics.com? For many popular tracking scripts, including G-A, NoScript runs a Surrogate Script by default anyway. Please read that article when you have a chance.
Sorry if I'm missing some element here.
He who does not know wrote: You are absolutely right; however, I am simply too lazy to add hundreds of tracking/privacy/security-violating sites to the untrusted list. I just used Google as an example. I would much rather block them all, while avoiding the risk that happening to visit one of them breaches privacy/security for the rest of the browsing session.
Trying to walk the line between effort and security.
Surrogates are not run on whitelisted sites, right?
Sorry for being unclear.
3rd party domains that have been whitelisted by visiting them.
This is a common misconception. NoScript is "whitelist-based". What this means is that the minute you install it, every script on the planet is blocked by default.
(Except for the sites in the default whitelist, some of which are required, and some of which are for the convenience of novice users who will be upset if their Yahoo mail or Gmail doesn't work right away, etc. You can, and should, remove any that don't apply to you. NoScript Options > Whitelist.)
There is no need to mark a script as Untrusted to block it. It will not run unless you specifically mark it as "Allow" or "Temporarily Allow".
"Allow" adds it to your permanent whitelist. Temp-allow lasts until the browser is closed, so if you leave the site you temp-allowed, it's safer to open NS menu and click "Revoke temporary permissions" first.
The main purpose of Untrusted is to keep those pesky sites from constantly cluttering your menu of allowed and blocked scripts. When marked as Untrusted, the script name shows only if you open the menu and point to Untrusted, assuming that it's trying to run at that site.
Using a machine gun to kill a fly. It's much less effort than it looks like, and ABE can be reserved until after the basics become easy and natural.
# Allow Amazon scripts and objects to be included only by Amazon pages:
Site .amazon.ca .amazon.com .amazon.co.uk .images-amazon.com .ssl-images-amazon.com
Accept from SELF++ .amazon.ca .amazon.com .amazon.co.uk
Deny INCLUSION(SCRIPT, OBJ, OBJSUB)
He who does not know wrote: I know that NS is whitelist-based and not blacklist-based. However, I might have been unclear about the fact that I am using NS with "temporarily allow 2nd-level top domain by default" (I mentioned it in the first post, possibly not clearly enough).
He who does not know wrote: Hence, when I surf to an unreliable site that uses tracking/privacy/exploit scripts on the same domain as the website, I am open to those scripts during my whole browsing session, no matter which domain I visit, as long as it has requests to that domain.
I find it too tedious to revoke temporary permissions after leaving every domain, or to shut down the browser after every domain.
Tom T. wrote:Using a machine gun to kill a fly. It's much less effort than it looks like, and ABE can be reserved until after the basics become easy and natural.
He who does not know wrote: This is why I want to use ABE, since it overrides the too-permanent whitelisting NS does the way I have set it up.
He who does not know wrote: If I were to allow no scripts at all by default, that would also be too tedious.
He who does not know wrote: Allowing scripts globally is not tedious at all; however, that is slightly too insecure for me,
He who does not know wrote: but if there is no choice I can accept it.
He who does not know wrote:I am currently using FFx this way together with ABE anonymize and it is working great, but perhaps a bit too insecure.
He who does not know wrote: When I ran "temporarily allow top domain 2nd level by default" with Sandbox and Anonymize in ABE, it sadly broke the ability to run Flash from 3rd-party domains (which is actually a feature of Sandbox).
If I run "temporarily allow top domain 2nd level by default" without ABE, all works fine, until I happen to surf to a domain I don't want session-allowed on all websites, 3rd-party or not. (Let's pretend Google has tracking scripts on google.com as well. I use Google to search, but I don't want Google tracking to run on downwithsomeoppressivegovernment.now.)
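One way to sketch that goal with ABE (this is a hypothetical rule, not from the thread; the Google domains listed are examples, so adjust them to taste): requests that Google pages make to their own domains are accepted, while requests from any other site are stripped of cookies and authentication data.

# Hypothetical sketch: let Google pages use their own resources,
# but strip cookies/auth from requests other sites make to them
Site .google.com .googleapis.com .gstatic.com
Accept from SELF++
Anonymize

With a rule like this, visiting google.com directly works normally, while a tracking request embedded in some unrelated page arrives at Google without session-identifying headers.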
He who does not know wrote: Thank you for all the help and answers. I will study the text you linked to, in order to see if I have missed anything.
MacOtaku wrote: * Another issue this would be helpful for, which whitelist mode doesn't entirely protect against, is preventing a malicious script which finds its way onto one trusted site from affecting an account on another trusted site. For example, if a successful exploit against a vulnerability in Facebook, for which I have JS enabled (so the site will work), manages to get a malicious script onto FB, I don't want that script to be able to exploit a vulnerability which might exist on (say) Amazon, even if I happen to be logged into Amazon while reading my FB news feed. (Again, I could probably accomplish this on a site-by-site basis, but I'd like to make this work in the default case.)
He who does not know wrote: Tom, please understand that I am extremely grateful for your support and the hard work of everyone here; the very least I can do is state a clear thank you to all of you! Especially Giorgio Maone, thank you!
Or I accept the security/privacy risks involved with visiting (and thereby whitelisting) domains that host privacy/security-compromising scripts on the main-level domain, while using the default 2nd-level-domain temporary-allow (TA-2LD) feature.
I have to accept that all domains I visit are whitelisted throughout the whole browsing session, and not just during the visit of said domain. And those domains are still whitelisted while visiting completely different 2nd level domains.
There is no way to make NS whitelist (by default) a certain domain only during the visit to that domain and revoke the temporary allow after leaving it.
The XSS feature of NS protects against the majority of XSS risks that could arise from this blanket whitelisting.
Using Google was a flawed example. The example I wanted to use was a website that hosts, on its own main domain, scripts or privacy-violating content meant to run on 3rd-party websites.
I am doing it the wrong way, and I currently like it.
RequestPolicy is definitely an option; it also uses specific rules for specific domains (whitelisting).
Let me repeat. I am extremely grateful for this free product and I enjoy the security it gives me every single day.
To quote Vash the Stampede: Love and peace!
MacOtaku wrote:I tried several times to post a reply, but it kept getting whacked by a false positive in the spam filter. I put my reply on Pastebin, at: Re: combination of Sandbox and Anonymize actions?
MacOtaku wrote:I guess the specific examples I chose were less than ideal. I selected them for familiarity, and because I am not aware of their currently having vulnerabilities of the class I brought up, which were meant hypothetically, not literally.
I think the general idea — to provide a means to allow only “safe” requests to passive resources without identifying headers — is still valuable. I know NS has heuristic XSS protection, based on preemptively sanitizing suspicious-looking text patterns, to prevent the consequences of input-sanitization failures common to poorly-written sites. However, heuristics can fail, and there are other classes of exploits besides XSS.
I'll try a new example, but I'll stick to generalities this time, because I think it would be irresponsible for me to point out specific cases where this is currently possible: Let's say I'm logged into site X, which is vulnerable to CSRF. I also visit site Y, into which someone has inserted an exploit of the CSRF vulnerability in site X. (This could resemble an ordinary, benign request; it could just be a question of query string.) If the request to site X from site Y were to come with the Cookies (or Authorization) used by site X to authenticate my session, then the attack could succeed; but, if the third-party request to site X were to be Anonymized, and hence lack the authenticating Cookies (or Authorization), then the attack would (almost certainly) fail.
As another example: maybe site X also has JS which is permitted (necessarily, for the site to work), but which contains a logic error exploitable through a request whose fragment identifier contains Ajax parameters of the same sort as are used in the normal operation of site X. (This again could resemble an ordinary, benign request; it could just be a matter of different parameter values.) If, on site Y, a frame is injected containing a page on site X with an exploitative fragment identifier, then, since JS is permitted on site X, the exploit will cause the script on site X to perform actions specified by the attack payload, such as generating malicious requests. If third-party requests to site X from site Y were to be Sandboxed, then the attack could not succeed, as the maliciously-inserted frame would contain a page on site X which (due to having been loaded by a third-party request) could not run JS.
In both these examples (even if both were attempted against the same site at once), if third-party requests to Site X were both Anonymized and Sandboxed, then both attacks would fail. Being able to so restrict third-party requests would block some classes of attack vectors altogether, without creating the need to allow sites to load images and other heavier-than-text assets from content distribution networks on a site-by-site basis.
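As a rough sketch of what such a restriction might look like in ABE rule syntax — purely hypothetical, with example.com standing in for "site X"; whether Anonymize and Sandbox can be combined this way in one rule is precisely the open question of this thread:

# Hypothetical sketch: first-party requests pass normally; third-party
# requests to the site lose cookies/auth and may not run JS or plugins
Site .example.com
Accept from SELF++
Anonymize
Sandbox

Under this sketch, the CSRF attempt in the first example fails for lack of authenticating cookies, and the injected frame in the second example fails because the sandboxed page cannot run JS.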
Furthermore, NoScript's sophisticated InjectionChecker engine also checks all requests started from whitelisted origins and landing on different trusted sites for suspicious patterns: if a potential XSS attack is detected, even one coming from a trusted source, the Anti-XSS filters are promptly triggered.
This feature can be tweaked by changing the value of the noscript.injectionCheck about:config preference as follows:
0 - never check
1 - check cross-site requests from temporary allowed sites
2 - check every cross-site request (default)
3 - check every request
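For instance, the setting can be pinned persistently via a user.js line (the standard Firefox preference mechanism; the value 2 shown here is simply the default described above):

// user.js: have InjectionChecker examine every cross-site request (default)
user_pref("noscript.injectionCheck", 2);

Raising the value to 3 trades performance for checking same-site requests as well; lowering it to 0 disables the check entirely.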
NoScript also protects against most XSS Type 2 (Persistent) attacks: in fact, the exploited vulnerabilities usually impose space constraints, so the attacker is often forced to rely on the inclusion of external scripts or IFrames from origins which are already blocked by default.