- htmx
- Phoenix LiveView
- Hotwire
Essentially, these JavaScript libraries exist to do two things:
1. They allow event handlers to be bound to HTML elements, but without letting the app specify any client-side event-handling code. Instead, the app specifies (through HTML attributes) the data that should be collected from the DOM; the framework submits this data to the server backend; and the backend responds with partial HTML, which replaces an app-specified (again, through HTML attributes) section of the DOM.
2. They allow the server backend to push page updates to the client, without any client-side interaction. This is accomplished by the client library holding open a WebSocket to the server; the server can push partial HTML down this socket, and the client library then applies the patch directly to the page's DOM. (Both mechanisms are sketched below.)
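To make this concrete, here is a minimal TypeScript sketch of that interaction model. The attribute names (data-get, data-target), the { selector, html } patch format, and the endpoint URL are hypothetical stand-ins (htmx, for example, uses hx-get/hx-target); this is not any of these libraries' actual API.

```ts
// Minimal sketch of the two behaviors; attribute names, patch format,
// and endpoint are hypothetical, not any real library's API.

// (1) Declarative request/swap: on click, fetch partial HTML from the
// backend and replace the app-specified target element's contents.
document.querySelectorAll<HTMLElement>("[data-get]").forEach((el) => {
  el.addEventListener("click", async () => {
    const url = el.getAttribute("data-get")!;
    const targetSelector = el.getAttribute("data-target") ?? "body";
    const response = await fetch(url); // data collected from the DOM goes here
    const partialHtml = await response.text();
    const target = document.querySelector(targetSelector);
    if (target) target.innerHTML = partialHtml; // server-rendered patch
  });
});

// (2) Server push: hold a WebSocket open; the backend pushes
// { selector, html } patches, applied directly to the page's DOM.
const socket = new WebSocket("wss://example.com/live"); // hypothetical endpoint
socket.addEventListener("message", (event) => {
  const patch = JSON.parse(event.data) as { selector: string; html: string };
  const target = document.querySelector(patch.selector);
  if (target) target.innerHTML = patch.html;
});
```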
Other than this, they do nothing else. As long as no other JS is loaded into the page to interfere with their operation, having such a JS library loaded into the page doesn't enable the page to do anything else, either.
It is my belief — correct me if I'm wrong! — that as long as the partial HTML getting "patched in" by these JS helper libraries is itself filtered through NoScript (as it already would be, given how NoScript works), there should be no security or privacy implications to modifying NoScript to allow these libraries, and only these libraries, by default, even when "scripts" are disabled.
Motivation (use-case 1: regular web-browsing)
I use NoScript because I want to protect myself from malicious websites, advertising and analytics and browser fingerprinting, etc. But I don't have any ethical objection to "rich web-apps."
In general, I feel that we could get good UX from the web, without having to sacrifice security or privacy/anonymity. It shouldn't have to be a choice.
Currently, this is mostly impossible, as our only options with NoScript are either to disable all scripts, or to micromanage the enablement of scripts on a per-URL basis.
But "rich web-apps" are usually built as fat monoliths, with the fingerprinting/advertising/analytics integrated right into the same scripts that make the web-app function. And even if you can find a web-app that keeps its analytics out of its core scripts, an attacker who gains control of that web-app's backend can still just replace the script that lives at the "core script" URL — the one you already whitelisted in NoScript — with a malicious script at the same URL. And NoScript wouldn't notice!
These SSR helper libraries offer an alternative: they provide a very restricted client-side integration, which does not change often. Sites that use these SSR frameworks + helper libraries, as long as they load no other JS, are restricted to exactly the two benign behaviors described above. If NoScript can verify that a website is using only these SSR helper libraries, then NoScript can enable rich UX without enabling fingerprinting/advertising/analytics or malicious code.
This would lead to a virtuous cycle: with more websites developed that work and provide good UX under NoScript, more users would feel NoScript is reasonable to use; which would in turn mean that more web developers would look at their access stats and consider supporting NoScript. And supporting NoScript would no longer mean having to code an alternate, entirely JS-less website just for these users — but rather, switching from a "fat JS client" implementation to an "SSR framework with a framework-provided thin client-helper JS library" implementation, which would serve both regular and NoScript users equally well!
Motivation (use-case 2: Tor Browser)
Tor Browser includes NoScript. Tor Hidden Services are coded under the expectation that users not only have JS disabled, but are also unwilling to enable it — as arbitrary server-provided JS has high potential for de-anonymization/fingerprinting attacks.
This severely limits the user experience of Tor Hidden Services vs regular web services. (Have you used a Tor hidden-service "chatroom"? They suck! The whole content-frame must refresh at some interval, losing your scroll position.)
Developers are less likely to think it worthwhile to "port" their web service to a Tor hidden-service exposure, as they believe that, to actually serve Tor Browser users effectively, their website needs to be usable without JS enabled — and for some websites, this is too much of a challenge.
And this in turn makes Tor less powerful as an anonymity tool: with fewer truly-benign web services available over it, there are fewer reasons for governments and corporations not to just block Tor altogether.
If Tor Browser configured NoScript to allow "well-known scripts" under the default Safest security level, then you could have rich web-app experiences on Tor hidden-services, without subjecting Tor Browser users to fingerprinting/de-anonymization attacks.
Proposed Implementation
Split NoScript's "script" filtering into two categories: "well-known scripts" vs. "arbitrary scripts".
NoScript can (but doesn't have to) allow "well-known scripts" in the default profile.
NoScript would recognize a script as a "well-known script" not by its URL, but by its Subresource Integrity content-hash. (This means that only <script> tags with the integrity attribute set would be candidates for being considered a "well-known script.")
NoScript would maintain a (burned-in per version) set of "well-known script" content-hashes. If NoScript recognizes a <script>'s integrity hash as being in this set, then the script is considered a "well-known script", and filtered according to the current setting for "well-known scripts" rather than the setting for "arbitrary scripts."
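To illustrate the classification (this is a hypothetical sketch, not NoScript's actual internals; the hash-set contents and function name are invented):

```ts
// Hypothetical sketch of "well-known script" classification; the hash
// set contents and function name are illustrative, not NoScript's code.

// Burned in per NoScript version; real entries would be full SRI hashes.
const WELL_KNOWN_HASHES = new Set<string>([
  "sha384-PLACEHOLDER-htmx",              // placeholder, not a real hash
  "sha384-PLACEHOLDER-phoenix_live_view", // placeholder, not a real hash
]);

type ScriptClass = "well-known" | "arbitrary";

function classifyScript(script: HTMLScriptElement): ScriptClass {
  // Only <script> tags carrying an integrity attribute are candidates.
  // The browser itself verifies fetched content against this hash before
  // execution, so a matching attribute implies matching content.
  const integrity = script.getAttribute("integrity");
  return integrity && WELL_KNOWN_HASHES.has(integrity)
    ? "well-known"
    : "arbitrary";
}
```

A site would opt in just by loading its helper library with an SRI-annotated tag, e.g. <script src="/js/htmx.min.js" integrity="sha384-..." crossorigin="anonymous"> (path and hash here are placeholders); no NoScript-specific markup would be needed.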
Upon the release of new versions of these SSR helper libraries, the library developers — motivated, hopefully, by the desire to support NoScript users — would submit the new version of their script to NoScript, for validation as a "well-known script." (Sort of like how new versions of App Store apps are submitted to the App Store owner for validation.)
For incremental releases of a script, the validation process could use an automatic static-verification pass, run against the delta between the current and previous versions of the script, to ensure that no new browser-API surface has been added to the library since the last release. If this check passes, then the new script version likely doesn't need manual human verification before its hash is added to the burned-in list. (Which means this whole flow could be done in a GitHub Action — with the devs of a "well-known script" opening a PR that adds the script's content-hash and a canonical URI for it to a .js file; the CI action "testing" by fetching the .js file from the canonical URI, verifying it against the hash, and validating its API surface against the (also fetched) previous release; and then auto-merging the PR if these checks succeed. A rough sketch of such a check follows.)
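Here is a rough sketch of what that CI check could look like, assuming a Node 18+ runtime (for global fetch). The hash recomputation is straightforward; the API-surface delta is hand-waved as a naive substring scan against an illustrative list of browser APIs, where a real pipeline would walk an AST of both releases. Function name and parameters are hypothetical.

```ts
import { createHash } from "node:crypto";

// Illustrative, not exhaustive: browser APIs whose first appearance in
// a new release should force manual review.
const BROWSER_APIS = [
  "fetch", "WebSocket", "XMLHttpRequest", "localStorage",
  "indexedDB", "navigator", "document.cookie",
];

async function verifySubmission(
  canonicalUri: string,
  claimedSriHash: string,        // e.g. "sha384-<base64 digest>"
  previousReleaseSource: string, // fetched from the previous release's URI
): Promise<boolean> {
  const source = await (await fetch(canonicalUri)).text();

  // 1. Recompute the SRI hash and compare it to the one in the PR.
  const digest = createHash("sha384").update(source).digest("base64");
  if (`sha384-${digest}` !== claimedSriHash) return false;

  // 2. Naive API-surface delta: reject if the new release mentions a
  // browser API that the previous release never used.
  for (const api of BROWSER_APIS) {
    if (source.includes(api) && !previousReleaseSource.includes(api)) {
      return false;
    }
  }
  return true;
}
```

On success, the Action would auto-merge the PR; on failure, it would fall through to the manual review path described next.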
For brand-new scripts (not that much "growth" would be expected — there's not much proliferation of these SSR-framework client libraries!), or if the automated verification fails, it'd be up to a human on the NoScript team to look at the script and decide if it's safe. The human validator on the NoScript side would be free to be as paranoid and lazy as they want with these scripts; they could reject scripts that are not easily validated, thereby pushing most of the work of making such scripts easily validated onto the authors of these scripts. (Given the worldviews of the authors of these SSR frameworks, I think they'd be perfectly okay with this.)
Alternatives
Rather than implementing this functionality directly into the NoScript browser extension, I see a few other ways of accomplishing the same goals:
1. Create a separate browser extension that does the above. This extension would expect NoScript to be running, and would look for scripts that weren't loaded because of NoScript policy, but which fit its definition of "well-known scripts." It would then load these scripts into the page context.
IMHO, this approach is just the same thing with extra steps, for a worse end-user experience — browser users would need to manage two separate sets of policy, in two separate extensions, rather than "well-known scripts" being just another checkbox within NoScript.
2. Create a separate browser extension which ships with known-good versions of these SSR client libraries embedded into it. This extension would look for a specific HTTP response header (or HTML <meta http-equiv> equivalent) specifying an SSR framework by name + major client-library ABI version, and would respond by injecting its embedded copy of the appropriate client library into the page. (This is essentially a way to pretend that these SSR frameworks are base browser functionality; a sketch of the injection logic follows the tradeoffs below.)
This approach is less secure, in that there's no way for users to turn off this injection.
This approach would also require that the embedded versions of the client library be both backward- and forward-compatible, at the ABI level, with arbitrary server-side versions of the web frameworks of the same major version. (Which would be annoying to the devs, but good for ABI stability — and thereby for the number of eyes that see and audit each ABI version and its client-library implementations.)
This approach would move control over implementations of the client libraries to the developers of the extension, and/or to any browser that implements the functionality of the extension "in core" (think: Brave Browser's implementation of ad-blocking.) This could be good for security — proliferation of implementations usually is — but would also mean that there could be malicious or simply "leaky" implementations of the extension functionality that web backends could discover and exploit.
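For concreteness, here is a sketch of what such an extension's content script might do. The meta name (X-SSR-Framework), the framework@major key format, and the embedded-library table are all invented for illustration; no such header convention exists today.

```ts
// Hypothetical content-script sketch; meta name, key format, and the
// embedded-library table are invented for illustration.

// Known-good library source shipped inside the extension, keyed by
// "framework@major-ABI-version".
const EMBEDDED_LIBRARIES: Record<string, string> = {
  // "htmx@1": "<bundled, audited htmx 1.x source>", // placeholder
};

function injectDeclaredFramework(): void {
  // e.g. <meta http-equiv="X-SSR-Framework" content="htmx@1">
  const meta = document.querySelector<HTMLMetaElement>(
    'meta[http-equiv="X-SSR-Framework"]',
  );
  if (!meta) return;

  const librarySource = EMBEDDED_LIBRARIES[meta.content];
  if (!librarySource) return; // unknown framework/ABI version: do nothing

  // Inject the extension's own audited copy; the page never chooses the
  // script contents, it only names a framework + ABI version.
  const script = document.createElement("script");
  script.textContent = librarySource;
  document.documentElement.appendChild(script);
}

injectDeclaredFramework();
```

Note that the page can only name a framework and ABI version; the actual script bytes always come from the extension, which is exactly what makes the ABI-compatibility requirement above so strict.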
Personally, I don't think either of these approaches is better than just having NoScript support "well-known scripts" as an exception. Which is why I'm proposing what I'm proposing.
