Hacker News

Do you have a recommendation to address #4? That seems like an intrinsic problem for web apps, see also ProtonMail.



Meta / WhatsApp have developed their own solution for the WhatsApp web client (WhatsApp is end-to-end encrypted): https://engineering.fb.com/2022/03/10/security/code-verify/

It takes the form of a browser extension the user installs that tells them whether the JavaScript code is what it's expected to be. It checks this by comparing the code's hash against an endpoint hosted by Cloudflare. WhatsApp can publish new versions to Cloudflare, but it can't modify them afterwards.
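The check itself is conceptually simple. Here's a rough Go sketch of the idea (not Code Verify's actual implementation — the hash algorithm and manifest format here are stand-ins):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// expectedHash returns the hex-encoded SHA-256 digest of a script:
// the kind of value a publisher would list in a hash manifest.
func expectedHash(script []byte) string {
	sum := sha256.Sum256(script)
	return hex.EncodeToString(sum[:])
}

// verify reports whether the script the browser actually received
// matches the hash published out-of-band (e.g. via Cloudflare).
func verify(script []byte, published string) bool {
	return expectedHash(script) == published
}

func main() {
	served := []byte("console.log('hello');")
	manifest := expectedHash(served) // in reality fetched from a third party

	fmt.Println(verify(served, manifest))                 // prints true
	fmt.Println(verify([]byte("steal(keys);"), manifest)) // prints false
}
```

The point is that the party hosting the manifest (Cloudflare) and the party serving the code (WhatsApp) are different, so neither can silently swap the script on its own.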

In this case you end up trusting Cloudflare in addition to WhatsApp, but (as an amateur) I don't see why this couldn't be adapted into a standard that works with something like a blockchain or certificate authorities (or even a git host, to go along with public source-code auditing). I think something like this should become a standard and be built into browsers, but currently not many companies are using any solution at all.

The only other implementation of a solution to this that I found, which I think is pretty similar, is EteSync's PGP-signed-webpages library + browser extension (https://stosb.com/blog/signed-web-pages/), which lets the developer PGP-sign web pages so you know the code has not been modified by a malicious server without the developer's approval. So maybe you can use that in your project, or there are probably other solutions that I haven't found.

I think this problem might be called "code verification" in cryptography, if you want to look into it further.


You're very right! Luckily, we can resolve the vulnerability in this instance, although it's a challenging problem to solve for webapps in general.

The technical explanation for our issue is that the client-side JavaScript in our webapp is trusted. To quote the late Ross Anderson [0, pg. 13], "a trusted system or component is one whose failure can break the security policy." In this case, our security policy is that the server must not be capable of viewing our screenshots. Our goal is to make that trusted JavaScript more trustworthy: that is, closer to a system that can't fail.

We're at an advantage in this case: there's an open-source application on GitHub with eyeballs[1] on it that users must run on their endpoint machines. Given that we already have source-available local code running, we could instead serve the UI from the local Go application and use CORS[2] to permit access to the remote server. If the local application is trustworthy, and we're only sending data (not fetching remote JavaScript), then the local client UI is trustworthy and won't steal your keys. If users run binaries directly from 1fps (as opposed to building from source), then you would want some multi-party verification that those binaries correspond directly to the associated source [3].

ProtonMail is almost surprising: it's supposed to be end-to-end encrypted, but it isn't in the presence of a malicious server. If, say, a government order compelled ProtonMail to deploy a backdoor only when a particular client visited the site, most users would be unaffected and the likelihood of discovery would be low.

[0]: https://www.cl.cam.ac.uk/~rja14/book.html

[1]: https://en.wikipedia.org/wiki/Linus%27s_law

[2]: https://stackoverflow.com/a/45910902

[3]: https://en.wikipedia.org/wiki/Reproducible_builds


It seems the answer is "no" --- am I right to understand it that way?

Another attempt at compression: use a native app to serve the JavaScript for the web app, so you don't have to trust any server.

I don't mean to skip anything, it's just not clear to me how related some of it is, and whether it's lengthy just because it is, or because I'm missing something that's very significant (e.g. the cite + name + page # to help say "something we trust is something we rely on", or the link to Wikipedia as a source for "eyeballs make bugs shallow").


Links are just for reference, but the gist is: serve the webapp from the Go binary instead. The end-user already has to trust the Go binary, and if they need to they can look at the code once and confirm it's not vulnerable. I prefer this to browser extensions because the audit trail process from source to browser extension is less clear; even for open-source browser extensions, I still have to trust the author to not add code that isn't in the repository.


Apart from having a local binary / extension / some bookmark-URI magic, I don't think so.

A "lighter" alternative to a local binary is to have a local index.html and use SRI when linking to the remote scripts [0]. But that seems clunky as well...

[0]: https://developer.mozilla.org/en-US/docs/Web/Security/Subres...



