Who proposed binary transparency for Gmail (the server)? The whole premise of tools like Signal is that enough happens in the client--which is amenable to binary transparency--that the server is immaterial.
Sure, if you attack that strawman sufficiently, the only alternative is running your own Sendmail. And yes, it could suck less.
My point was a bit bolder than that, though: not only does binary transparency (which, to be clear, is about client code, not the server itself) substantially mitigate the concerns you might have trusting a centralized service, but running a computing stack of millions of lines of code yourself does not at all mitigate those concerns.
Sendmail isn't "yours", even if it's open source. It's a bunch of random patches from random people, some of which will erroneously "fix" a Valgrind error and remove cryptographic entropy from the RNG, others of which will contain intentional backdoors, etc, etc. The fact that you're running it yourself in your closet just doesn't matter. Physical locality--and hardware ownership--don't make a difference.
But there's only one client. And that's deliberate: Signal doesn't allow federation, for philosophical, iteration-speed, and competitiveness reasons. So in practice, if the client changes you're out of luck. It's encryption that lasts as long as the service owner thinks it should last, but if it's going to be like that, they could also just promise not to look. The point of defining an adversary in cryptography is to stop them from doing things, not just noticing if they do.
Hence, we end up back at my point about threshold signatures. With a threshold software update system and a network of auditing firms, you can actually enforce good behavior on the adversary, because they no longer fully control the client and the client treats the server as untrusted. This is a form of decentralization, albeit very different from the classical kind (p2p networks, multiple clients, federation etc). It should also be amenable to rapid iteration.
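To make that concrete, here is a minimal sketch of the client-side check such a scheme implies: refuse to apply an update unless some quorum of independent auditor keys has signed the release manifest. This is a plain k-of-n multi-signature check rather than an aggregated threshold scheme like FROST, and the key type, threshold value, and manifest format are my own illustrative assumptions, not the proposal's actual design:

    # Sketch of a k-of-n auditor check on a software update manifest.
    # The manifest would typically be a hash of a reproducible build plus
    # release metadata. Keys, threshold, and encoding are illustrative.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    THRESHOLD = 3  # e.g. require 3 of 5 known auditing firms to approve

    def update_is_approved(manifest: bytes,
                           signatures: dict[str, bytes],
                           auditor_keys: dict[str, Ed25519PublicKey]) -> bool:
        """True only if at least THRESHOLD distinct known auditors signed the manifest."""
        valid = 0
        for auditor_id, sig in signatures.items():
            key = auditor_keys.get(auditor_id)
            if key is None:
                continue  # signature from an unknown party doesn't count
            try:
                key.verify(sig, manifest)
                valid += 1
            except InvalidSignature:
                pass  # a bad signature doesn't count toward the quorum
        return valid >= THRESHOLD

The point is that the vendor's own key alone is no longer sufficient, so a unilateral change to the client can be blocked before it reaches users.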
Totally agree that open source is often a bit of a black box and maybe nobody is auditing it properly. But if you do become aware of a bad change or backdoor, and the system is at least somewhat decentralized and a part of it is physically yours, you can just switch to a better implementation. No such luck with a centralized system: the client may be open source, but if only the official client is allowed to connect to the servers, it's irrelevant. And all E2E messengers today (except Telegram?) have such a policy.
> …there’s only one client… It’s encryption that lasts as long as the owner thinks it should last…
I think this is a big leap. It’s an open source client with verifiable builds. If a future version removes encryption, users will migrate to Wire, or Threema, or Matrix, or whatever. How does there being only one client make the encryption non-resistant to an adversary?
It’s genuinely unclear to me what you consider the difference between:
- Multiple open source clients, distributed as source or prebuilt binaries, possibly signed and verifiable, possibly downloaded over TLS, depending on what Linux distro you use. <— This is the state of the art for most open source software.
- A single open source client, developed in the open and distributed with verifiable builds where feasible via broadly reliable and unlikely-to-collude third-party distribution mechanisms (the app stores). <— This is how Signal works.
- A single open source client, developed in the open and distributed with verifiable builds everywhere. <— This would be nice, if the app stores would support it! (A rough sketch of the verification step follows below.)
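To spell out what "verifiable" means in the second and third options: anyone can rebuild the client from the published source and byte-compare the result with what the store actually shipped. A minimal sketch of that comparison, with hypothetical file paths; in practice you usually have to strip store-applied signing blocks and normalize timestamps before hashing:

    # Sketch: compare a locally produced reproducible build against the binary
    # actually distributed to users. Paths are hypothetical; real checks
    # normally normalize store-applied signatures and timestamps first.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    local = sha256_of("my-build/Signal-release-unsigned.apk")
    shipped = sha256_of("from-play-store/Signal-release-unsigned.apk")
    print("MATCH" if local == shipped else "MISMATCH")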
> The point of defining an adversary is…not just noticing if they do
Well, no, I disagree. As a prominent example, Certificate Transparency.
“Noticing”—aka accountability—is extremely useful. As I noted above, there is a multitude of free, secure Signal competitors. The only thing that keeps millions of users using Signal is trust based on accountability. Building better accountability mechanisms means that if Signal violates that trust, users can switch, before they discuss hiring hitmen/buying meth/overthrowing the government on an unencrypted channel.
> …threshold signatures…
TBH, I didn’t read this part of your post that closely. There is a lot of work being done on binary transparency, and your proposal seems broadly in the same direction, but I’m not following this space very closely.
What I don’t follow is why you think it requires multiple clients.
> …you can just switch to a better implementation…
Aha. And here’s your point—not that a single client thwarts encryption, but that a single client for a sufficiently sticky centralized service would reduce the ability of users (who may not care that much) to switch away, should the encryption be disabled.
And sure, I think this is true to a degree. But as AOL/Google Talk/WhatsApp-post-ToS-change can tell you, messaging is not that sticky!
Anyway, ultimately the market has spoken. Ping me when everyone is using XMPP/Matrix/whatever. In the meantime, I hope the Signal Foundation keep building usable secure products for people like me. When they sell out, I’ll switch to Threema.
You keep making arguments specific to Signal-the-app. I'm making arguments about E2E messenger encryption in general (and also servers, decentralization etc).
If Signal-the-app did a WhatsApp and started blocking forwarded messages, or blocking messages that contained a blacklisted domain etc, I think very little would happen and it might take a while for people to even notice. You're assuming it'd be noticed immediately and everyone would find out nearly as quickly, but I see no evidence this is true. There is no organized effort to do this type of auditing, let alone make it sustainable for the long run. Even if there were, so what? By the time I somehow find out (how? reading HN?), my message history is already gone.
"What I don’t follow is why you think it requires multiple clients."
I don't ...
"TBH, I didn’t read this part of your post that closely"
... which might explain why you keep mis-characterizing my position. Please RTFA and then opine. The section on threshold signed auditable builds isn't even long or complicated, it's a few paragraphs at most.
With the proposed software update scheme you can at least get closer to the goal of a single-client, single-server but genuinely encrypted system that meets the social goals people actually have for encryption. It's a form of decentralization that doesn't require everyone to re-write the same code several times, so it should mostly dodge the product management and iteration speed issues that cause Signal to reject federation.
It isn't as good as having multiple clients, because the auditors can only reject new changes, not actually improve the privacy of the product in the way that a competitive market of clients might allow. The tradeoff is that the central authority can add features more consistently, since there's no client capability skew.
"Well, no, I disagree. As a prominent example, Certificate Transparency “Noticing”—aka accountability—is extremely useful."
CT is hardly used. I've never heard of a single incident in which a website operator found a bad cert by monitoring CT logs. If there are any such cases, they very likely involve a big tech firm and not a "normal" SSL user.
Regardless, CT is if anything a better analogy for my own proposal. The actionable outcome of a CA breach detected via CT is that browser makers revoke the CA and it stops being able to issue certificates. It happens behind the scenes and users don't have to think about it. Imagine if browser makers never revoked any CA and people just had to tell their friends that some padlock icons were trustworthy, others weren't, and they shouldn't browse to websites that used the leaky CA. Good luck with that.
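For what it's worth, here is roughly what that kind of per-domain monitoring amounts to for the rare operator who bothers, using the crt.sh aggregator; the query URL and the JSON field names are assumptions about its public interface and worth double-checking:

    # Sketch: list certificates that CT logs have recorded for a domain, via
    # the crt.sh aggregator. The URL and field names ("issuer_name",
    # "name_value") are assumptions about crt.sh's public JSON interface.
    import json
    import urllib.request

    def ct_entries(domain: str) -> list[dict]:
        url = f"https://crt.sh/?q={domain}&output=json"
        with urllib.request.urlopen(url, timeout=30) as resp:
            return json.load(resp)

    for cert in ct_entries("example.com"):
        # An operator would alert on issuers or names they don't recognize.
        print(cert.get("issuer_name"), "->", cert.get("name_value"))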
> If Signal-the-app did a WhatsApp and started blocking forwarded messages, or blocking messages that contained a blacklisted domain etc, I think very little would happen and it might take a while for people to even notice.
Why would it be different for, say, Sendmail? I mean, I agree, but isn’t this the heart of why “open source” is not a meaningful differentiator? (Taviso wrote something similar about reproducible builds and “bugdoors” here a while ago: https://blog.cmpxchg8b.com/2020/07/you-dont-need-reproducibl....)
My claim is not at all that it would be noticed immediately. My claim is that distributing source code doesn’t help for exactly the same reason.
> which might explain why you keep mis-characterizing my position. Please RTFA and then opine. The section on threshold signed auditable builds isn't even long or complicated, it's a few paragraphs at most.
It’s never stopped me before. ;)
More seriously: I don’t disagree with the proposal, but it strikes me as less radical than you suggest, for a few reasons:
- It seems to me you can already do this with signed git commits plus—what we talked about before—reproducible builds (a rough sketch of the commit-signature half is after this list).
- You still have to incentivize review. For a significant project, yeah, you might get it (and it’s cheaper than the dev time to maintain multiple overlapping projects, I guess). But are you going to get it for every, I dunno, NPM package to make text a pretty color which turns out to be critical infrastructure? Nah.
- As Tavis points out in the above link, source code is not at all a panacea.
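On the first bullet, the "signed git commits" half is already mechanical to check. A sketch, where the repository path and tag names are hypothetical and trust in the signing keys is delegated to git/gpg's own keyring:

    # Sketch: confirm every commit between two release tags carries a valid
    # signature before feeding the tree into a (reproducible) build.
    import subprocess

    def commits_between(repo: str, base: str, head: str) -> list[str]:
        out = subprocess.run(["git", "-C", repo, "rev-list", f"{base}..{head}"],
                             capture_output=True, text=True, check=True)
        return out.stdout.split()

    def all_commits_signed(repo: str, base: str, head: str) -> bool:
        for sha in commits_between(repo, base, head):
            # Non-zero exit means the signature is missing, bad, or untrusted.
            if subprocess.run(["git", "-C", repo, "verify-commit", sha],
                              capture_output=True).returncode != 0:
                return False
        return True

    print(all_commits_signed("Signal-Android", "v6.0.0", "v6.1.0"))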
So, yes, if people want to do multiple-auditor-signed updates, I’m all for it. App stores should make it easier than they do today. Agreed?
> CT…
I guess I no longer understand what we’re disagreeing about. To be clear, I’m all in favor of enabling multiple auditors to sign binaries, I guess. I don’t think it solves major problems in most cases, nor do I think source code access is a panacea for supply chain attacks. But if people want to do it, sure, I guess?