Hacker News

Signal groups are managed by the client devices. The details are quite complicated, but some are documented here: https://signal.org/blog/private-groups/

Perhaps another user with stronger familiarity with the subject can expand on this (an ELI5 would be great!).




The Signal client is centrally managed and can be updated for every user. And it's quite a ridiculous claim that Signal can't implement a backdoor in the client because of some arbitrary design choice.


Yeah, security is not really "provable" in any software system.

However, a few points to consider:

1. The Signal server can't "see" the group. Clients just send N individual messages, one to each group member, with some encrypted metadata that says it's a group message.

2. The client app is open source. You can go look for a ghost user or backdoor mechanism yourself.

3. The build is reproducible. You can build it yourself and sideload your own APK, or compare it to the APK coming from the Play Store.

I don't think it's impossible to put a backdoor in, but I think it at least makes vigilance a good defense. Smart serious people are paying attention.
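Point 1 can be sketched roughly as follows. This is a toy illustration with invented names and a stand-in cipher (not Signal's actual Double Ratchet or Sender Keys machinery); the point is only that the server sees N per-recipient ciphertexts and nothing about the group itself:

```python
import os
import hashlib
import json

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy XOR stream cipher standing in for a real pairwise Signal
    # session. Do NOT use this for actual cryptography.
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

def send_group_message(sender, group_id, members, text):
    # The group structure lives only inside the encrypted payload.
    # The server handles one (recipient, ciphertext) pair per member
    # and never learns group_id or the membership list.
    payload = json.dumps({"group": group_id, "from": sender, "text": text}).encode()
    return [(member, toy_encrypt(session_key, payload))
            for member, session_key in members.items()
            if member != sender]

# Pairwise session keys the sender shares with each member (invented data).
members = {"alice": os.urandom(32), "bob": os.urandom(32), "carol": os.urandom(32)}
envelopes = send_group_message("alice", "book-club", members, "hi all")
print([recipient for recipient, _ in envelopes])
```

Since `toy_encrypt` is a symmetric XOR stream, applying it twice with the same key recovers the plaintext, which is all a recipient needs to do in this toy model.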


Moxie and Open Whisper Systems have done some interesting thinking around using Intel's SGX:

"Modern Intel chips support a feature called Software Guard Extensions (SGX). SGX allows applications to provision a “secure enclave” that is isolated from the host operating system and kernel, similar to technologies like ARM’s TrustZone. SGX enclaves also support a feature called remote attestation. Remote attestation provides a cryptographic guarantee of the code that is running in a remote enclave over a network.

Originally designed for DRM applications, most SGX examples imagine an SGX enclave running on a client. This would allow a server to stream media content to a client enclave with the assurance that the client software requesting the media is the “authentic” software that will play the media only once, instead of custom software that reverse engineered the network API call and will publish the media as a torrent instead.

However, we can invert the traditional SGX relationship to run a secure enclave on the server. An SGX enclave on the server-side would enable a service to perform computations on encrypted client data without learning the content of the data or the result of the computation."

https://signal.org/blog/private-contact-discovery/#trust-but...

I don't know if they've got that in production yet - and I don't know just how strong the "cryptographic guarantee" of the secure enclave code is, but the fact that they're trying it fills me with joy...
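The attestation idea in the quote can be sketched as below. This is a heavily simplified, hypothetical illustration: real SGX attestation involves Intel-signed quotes and an asymmetric signature scheme, whereas here an HMAC with an invented key stands in for the signing step, and all names are made up:

```python
import hashlib
import hmac

# Stand-in for the hardware's attestation signing key (invented).
ATTESTATION_KEY = b"stand-in for the attestation signing key"
# The client computes this from the open-source enclave build it audited.
EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave code v1.0").hexdigest()

def enclave_quote(enclave_code: bytes):
    # The hardware "measures" (hashes) the code actually loaded into the
    # enclave and signs that measurement.
    measurement = hashlib.sha256(enclave_code).hexdigest()
    signature = hmac.new(ATTESTATION_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return measurement, signature

def client_trusts(measurement: str, signature: str) -> bool:
    # Client checks that the signature is valid AND that the measured
    # code matches the build it expects, before sending any data.
    expected_sig = hmac.new(ATTESTATION_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected_sig) and measurement == EXPECTED_MEASUREMENT

m, s = enclave_quote(b"enclave code v1.0")
print(client_trusts(m, s))    # genuine enclave passes
m2, s2 = enclave_quote(b"backdoored enclave")
print(client_trusts(m2, s2))  # modified code fails the measurement check
```

The key property: even a validly signed quote from modified enclave code fails, because the measurement no longer matches what the client expects.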



Binaries are not opaque gibberish; it is possible to analyze them.

And of course for major apps, there are people doing so.


Are these analysis efforts publicly viewable?


> The build is reproducible. You can build it yourself and sideload your own APK, or compare it to the APK

Have you tried this? Most people seem content that some source is available and simply trust the binary. That may not be an option for everyone.


I have tried it.

There's a non-zero number of people around the world who check the builds. Your security rests on the difficulty of feeding you a subverted APK without feeding any of them the same APK.


Would you mind sharing how?


In order to get a subverted APK onto your phone, that APK has to be created, a set of devices that includes yours must be defined, the APK must be delivered to them, and those phones must accept the APK as genuine. Right? If the software that now runs on those phones rejects the APK as being signed by the wrong developer, or for some other reason, then the game is up. But let's assume that the developer has some way to install software despite the signature checking that your device runs.

If the attacker can identify your device 100% precisely, so that just that one device gets the subverted APK, then the rest doesn't really matter. But if the attacker has incomplete information or a coarse attack vector, then others must be attacked along with you. For example, if the attack works by putting a subverted APK on one or more CDN nodes, then everyone else in your geographic area gets the APK along with you.

If there's one person who gets the subverted APK and checks it against the original, the attacker's attack is public. If there's one person who automatically uploads all new installed APKs to apkmirror.com, then the attacker's attack is public. See?

There is (AFAICT) no single list of people who would discover the attack, and who therefore must be avoided by the attacker.

Now, if the attacker is willing to have the attack revealed a day after it happens, this may be acceptable. But otherwise, the attacker has to find a way to target you and avoid any false positives who might do that checking.
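A back-of-envelope version of this argument, with invented numbers: if the attack vector is coarse enough that N users receive the subverted APK, and some small fraction of users independently verify their builds, the chance the attack goes unnoticed drops quickly as N grows.

```python
def p_undetected(n_recipients: int, p_checker: float) -> float:
    # Probability that none of the n recipients is someone who checks,
    # assuming each recipient independently verifies with probability
    # p_checker (both numbers are assumptions, not measured data).
    return (1 - p_checker) ** n_recipients

# Suppose 1 in 1000 users verifies their APK against the public build.
for n in (10, 100, 1000, 10000):
    print(n, p_undetected(n, 0.001))
```

With those invented numbers, an attack hitting 10 devices likely goes unnoticed, while one hitting 10,000 is almost certainly caught, which is why the attacker needs precise targeting.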


Right, this may or may not be relevant to your threat model, but it isn't really helpful information for someone looking to build the software reproducibly. Would you mind sharing how you did it?


Oh, building it reproducibly? That's the default. You just run a new enough version of Gradle; build.gradle is already set up. There's a tool called apkdiff that compares everything except the signatures.

https://github.com/signalapp/Signal-Android/wiki/Reproducibl... is a thorough recipe, but I didn't actually do all of that. I had the right build environment anyway.
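The comparison step can be sketched like this. APKs are just zip archives, so a simplified apkdiff-style check unpacks both and compares every entry except the signing metadata under META-INF/, which legitimately differs between your build and the Play Store build. This is an illustration of the idea, not the actual apkdiff script:

```python
import zipfile

def apk_entries(path: str) -> dict:
    # Collect a CRC checksum for every entry in the archive, skipping
    # the META-INF/ signature block, which is expected to differ.
    with zipfile.ZipFile(path) as z:
        return {info.filename: info.CRC
                for info in z.infolist()
                if not info.filename.startswith("META-INF/")}

def same_build(apk_a: str, apk_b: str) -> bool:
    # Two APKs count as the same build if every non-signature entry
    # matches by name and content checksum.
    return apk_entries(apk_a) == apk_entries(apk_b)
```

If the build is reproducible, `same_build("my-build.apk", "play-store.apk")` should come out True even though the two files' signatures (and therefore their byte-for-byte hashes) differ.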


Yes, but they would have to alter the client's code, and that would show up either in the public source or in a comparison of binary hashes (since Signal would have to keep that code change out of the public repo). So yes, the design of the system quite effectively prevents a client-side backdoor from going unnoticed.

Edit: also, this is a deliberate design choice, not an arbitrary one.


Interesting. Compare https://safenetworkprimer.com/ by the way




