On Ghost Users and Messaging Backdoors (cryptographyengineering.com)
97 points by kandarpck on Dec 18, 2018 | 55 comments



Apparently some researchers from GCHQ in the UK are proposing that "secure" messaging systems like iMessage and WhatsApp, which manage group chats centrally in a manner that bypasses the end-to-end encryption, should:

- Add "ghost" users/devices to existing chats

- Suppress notifications of these additions to users

This would perpetuate a currently known bug in secure communication protocols, effectively turning it into a "feature" for law enforcement.
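
Roughly, the proposal reduces to something like this. A toy sketch in Python (all names made up, no real protocol implied): the server owns the member list, clients encrypt one copy per listed member, and a "ghost" can be added with the join notification suppressed.

    class Server:
        def __init__(self):
            self.members = set()
            self.notifications = []

        def add_member(self, user, notify=True):
            self.members.add(user)
            if notify:                                # the proposal: pass notify=False
                self.notifications.append(user + " joined the group")

    def send(server, plaintext):
        # Client-side fan-out: encrypt to whatever list the server hands back.
        return {m: "enc(%s, %s)" % (m, plaintext) for m in server.members}

    server = Server()
    server.add_member("alice")
    server.add_member("bob")
    server.add_member("ghost", notify=False)          # silently added interceptor
    print(send(server, "hello"))                      # "ghost" gets a readable copy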

Fortunately, systems like Signal and Briar have already moved past this flaw.


Keybase Teams—also featuring e2e-encrypted group chat—appears to be proof against the ghost-user-based attack.

It would be interesting to compare its implementation to how Signal does group messaging in TextSecure v2.

[0] https://keybase.io/blog/introducing-keybase-teams#anyway-tea...


Thanks for the mention! We designed Keybase with these exact attacks in mind.


What is needed is open protocols. Users can write their own implementation if they do not want to use the existing ones. WhatsApp uses XMPP according to Wikipedia, so you could implement your own client according to XMPP, I suppose. You might also just use your own programs and protocols. If the messages are encrypted with the recipient's key, then nobody else can know what the message is, only that there is a message. So you can implement encryption on the client; the server does not need to know about it.
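
A minimal sketch of that last point (using PyNaCl's SealedBox as an example primitive; the transport could be anything, e.g. an XMPP message body): the relaying server only ever sees who the ciphertext is addressed to, never the plaintext.

    from nacl.public import PrivateKey, SealedBox

    recipient_key = PrivateKey.generate()    # generated on the recipient's device

    # Sender side: encrypt to the recipient's *public* key.
    ciphertext = SealedBox(recipient_key.public_key).encrypt(b"meet at noon")

    # What a relay server sees: an opaque blob plus routing metadata.
    print("server relays", len(ciphertext), "bytes of ciphertext")

    # Recipient side: only the holder of the private key can read it.
    print(SealedBox(recipient_key).decrypt(ciphertext))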


What does that mean, “manage centrally in a manner”... and how does Signal not manage it centrally?


Signal groups are managed by the client devices. The details are quite complicated, but some are documented here: https://signal.org/blog/private-groups/

Perhaps another user with stronger familiarity with the subject can expand on this (ELI5 would be great!).


The Signal client is centrally managed and can be updated for every user. And it's quite a ridiculous claim that Signal can't implement a backdoor in the client because of some arbitrary design choice.


Yeah, security is not really "provable" in any software system.

However, a few points to consider:

1. The Signal server can't "see" the group. Clients are just sending N messages to everyone in the group with some encrypted metadata that says it's a group message (see the sketch at the end of this comment).

2. The client app is open source. You can go look for a ghost user or backdoor mechanism yourself.

3. The build is reproducible. You can build it yourself and sideload your own APK, or compare it to the APK coming from the play store.

I don't think it's impossible to put a backdoor in, but I think it at least makes vigilance a good defense. Smart serious people are paying attention.
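
For point 1, here's a rough sketch of the idea (simplified, not Signal's actual wire format): the sender makes one pairwise-encrypted copy per member, and the group id travels only inside the encrypted payload, so the server just sees N unrelated 1:1 messages.

    import json

    def encrypt_for(recipient, payload):
        # Stand-in for a real pairwise session (e.g. a Double Ratchet session).
        return "ciphertext_for_" + recipient + ":" + payload

    def send_group_message(my_group_members, group_id, text):
        envelopes = []
        for member in my_group_members:        # membership known only to clients
            payload = json.dumps({"group": group_id, "body": text})
            envelopes.append({"to": member, "blob": encrypt_for(member, payload)})
        return envelopes                       # this is all the server sees

    print(send_group_message(["alice", "bob", "carol"], "g123", "hi all"))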


Moxie and Whispersystems have got some interesting thinking around using Intel's SGX:

"Modern Intel chips support a feature called Software Guard Extensions (SGX). SGX allows applications to provision a “secure enclave” that is isolated from the host operating system and kernel, similar to technologies like ARM’s TrustZone. SGX enclaves also support a feature called remote attestation. Remote attestation provides a cryptographic guarantee of the code that is running in a remote enclave over a network.

Originally designed for DRM applications, most SGX examples imagine an SGX enclave running on a client. This would allow a server to stream media content to a client enclave with the assurance that the client software requesting the media is the “authentic” software that will play the media only once, instead of custom software that reverse engineered the network API call and will publish the media as a torrent instead.

However, we can invert the traditional SGX relationship to run a secure enclave on the server. An SGX enclave on the server-side would enable a service to perform computations on encrypted client data without learning the content of the data or the result of the computation."

https://signal.org/blog/private-contact-discovery/#trust-but...

I don't know if they've got that in production yet - and I don't know just how strong the "cryptographic guarantee" of the secure enclave code is, but the fact that they're trying it fills me with joy...
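
The client-side check amounts to something like this (a very rough sketch with hypothetical names; real SGX attestation involves signed quotes verified against Intel's attestation service): the client only releases its data if the enclave's reported code measurement matches a value pinned in advance.

    EXPECTED_MRENCLAVE = "ab12..."   # hash of the enclave code that was audited

    def verify_quote(quote):
        # A real verifier also checks the quote's signature chain back to Intel.
        return quote.get("mrenclave") == EXPECTED_MRENCLAVE

    def maybe_send_contacts(quote, encrypted_contacts, send):
        if not verify_quote(quote):
            raise ValueError("enclave is not running the code we expect")
        send(encrypted_contacts)

    # Example: this would raise for a quote reporting any other measurement.
    maybe_send_contacts({"mrenclave": EXPECTED_MRENCLAVE}, b"encrypted blob", send=print)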



Binaries are not opaque gibberish; it is possible to analyze them.

And of course for major apps, there are people doing so.


Are these analysis efforts publicly viewable?


> The build is reproducible. You can build it yourself and sideload your own APK, or compare it to the APK

Have you tried this? Most people seem content that there is some source available and trust the binary. That may not be an option for everyone.


I have tried it.

There's a non-zero number of people around the world who check the builds. Your security rests on the difficulty of feeding you a subverted APK without feeding any of them the same APK.


Would you mind sharing how?


In order to get a subverted APK onto your phone, that APK has to be created, a set of devices that includes you must be defined, the APK must be delivered to them, and those phones must accept the APK as genuine. Right? If the software that now runs on those phones rejects the APK as being signed by the wrong developer, or something else, then the game is up. But let's assume that the developer has some way to install software despite the signature-checking that your device runs.

If the attacker can identify your devices 100% precisely, just one device, then the rest doesn't really matter. But if the attacker has incomplete information or a coarse attack vector, then others must be attacked along with you. For example, if the attack works by putting a subverted APK on one or more CDN nodes, then everyone else in your geographic area gets the APK along with you.

If there's one person who gets the subverted APK and checks it against the original, the attacker's attack is public. If there's one person who automatically uploads all new installed APKs to apkmirror.com, then the attacker's attack is public. See?

There is (AFAICT) no single list of people who would discover the attack, and who therefore must be avoided by the attacker.

Now, if the attacker is willing to have the attack revealed a day after it happens, this may be acceptable. But otherwise, the attacker has to find a way to target you and avoid any false positives who might do that checking.
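
As a back-of-the-envelope illustration (numbers invented): if a coarse attack vector delivers the subverted APK to n random users out of N, and k of them independently verify builds, the chance the attacker avoids every verifier shrinks quickly as k grows.

    def p_undetected(N, n, k):
        # Probability that none of the k verifiers fall in the delivery set.
        p = 1.0
        for i in range(k):
            p *= (N - n - i) / (N - i)
        return p

    print(p_undetected(N=1_000_000, n=10_000, k=50))    # roughly 0.6
    print(p_undetected(N=1_000_000, n=10_000, k=500))   # under 0.01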


Right, this may or may not be relevant to your threat model, but isn't really helpful information for someone looking to build the software reproducibly. Would you mind sharing how you did it?


Oh, building it reproducibly? That's the default. You just run a new-enough version of gradle; build.gradle is set up already. There's a tool called apkdiff to compare everything except the signatures.

https://github.com/signalapp/Signal-Android/wiki/Reproducibl... is a thorough recipe, but I didn't actually do all of that. I had the right build environment anyway.
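
If it helps, the comparison step is conceptually simple. Here is a simplified stand-in for what a tool like apkdiff does (a sketch, not the actual tool): compare the two APKs entry by entry, skipping the META-INF/ signature files that legitimately differ.

    import sys
    import zipfile
    import hashlib

    def entry_hashes(apk_path):
        hashes = {}
        with zipfile.ZipFile(apk_path) as z:
            for name in z.namelist():
                if name.startswith("META-INF/"):      # signatures differ by design
                    continue
                hashes[name] = hashlib.sha256(z.read(name)).hexdigest()
        return hashes

    if __name__ == "__main__":
        mine, theirs = entry_hashes(sys.argv[1]), entry_hashes(sys.argv[2])
        print("APKs match (ignoring signatures)" if mine == theirs else "APKs differ")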


Yes, but they would have to alter the client's code - this would show up in the source, or in hash comparisons of the binary if Signal kept that code change out of the public repo. So yes, the design of the system quite effectively prevents a client-side backdoor from going unnoticed.

Edit: also, this is a deliberate design choice not an arbitrary one.


Interesting. Compare https://safenetworkprimer.com/ by the way


Signal is great and all but does it scale? If half a billion people sign up tomorrow can they cope? How deep are their coffers?


> Signal is great and all but does it scale? If half a billion people sign up tomorrow can they cope? How deep are their coffers?

At least $50 million, courtesy of a WhatsApp founder: https://www.wired.com/story/signal-foundation-whatsapp-brian...


"Ghost users" are also a class of protocol and UX bug in secure messengers that are worth looking for; you will find secure chat programs where it's possible to add a member to a group with a very strong chance of not alerting other members of the group, whereupon the E2E encryption scheme of the system does all the work of decrypting the messages for you.


Instead of creating a ghost user account and attempting to join a chat, why not just copy the key of one of the participants?


In a "properly designed system", the service only ever sees public keys not private keys.


If the point of a law is to circumvent encryption, you shouldn't be surprised that it doesn't satisfy anyone who wants the encryption to be safe.

Either the backdoor works and the system is bad, or the backdoor doesn't work and the system is illegal. At least if the law doesn't have a loophole.

So not sure why the article complains about the design whereas the intent and goal are the real issue. Seems like the design works as intended.


Consider that the article may be written for people on the law enforcement/policy side of the debate.


Not sure what that changes. If I were law enforcement / pro-backdoor, I'd say that this article supports my view.

The main complaint seems to be that, over time, what seems like a "modest proposal" could lead us to a world where GCHQ becomes the ultimate architect of Apple and Facebook's communication systems.

But that is of course inevitable if the proposal is to be successful. What other solution would the author propose?


With the spread of misinformation and rage through messaging apps having literally resulted in people getting killed by mobs (see for example https://www.nytimes.com/interactive/2018/07/18/technology/wh...), maybe we should re-evaluate our belief that making it impossible for governments to see what is spreading through messaging apps is an unmitigated good?


The problem is that this situation is all or nothing. You can't break the encryption only for criminals: it's either secure for everyone or it's broken for everyone, and not just in a way the government can exploit. Hackers will find a way to exploit it as well. Can you imagine the damage if the world's IM history were leaked? This is probably the most sensitive data in the world.

Allowing governments access to IM history gives them far more power than they ever had; this is not just restoring lost powers. Never has a government been able to see your entire history of every conversation going back for years. I think most of us would be OK if it were actually possible to create a system where only the government, after going through proper court process, was able to intercept messages from that time forward, but currently there is no good proposed solution, and very dangerous laws like those in Australia are being approved.


How people use a tool is not the fault of the tool - there is an underlying issue that drives that behavior. It would be like mandating that hammers have to be soft enough that they can't damage a skull because people use them to bash in people's heads, which, yes, would prevent hammers from being used as weapons but would render them ineffective at their original purpose.


I don't think that's entirely true, sometimes tools have only one purpose. It would not be ethical to manufacture nukes and sell them to people, for example.

Even with a messaging app, imagine that you created a new one, and then found that for some reason 90% of your user base is hitmen communicating with their clients. Maybe that's not your fault, but I think you would be ethically obligated to shut it down, or significantly modify it to stop enabling hitmen.

Obviously these are contrived examples, and often in real life it's impossible to make a tool that can't be used for evil. But I don't think you're devoid of responsibility just because you didn't intend for your creation to be abused. If you accidentally created something dangerous, you have an obligation to take reasonable measures to mitigate the danger.


I think a large factor in this is the range of intended uses - in the example of a nuke, it can only be used for one thing which is evil, and so there is no downside to banning it or mandating changes to the properties inherent to its existence. But tools like private messaging and hammers have a huge potential for being used for good (due to the same properties that make them useful for evil) and targeting their properties to reduce viability for evil also reduces the amount of good they can do.

All that being said, I do agree that in some cases there is a definite ethical burden on a creator to consider the impact of his creation - I just think that in many cases the best solution is not to change the tool to avoid misuse but to figure out why the misuse occurs/would occur in the first place and try to solve that. I would conjecture that the misuse more often than not points to a deeper social issue that is for some reason not being properly dealt with but which is actually a really big deal that no one wants to confront. I can think of a few examples but I think that level of exploration may be better suited to a blog post than a comment.


> With the spread of misinformation and rage through messaging apps having literally resulted in people getting killed by mobs (see for example https://www.nytimes.com/interactive/2018/07/18/technology/wh...), maybe we should re-evaluate our belief that making it impossible for governments to see what is spreading through messaging apps is an unmitigated good?

People get killed by mobs in China[1] as well, a country which backdoors all major social networks (Weibo etc.) as well as the network edge (the Great Firewall).

[1] https://en.wikipedia.org/wiki/Human_flesh_search_engine


What makes you think these "mobs" used WhatsApp for its security? I'd expect it's far more likely they use WhatsApp because it's the most popular messenger in that part of the world.


People aren’t dumb.

Remember Nextel direct connect? It was known that those communications weren’t tappable initially, and for a time every street level drug salesman had them.


The way WhatsApp was used in e.g. Myanmar included group chats involving dozens of random acquaintances - basically the equivalent of gathering in a town square to gossip. Needless to say, there isn't any security to speak of in such an arrangement. Nor did anyone ever complain that WhatsApp encryption is the obstacle. It's the part where everybody can gossip to everybody, with transparent scaling, that enables flash mobs - and it could just as well be a plain text SMS otherwise.


How would the ability to intercept everyone’s communications have mitigated that incident?


And how many people need to be killed by authoritarian states or rogue elements in the government of states with totalitarian powers before we consider a possible occasional death due to lack of surveillance acceptable?

I mean, yeah, sure it's something to consider. But it's not exactly like too much surveillance hasn't ever killed anyone.


So we should just go ahead and put Orwell's Telescreens in everyone's house so the governments can see what we're plot^h^hanning all the time?

(Looks around the office and sees the Echo and Google Home, remembers how many friends have those and/or Samsung "Smart TVs" in their home, or who have their phones constantly listening for "OK Google" or "Hey Siri". Right. As you were...)


No.

The state should be able to get a warrant to intercept communications for reasonable cause, and the accused should be able to litigate the validity of the search.


Unfortunately, in the world of crypto, if the state can intercept communications, then it's equally possible that a determined attacker can. Encryption either works for everyone, or it doesn't.

Governments are more powerful actors on this playing field, but they are far from the only ones on the field.

A warrant doesn't expose something you're saying to the government; it exposes what you're saying to anyone who might be listening.

We're not talking about the equivalent of handing some documents over to the police; we're saying the police are ordering us to stick the documents to the window of our house so that they can see them.


That's a little hard to do in a world where this kind of thing is done by National Security Letter.


You may notice that I didn’t include any advocacy for NSL in my comment.

Pretending that technology is infallible is a defect just as bad as NSLs with respect to the rule of law.

Obviously no complex IT system is infallible, and now we have a situation where, with no legal remedy, the police and intelligence services are compromising systems or allowing latent defects to remain in order to fulfill their missions.

You never hear about law enforcement concerns with respect to iMessage or Signal, so it is likely that whatever security you think you have from the state is not meaningfully there.


I'm pretty sure NSLs are only enforceable in the US.


Plenty of other countries have similar mechanisms, where the user is also not informed (because of a court order or similar). The US is certainly not the only culprit here, although they may or may not be the worst.


Australian resident with a UK passport checking in here. Australians are fucked too. And we pretty much copy/pasted our new laws from the UK ones, so UK residents are as well. If Canada/New Zealand have not already passed equivalent laws or are not in the process of doing so, my paranoia about Five Eyes might be a little miscalibrated. But realistically, I suspect it's more likely that I'm not paranoid enough, rather than too paranoid...


How will the accused know about the search when they aren’t told about it?


I agree. There’s often an extreme point of view here with respect to this.


Meta, but this parent comment is a good example of the misuse of downvoting on HN.

If you don't agree with the poster then engage in debate, don't just click the little arrow to try to grey it into oblivion.

We're all here to learn. Why not learn how to challenge this fairly widely-held view. Imagine you're talking with a 'normal' at a Christmas party and they say that; do you just stomp away singing LALALA?


It's possible this got downvoted because it is seen not as a misunderstanding of reality to be corrected, but in fact a harmful meme that the original poster doesn't actually believe themselves. Viewed through that lens, it certainly does deserve to be downvoted and it does not deserve to be interacted with. Not that I advocate either action.


Maybe we should consider that goods can still be worthwhile despite their mitigations.


I agree, but when you do that, you need to actually make an accounting of the costs/benefits. If you look among programmers and security specialists on, say, HN, there is not even a debate or discussion about this, but rather an absolutist position that this is good, and that the only reason to think this is bad is if you are a totalitarian government wanting to oppress your people.


I think you're conflating two different positions, which do admittedly co-occur in many people: 1. The technical, that any such "backdoor" is necessarily a backdoor, with all that implies, and thus to be eschewed on a "fundamental principles of good security" basis, and 2. The moral, that any such backdoor is a crime against humanity, or whatever, because some of the people who have the technical capability will be leveraging it in order to oppress, and all of them will be doing so in order to act in a manner contrary to the user's interests.

Who do our tools serve? Is it just that they should be made to serve someone else, against us? Where, exactly, is the line on one side of which it's justified, but on the other it's abuse? How do you build a system that prevents abusive uses, but allows appropriate ones?

Decrying absolutist positions is all well and good, but it is a nigh-on tautology-level truth that a system with a flaw or backdoor will be exploited — usually in multiple ways, and well beyond anything that was intended.


Yeah yeah, but ultimately this all comes down to whether you think government interception of private messages is ultimately better than the messages staying private. People here obviously don't feel the same way you do about that.

Myself, I'm not quite so convinced as you seem to be of the harm of "absolutist" positions. Maybe the truth isn't always somewhere in the middle.



