Hacker News

Because technically a "flaw" in the system, however impractical to exploit, did exist, and it would be correct to point to that fact. Discussion of whether this reaction is overblown and whether it would drive users toward more dangerous methods of communication is where it stops being technical.

It could be that I am the one who is missing something. That's why we are discussing here, right? :)




Sure, but I think you might be missing an important aspect. The system is designed not to block communications upon key changes, just to display an alert so the other user knows (and can verify out of band if they are about to discuss a sensitive topic). The initial "vulnerability" seemed to be based on an opinion that it would be more secure if it did indeed block communications upon key changes, until manual verification was performed.

You are correct that it is impractical to "exploit" the chosen design; I think the bigger issue is their portrayal of this as a vulnerability rather than a security suggestion.


This is not the biggest flaw! The biggest flaw was that messages in flight (sent, but not yet delivered) still get delivered if the key changes. Only after delivery does the sender get notified of the key change. Last time, I gave an example in which a protest organizer (Bob) is detained by the (secret) police in the period leading up to a protest, and the police cannot get into his password-protected phone, which is switched off. All the police have to do is wait a while, then pop the SIM into a new phone and presto, they get all the incriminating messages that co-conspirator Alice sent while Bob was detained. Only after delivery onto the police phone does Alice get notified that Bob's key has changed.
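To make the ordering concrete, here's a hypothetical sketch (not WhatsApp's actual code; all names are invented) of the two possible behaviors for queued messages when the recipient's key changes: deliver-then-notify, versus block-until-verified.

```python
def deliver_pending(pending, advertised_key, trusted_key, block_on_change=False):
    """Return (delivered, notices) for one sender's queued messages.

    advertised_key: the key the server currently advertises for the recipient.
    trusted_key:    the key the sender last verified for the recipient.
    """
    delivered, notices = [], []
    if advertised_key != trusted_key:
        if block_on_change:
            # Safer alternative: hold messages until the sender re-verifies.
            notices.append("key changed: delivery blocked pending verification")
            return delivered, notices
        # Deliver-then-notify: queued messages go out to whoever now
        # holds the advertised key, and only afterwards is the sender told.
        delivered = list(pending)
        notices.append("key changed: messages already delivered")
    else:
        delivered = list(pending)
    return delivered, notices

# Deliver-then-notify: in-flight messages reach the *new* key holder.
msgs, alerts = deliver_pending(["meet at dawn"], "police_key", "bobs_key")

# Block-until-verified: nothing is delivered until Alice confirms the new key.
held, alerts2 = deliver_pending(["meet at dawn"], "police_key", "bobs_key",
                                block_on_change=True)
```

In the first call the message is delivered despite the key change and the alert arrives only afterwards, which is exactly the window described above.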


If I'm not mistaken, they called this a "flaw", not a "vulnerability", which definitely wouldn't have been correct. I personally would have preferred "trade-off" but I'm honestly not sure about the word "flaw".

My general point is that even if they also made some technical errors, their concern, I believe, was more about the consequences of their interpretation.


They called it a "backdoor", and reported it as a game-over vulnerability. Everything since then has been trying to walk the claim back.


> They called it a "backdoor", and reported it as a game-over vulnerability.

I didn't remember that, and it seems to have been removed from the article (with a note at the end, which I had missed). Well, that's just sensationalism and should have been caught much earlier in a sane review process. I stand corrected.


"The original article – now amended and associated with the conclusions of this review – led to follow-up coverage, some of which sustained the wrong impression given at the outset. The most serious inaccuracy was a claim that WhatsApp had a “backdoor”, an intentional, secret way for third parties to read supposedly private messages. This claim was withdrawn within eight hours of initial publication online, but withdrawn incompletely. The story retained material predicated on the existence of a backdoor, including strongly expressed concerns about threats to freedom, betrayal of trust and benefits for governments which surveil. In effect, having dialled back the cause for alarm, the Guardian failed to dial back expressions of alarm."

From this linked article.


You can't separate the technical facts from the interpretation though, because they only brought it up to interpret it incorrectly.


The "flaw" was a choice of default preferences which are fundamentally a UX/security tradeoff: WhatsApp did not notify users about key change events by default.

Those who value maximizing security at all costs, even to UX, disliked this default.

WhatsApp was trying to switch end-to-end encryption to on-by-default for its more than a billion users. An understandable requirement of doing so was to ship E2E encryption in a way that did not involve changes to the UX.

Whether or not this tradeoff is the best compromise is certainly debatable, but it is just that: a tradeoff, not a "backdoor", not a "vulnerability", and not a "flaw". It's a deliberate design decision.

Why is it not a "backdoor", a "vulnerability", or a "flaw"? Well, let's examine what would happen if the default were reversed: say users were notified by default. Would this keep them more secure?

I have doubts. These notifications are not high signal or immediately actionable. They do not indicate an attack. They indicate "a key changed" and whether or not that's abnormal is up to users to determine. The overwhelming majority of these events will be innocuous, leading to alert fatigue. It's also unclear how well an average end user would even understand or be able to react to these alerts.

I am certainly willing to give WhatsApp's UX designers the benefit of the doubt here. UX design is hard, and the tinfoil hat crowd saying things to the contrary has a history of producing unusable software by demanding security misfeatures which tick a box on a threat model without actually improving user outcomes.

While some people in the tinfoil hat crowd seem to think bombarding users with a bunch of low-signal security alerts is a good idea, practitioners working with IDS/SIEM systems probably have a different opinion: that low quality / low signal alerts are worthless.

There are solutions for providing high-quality signals about key change events to users without asking them to manually confirm key fingerprints in person, but they are complicated, haven't been widely deployed, and it's still unclear how they'll work...

I'm talking about CONIKS and Google Key Transparency, which implement logs that users' own devices can monitor to discover changes to their keys as advertised through a key server. These systems can ask the user a very simple question when this happens: "Did you just log in on another device?" If they didn't, they can select "no" and publish an alert indicating they were compromised.
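The monitoring idea can be sketched roughly like this (an illustrative toy, not the real CONIKS protocol; the function and the simple set comparison are invented for illustration, and the real systems use signed Merkle-tree proofs rather than raw key sets): each device periodically looks up its own entry in the public log and compares what the server advertises against the keys the device knows were legitimately registered.

```python
PROMPT = "Did you just log in on another device?"

def monitor_own_keys(advertised_keys, registered_keys):
    """Return a user-facing prompt if the log advertises an unknown key.

    advertised_keys: keys the public log currently lists for this account.
    registered_keys: keys this user's devices know they registered.
    """
    unknown = set(advertised_keys) - set(registered_keys)
    if unknown:
        # The device can't distinguish a legitimate new login from a
        # misbehaving key server, so it asks the user directly.
        return PROMPT
    return None

# Everything the log advertises is known: no prompt.
quiet = monitor_own_keys({"k1", "k2"}, {"k1", "k2"})

# The log advertises a key no device of ours registered: prompt the user.
prompt = monitor_own_keys({"k1", "intruder_key"}, {"k1"})
```

The point of routing the check through a public, append-only log is that the key server must show everyone the same answer, which is what makes lying detectable.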


The clever part is that the server doesn't know if the user has the setting on or off, so it can't cheat the security without risking detection.



