We were surprised to read this story and are not aware of discussions that would force us to change our product.
We believe people have a fundamental right to have private conversations. End-to-end encryption protects that right for over a billion people every day.
We will always oppose government attempts to build backdoors because they would weaken the security of everyone who uses WhatsApp, including governments themselves. In times like these we must stand up both for the security and the privacy of our users everywhere. We will continue to do so.
Warrant canaries haven't been tested in court. (They have been used as notification of an NSL, though.) In particular, judges are human beings, not robots, so the laws are interpreted and applied by humans. Because of that, removing a warrant canary toes the line of communicating with the affected party, in violation of an NSL. Thus, removal of the canary most likely means an NSL was received, but the canary staying up doesn't necessarily mean there wasn't one. Lawyers at every organization have considered the situation and advised their clients, but those lawyers don't work at the FBI.
I don't think you're meant to remove warrant canaries when you get a secret court order; you're just meant to keep renewing your warrant canary at a regular interval for as long as you don't get one.
My understanding is that they can prevent you from removing warrant canaries but they can't force you to continue announcing "I have not received a secret warrant".
But the FBI could advise the canary poster that failing to continue posting the canary notice could lead to legal action (esp. since that person has willfully put him/herself into the situation). Then it would be up to the recipient of the NSL to decide whether it's worth that risk, which, as stated above, is untested. It's a fine line between telling them to lie and telling them the ruse could be in violation of the gag order.
Conceptually you can't be compelled, because as a citizen you can attach to the canary any requirement that the law can't force you to perform. For example, you can pay to publish the canary: the state can't compel you to spend money on it. Or you can commit a small, petty crime with it (say, an IP violation).
In places where there are limits to what the government can do to and with you, it's possible to resist.
Why are you arguing about what is conceptually possible? The reality is that people can absolutely be compelled to lie in public, especially in the name of "national security". It happens all the time. Failing to update could signal something, but continuing to update means nothing. "Resistance" and other such concepts don't hold up to scrutiny against shareholders and 40-year sentences.
The constitutionality of whether the US Government can force someone to update a warrant canary has never been tested. Until it is, it’s foolish to declare with certainty whether it is or is not legal. We can only speculate at best.
We know that the legal bar for forcing someone to speak or not speak is high (compelling state interest), but national security has usually been held to pass such a bar.
If you can be compelled to be silent while breaking constitutional guidelines on the basis of national security, you can be compelled to update a beacon.
Warrant canaries are nice to have, but viewing them as something which provides proof of absence of government meddling is incorrect.
Perhaps, but that legal theory has never been tested in US federal court (as far as we know). It's entirely possible that the judicial branch wouldn't allow the executive branch to force private citizens into actively making false statements.
I've often wondered whether a sufficiently well-worded warrant could require that the warrant canary remain published unchanged, rendering the warrant canary useless.
The idea behind the canaries is that they expire, and that one cannot legally force someone to sign false statements. So if no new canary is published when the old one expires, that's a red flag.
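As a toy illustration of that expiry mechanism (Python, with made-up wording and an assumed 30-day interval):

    from datetime import date, timedelta

    def canary_text(today: date) -> str:
        due = today + timedelta(days=30)
        return (f"As of {today.isoformat()}, we have received no "
                f"national security letters or gag orders. "
                f"Expect the next statement by {due.isoformat()}.")

    print(canary_text(date.today()))
    # The operator signs and republishes this on schedule. Silence after
    # the due date is the signal - a gagged operator never has to make
    # any affirmative (false) statement.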
If so, sounds like someone from FB Legal and the SEC should have Words with Bloomberg. Wouldn't be the first time they've intentionally and maliciously misrepresented/lied about an infosec issue to the detriment of a company in order to move the market (Supermicro "grain of rice"...)
Ha, at first I thought you were making a joke that WhatsApp could have been aware of the conversations by ... eavesdropping on the government employees who are using WhatsApp.
You don't have to take our word on this -- I wouldn't want you to. As others on this thread have pointed out it's possible enough to tear through our binaries that if we did have a backdoor it would be discovered.
> it's possible enough to tear through our binaries
No, it's not "possible enough" and I strongly suspect you fully realize that.
A backdoor doesn't need to be in the form of an IF statement or something comparably obvious and silly. It can be a weakly seeded PRNG that would allow a "determined party" to brute-force the key exchange in a reasonable time. That would take man-years to fish out of a binary, and that's without considering that you may (be forced to) distribute an altered binary on demand and to specific targets only.
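To make that concrete, here is a toy sketch (Python; nothing to do with WhatsApp's actual code) of how a time-seeded PRNG collapses the search space:

    import random, time

    def gen_key(seed: int) -> int:
        # Deterministic given the seed, however strong the key looks
        return random.Random(seed).getrandbits(128)

    victim_key = gen_key(int(time.time()))  # 128-bit key, tiny seed space

    # An attacker who knows roughly when the key was generated
    # brute-forces the seed, not the key:
    now = int(time.time())
    recovered_seed = next(s for s in range(now - 3600, now + 1)
                          if gen_key(s) == victim_key)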
So in the end all we have - realistically - is in fact just your word. There's no way for you to prove that you are trustworthy by pointing at some random binary. The only option is to distribute reproducible builds from audited open-source code.
Distributing an altered binary to specific targets should be impossible, as WhatsApp doesn't control the distribution; Apple and Google do. They would also have to be complicit for a targeted attack to be feasible.
Having to distribute the same binary to everyone makes it much harder to conceal a backdoor.
Are you sure there's no way for WhatsApp to download and execute some code that would lead to the upload of protected information?
Simple example: I'm sure that WhatsApp's main window is a webview. Imagine that the application inserts some kind of resource (e.g. CSS) from a WhatsApp server. Now the server can serve slightly altered CSS that leaks secret data via custom fonts, etc., and you won't be able to find that unless you're intercepting all traffic and can decrypt it (and apps nowadays love to pin certificates).
This is an imaginary attack; I have no idea whether WhatsApp does that. But HTML is a powerful and dangerous beast, yet it's used a lot in applications for rich media.
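For anyone who hasn't seen the technique, a minimal sketch of CSS-based exfiltration (Python generating the CSS; the target ID and endpoint are invented):

    NORMAL_CSS = "input { font-family: sans-serif; }"

    def css_for(user_id: str) -> str:
        if user_id != "target-1234":  # hypothetical surveillance target
            return NORMAL_CSS
        # Attribute selectors fire a network request as soon as the
        # input's value starts with a given character, leaking it:
        rules = "\n".join(
            f'input[value^="{c}"] {{ background: url(/leak?c={c}); }}'
            for c in "abcdefghijklmnopqrstuvwxyz0123456789")
        return NORMAL_CSS + "\n" + rules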
I agree. The crypto used is industry standard, and the actual process all the way from random number generation to deriving a key is relatively easy to follow.
Active ways to attack the client to make it leak the key are far more worrying - but even an open source project wouldn't protect against that.
Again, it may very well be vanilla TLS, but then you have a bit of code in some obscure corner that repoints random() to an alternative, weaker implementation when certain conditions are met - for example, not being run under a debugger and not having certain popular functions trampolined.
Good luck finding even that without a fine-toothed comb. And that's just getting started with code-flow obfuscation.
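A crude sketch of the kind of conditional weakening I mean (Python; a real implementation would hide this far better than an honest if/else):

    import random, sys

    def strong_token() -> int:
        return random.SystemRandom().getrandbits(128)  # OS entropy

    def weak_token() -> int:
        return random.Random(1234).getrandbits(128)    # fixed seed: predictable

    # Behave honestly whenever someone appears to be watching (here: a
    # Python tracer/debugger attached); otherwise take the weak path.
    make_token = strong_token if sys.gettrace() else weak_token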
Unfortunately, the WhatsApp terms of service say you must not "reverse engineer, alter, modify, create derivative works from, decompile, or extract code from our Services"
Of course, if WhatsApp detects an abnormal or tampered version of the app, they can suspend or disable your account. Security labs that do this sort of reverse engineering presumably do it on test handsets with burner numbers and identities, so it wouldn't affect any personal accounts they use.
Perhaps, I just thought it was an odd thing for the head of WhatsApp to say: You don't have to take our word on this - just do this thing that we prohibit in our terms of service.
This should be completely believable for a company that relies heavily on user and community trust.
That said, @wcathcart: in a community with deep technical expertise like Hacker News, folks do consider how many possible channels and means there are to covertly leak information from applications.
You're correct that in the general case it's likely that tech-savvy users would scan a popular app like yours and find any 'obviously-placed' backdoors. It's an observational and opportunistic approach, akin to the way a passer-by might spot a poorly locked bicycle on a street.
Unfortunately there's an extra level of complexity here: any app may have unusual behaviors that a sophisticated attacker could trigger against individual users to exploit them - and it's really, really hard for the security-conscious among us -- who might never see or meet those users -- to truly trust that your app is doing what you tell us it is, whether that's end-to-end encryption in all situations or anything else.
The reason is that without being able to see how the app is written, verify that it's genuinely the same compiled version running on all devices, and audit the behavior it will have under exceptional circumstances -- external observers just don't know.
I'm not expecting you to make the source freely available, incredible though that would be - I'm just trying to explain the potential disconnect in dialogue you might find with some commenters.
I'm not sure whether this has been answered before, but why do you refuse to open-source the client app? As you say yourself, you try to keep no secrets on the client side, the encryption is supposed to be e2e, the technology is well known and implemented in many alternatives, and basically there seems to be nothing to protect in the app itself.
We now have explicit, written authorization from the head of WhatsApp to reverse engineer ("tear through") the binaries. The ToS only prohibits unauthorized reverse engineering. I agree with you that it was disallowed prior to this comment, but I think it's OK now.
Thanks for your words, but unfortunately I think your hands are tied on this one. Australia was the first pin to fall within the Five Eyes, and I think the rest will soon follow.
It's true in every country. We are a global service, and our policy on backdoors is the same everywhere: we do not have them and we vigorously oppose them.
Sometimes this leads to us being blocked. We were blocked in Brazil, for example, but that block was overturned in the courts.
Thanks! What is your opinion on the rumor that the FSB doesn't have any complaints because they found an unintentional/unknown vulnerability that allows them to read WhatsApp messages? Should WhatsApp users be concerned about that?
We do know that phones and tablets are vulnerable - so we can't rule out a backdoor elsewhere in the stack being used to subvert WhatsApp.
It'd indeed be interesting to know if the FSB had some kind of baseband vulnerability that they'd used willy-nilly to facilitate dragnet surveillance.
I suspect William Binney was right though - blanket surveillance is just expensive and hides your needles in a mountain of hay; you really want high quality in the data you store in order to ease extraction of meaningful information / intelligence.
(that's not to say that aggregate meta data isn't interesting - just that with actual content noise is a problem)
Will not. We are completely opposed to this. Backdoors are a horrible idea and any government who suggests them is proposing weakening the security and privacy of everyone.
Facebook gives the government this, and the government enacts regulations to "protect" Facebook. I'm sure Facebook is salivating at the thought of getting even more access to your sensitive data. Once the backdoor is installed, who knows who'll have access.
As much as I would like to believe all promises coming from corporate execs - Facebook has been caught lying more than enough. So thanks for trying, but I have uninstalled WhatsApp and I'm happy with Threema.
Have you considered Riot (Matrix) or Signal? Both are open source so it's possible to verify claims made on their website, which is a lot less possible with proprietary software like Threema.
And with Matrix apps you can choose to run your own server. Not sure what legal ramifications that has, but practically speaking it allows the possibility of eliminating another potential weakness.
These vulnerabilities would then at least be bespoke, particular to a specific server, preventing mass surveillance - unless, that is, you're talking about potential vulnerabilities in Synapse (the Matrix server software) itself, where even a strong security team wouldn't help that much.
If the source code isn't available for audit by 3rd parties (or yourself), and you can't build it from source, then it was never really "secure" anyway. What lawmakers do or don't say is just noise.
Platforms that rely on trust (in this case, trusting that FB isn't doing bad things) provide very weak guarantees about privacy/security. They could easily include a keylogger in WhatsApp and bypass the e2e encryption, for example, and us regular folk have no way of knowing.
> If the source code isn't available for audit by 3rd parties (or yourself), and you can't build it from source, then it was never really "secure" anyway. What lawmakers do or don't say is just noise.
Careful - you're right that WhatsApp is untrustworthy, but laws that force them to add backdoors could well be applied to open-source code as well. Or make possession of non-backdoored software, open or not, illegal. Or compel OS/hardware manufacturers on which the code runs. The law is a dangerous thing to ignore.
If I can compile audited code from source myself, without any backdoors, then I can be reasonably assured there aren't any backdoors (excluding perhaps hardware level backdoors--but that's why we do the encryption in software).
Implementing hardware backdoors that are opaque to end users is theoretically possible, but more difficult in practice. You could, for example, build a screen/monitor that just captures everything on the screen and forwards it to some other entity, but in practice it's not so easy because of bandwidth limitations, etc. I suppose it would be much easier to create a physical keyboard that phones home over a mobile network, although it would only give you half the conversation.
> If I can compile the code from source myself, without any backdoors, then I can be reasonably assured there aren't any backdoors
Where does that leave the rest of society? Having open source software and hardware is not enough, we also need laws that prohibit mass surveillance and support our efforts to uphold human rights.
Relying on laws leaves a lot of wiggle room for bad actors, slippery slopes, and political opinions changing over time. Laws are based on trust in institutions (do you _really_ trust large governments?).
Laws are probabilistic, whereas math & source code are deterministic. You can verify that computer code does what it says it does. Laws depend on enforcement and complicated judicial systems (based on humans) to interpret and apply them, which means they can effectively change over time, and the goalposts are never stationary.
I agree that laws are not enough; independent verification must be possible. But your right to use secure software, and to audit it without risking spending your life in prison or being killed, is ensured by laws.
This is why moving the goalposts and further normalizing surveillance is extremely dangerous. The rights that you enjoy today are not universal, and can obviously be eradicated in less than a generation.
There is no deterministic, technological solution to the problem that all technological solutions can be banned and its users threatened with draconian punishment.
There is no mathematical escape hatch from society. All we have is a messy assortment of technological mitigations that change the cost of surveillance.
These mitigations work best in combination with constitutional rights that limit what the government of the day can do, triggered by the latest outrage in the news.
Yes, and we should strive to use all tools for a defense in depth of our rights. Laws are the first line of defense, then politics/media, then technology from software to hardware, and finally trust in our fellow humans and ourselves. That way if one (temporarily) fails, we can fall back to the others while repairing the breach.
Auditing a large codebase is also probabilistic. Oversights happen, and there are ways to write code that looks like it does the intended behavior while also doing a second, nefarious thing. See https://www.ioccc.org
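A tame example of the genre (Python): this reads as an ordinary string comparison, but the early return means response time reveals how many leading characters were correct.

    def check_pin(entered: str, actual: str) -> bool:
        if len(entered) != len(actual):
            return False
        for a, b in zip(entered, actual):
            if a != b:
                return False   # early exit: a per-character timing leak
        return True
    # An auditor reading for correctness sees nothing wrong; the second,
    # nefarious behavior only appears once you think about side channels.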
The backdoor could be in the hardware - the logical conclusion of your position is that we should all fabricate our own computers from scratch so that we're sure they're secure.
This is clearly a straw man, no one wants to do this, or is suggesting that we do this. But at some point, even the most hardened OSS advocate has to trust someone (usually the hardware manufacturer). You cannot verify that the device you're on doesn't spy on you, you have to rely on the manufacturer's word that it doesn't. And the manufacturer's suppliers, of course, because the manufacturer is trusting them.
Somewhere along the stack, we all have to draw a line and say "beyond this point, I trust that I am not being spied on". You choose to draw that line at the hardware point. Others choose to draw the line at the software point.
That's a weird way to see things. You are pretty much saying that wearing a bulletproof vest won't save you from the bullet, but laws forbidding killing people somehow magically will. Well, I suppose you could literally say that, and I kind of see where you are coming from, but I think it's somewhat delusional.
Proper law enforcement (mind the "enforcement" part) can make a neighborhood safe enough that nobody tries to shoot you and you won't need to wear a vest. But at the end of the day, if you are messing with bad people in a bad neighborhood, bulletproof vests are real and laws are not.
And when you are talking about government enforcing the laws that are supposed to forbid uncontrollable government agencies to do what they do: well, government kinda is bad people in the bad neighborhood.
The most concerning thing about mass surveillance and mass manipulation isn't the direct impact to some hypothetical enemy of the state (although that is concerning, people in that situation are more or less a lost cause), it's that public discourse and democracy can be pushed around by some small controlling group.
>If I can compile the code from source myself, without any backdoors, then I can be reasonably assured there aren't any backdoors (excluding perhaps hardware level backdoors--but that's why we do the encryption in software).
Not necessarily. Have you ever heard of Ken Thompson's backdoored C compiler?
>When compiling its own binary, the compiler must compile these flaws
>When compiling some other preselected code (login function) it must compile some arbitrary backdoor
>Thus, the compiler works normally - when it compiles a login script or similar, it can create a security backdoor, and when it compiles newer versions of itself in the future, it retains the previous flaws - and the flaws will only exist in the compiler binary so are extremely difficult to detect.
It's not necessarily a viable attack method today, but it's the lesson behind it that's important. Anything can be compromised.
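A toy model of the mechanism, treating "compilation" as a string transform (evil_compile is hypothetical, not any real compiler):

    LOGIN_BACKDOOR = "# injected: also accept the attacker's master password"
    SELF_REPLICATOR = "# injected: logic that re-inserts both payloads"

    def evil_compile(source: str) -> str:
        binary = source  # stand-in for real code generation
        if "def login(" in source:
            binary += "\n" + LOGIN_BACKDOOR    # target gets the hole
        if "def compile(" in source:
            binary += "\n" + SELF_REPLICATOR   # future compilers stay infected
        return binary
    # The compiler's own source can be audited forever and look clean;
    # the flaw lives only in the chain of binaries.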
Source code availability isn’t really a solution to the trust problem. Sure, it allows for an audit, but the practical truth is that few people are qualified to perform those audits and few of them have a sufficient incentive to spend their time doing so.
So you still just invest trust in the maintainer or — if you’re lucky — the third party auditing firm who was paid to review the code.
That you can review the code doesn’t mean that anyone does so. At least not in an exhaustive and relevant way.
Closed source or open, the problem is made even worse now that we live in a Package Manager culture where even the simplest applications adopt dozens of dependencies.
I’m not saying that you should trust Facebook and their closed source applications, just that you’re not really all that safer trusting anyone else just because their source code is available.
I'm not sure I'd surmise the dependency of users to be an entire culture in and of itself. Plus, I feel like this splits hairs; going down the rabbit hole of "well who is checking the open source code" and "well who is checking the person checking the open source code" leads to endless complexity, especially when the move in question is more symbolic than substantive.
If WhatsApp did not use e2e encryption by default (and they didn't), then there was a possibility of governments reading the communications anyway. Does this new announcement really lessen the security and privacy of the users? To me, it sounds like they are making the policy clearer to the public, since the US/UK governments have not explicitly made press releases telling citizens which of their communications will be monitored. While I am very much a proponent of end-to-end encryption for ALL communications, I think this move isn't going to sacrifice privacy that users previously had.
Your assumption about how backdoors work is very limited/wrong, I'm afraid.
You assume that you actually understand the code well enough to identify the backdoor - e.g. as some sort of function that will bypass authentication when some secret hardwired password is provided (to give a dumb example).
However, to give a real-world example of backdoored crypto: it is nothing of the sort. The issue with the potentially backdoored Dual_EC_DRBG pseudorandom number generator had been known since at least 2004, yet the algorithm was standardized by ISO/NIST and used for years, until the potential backdoor was widely publicized following the Snowden leaks and the standard was withdrawn.
Good luck finding something like that just by reading code, unless you are an expert in crypto and mathematics. If you were only auditing whether the code matches the published, supposedly correct, standard (or algorithm description), you would never find it. The backdoored code was working completely fine, exactly as intended. But the weak random number generator allowed an adversary with sufficient computing resources to break the encryption.
Yes, it is theoretically possible to create backdoors that are hard or impossible to detect, if you start 20 years ago and subvert the standards used by the entire industry.
An improvement in security and privacy isn't limited to "make it impossible, even theoretically, for anything bad to ever happen OR you've accomplished nothing". Most back doors aren't inserted by competent NSA-level actors 20 years in advance. Most are "whenever a message passes through, send a copy to this third party". They are inserted by court order for a specific case due to the government becoming interested late in the game due to a specific case. For example, when terrorists start using some secure email service, the government tries to force the service to allow them to snoop on the relevant conversations. Open sourcing the code would allow you (with the help of the community) to detect these sorts of attempts when the product involves end-to-end encryption.
So while having the source and a community auditing changes to that source doesn't prevent every possible attack against your privacy, it prevents almost every one that is plausibly detectable, which is literally as good as you can do.
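In other words, the realistic threat usually amounts to something like this hypothetical one-liner - exactly the kind of change a public diff would make glaring:

    INTERCEPT_QUEUE = []  # stand-in for the third party's mailbox

    def deliver(message: str, mailbox: list) -> None:
        mailbox.append(message)          # normal delivery
        INTERCEPT_QUEUE.append(message)  # the court-ordered extra copy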
Would you trust yourself to find any innocent 'bug' that could lead to a privilege escalation? Decades of vulnerabilities in software, open source or closed, contradict that belief.
Check out Nix. Deterministic source derivations of pretty much anything you might want to build, trivially re-buildable from source by anyone. It takes seconds to install the "Nix Shell" on pretty much any of the modern OSes.
Now, to avoid the "Reflections On Trusting Trust" exploit, building the C compiler toolchain from known-good "root" compiler/linker toolchains, and then comparing the output vs. self-compilation is quite a bit harder.
Define "ordinary person", as plenty of people here have. However, there's very little difference between downloading a reproducible system that compiles everything on your machine and downloading a binary with a known checksum from a perspective of trust.
I used to use Gentoo, and I built my entire OS from source. I'm not extraordinary in any way, I'm just an ordinary person who has a deep interest in software and computers.
I've done two "stage one" builds of Gentoo. I'm not super skilled, but I had a lot of time and reference material. My bet is that folks could, but would not want to. There is a significant time cost.
Also, I'm still using one of those original builds on my laptop - upgraded of course...still mad love for my daily driver.
It's a pretty automated process. I'd estimate 1/10 of all people who can use a computer and install software at all could do it if they wanted to and had sufficient time.
If you have multiple compilers and they all aren't infected in exactly the same way (e.g. one is not infected, or they have different types of infections) then you can detect there's a problem with them.
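That's the idea behind Wheeler's "Diverse Double-Compiling". A sketch, where run_compiler() is a hypothetical stand-in for invoking a real toolchain:

    def ddc_check(compiler_a, compiler_b, compiler_source) -> bool:
        # Build the compiler's own source with two independent toolchains.
        stage1_a = run_compiler(compiler_a, compiler_source)
        stage1_b = run_compiler(compiler_b, compiler_source)
        # If both toolchains are honest (and codegen is deterministic),
        # the stage-1 binaries are functionally identical, so compiling
        # the same source with each must agree bit-for-bit:
        return (run_compiler(stage1_a, compiler_source) ==
                run_compiler(stage1_b, compiler_source))
    # A mismatch means at least one toolchain miscompiles - possibly a
    # trusting-trust infection.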
Just because it's open source doesn't make it trustworthy or bug-free. Are you going to audit tens of thousands of lines of code to find an obscure vulnerability that a state actor has gotten added in a way that's not obvious?
As soon as you’re building something written in, say, Javascript, then any semblance of assurance goes out of the window.
JS is an easy target. What about C or C++? You could audit the code but have you also audited your compiler? What if you used Visual Studio?
Code is an easy target. Can you trust the auditors themselves and in the absence of that, your own ability to detect a vulnerability?
The only bulletproof solution is to not use software.
That said, most of us are not as important or recognisable as we believe we are. The layperson won’t have a good reason to isolate their computer and install an airlock between their aluminium-lined office and the rest of their house.
This is just an updated version of the story about the US government scanning your emails for keywords after 9/11. They don’t even need to actually do it, they just need to say they do and most people will monitor their communications.
> If I can compile audited code from source myself, without any backdoors, then I can be reasonably assured there aren't any backdoors
Many people can compile code from source. Not so many people have the ability to audit code for obscured backdoors. The number of people who are capable of auditing and also have the necessary time is practically nil.
How are you going to get that source code onto your phone, when loading unauthorized code becomes illegal? Who are you going to talk to, when everyone else is using the approved apps? Try to think ahead a little. This arrogant naive thinking is how geeks lose and politicians get it wrong while you're not paying attention and then everyone ends up living with the consequences.
You're neglecting how a group of ignorant lawmakers could simply pass a law that forces secure-boot so your computer can only run proprietary OSes. Then they can force OS makers to only allow app store installs.
That assumes you can get the code. If it's illegal to distribute non-backdoored software, the code might be hard to get, for example Github might be forced to take it down.
Yes, of course. The old argument that Linux is free of backdoors because it is open source. It's such a ridiculous claim. Software systems of that scale are so complex that there is NO way whatsoever to make sure there isn't a backdoor in there. I would go as far as saying that open-source software is by definition more vulnerable to backdoors than closed-source software, exactly because the source code is available and anyone with some credibility can make patches. It is a dreamland for the NSA.
Hacking into closed source code is much more dangerous (politically) for them and also more difficult.
The thing with closed source is that the binaries can be backdoored by the vendor on behalf of the NSA. This is not a new practice. With open-source code this avenue is less likely to work and needs an additional layer of deception. Sure, the NSA may manage to plant vulnerabilities, but the flaws will not be as persistent as when they're planted in cooperation with a vendor.
There's currently a push for reproducible builds which further hardens distros against such attacks.
The WhatsApp binary is sufficiently transparent that someone determined could write their own client. That's enough information for an expert to verify the "messages are end-to-end encrypted and we don't know the key" claim.
The fly in the ointment is that the client might have additional functionality to leak the e2e encryption key. That is far harder to find, but if its use were widespread, it would be found by researchers.
The whole point is moot, though - WhatsApp is designed to (by default) upload cleartext chat logs to Google/Apple servers. Since all chats have 2+ participants, a conversation is only safe from snooping if nobody in the chat has backups enabled, which is unlikely.
Yep, the chat log backup basically renders WhatsApp completely insecure. They also constantly nag you inside the app to enable it. This is how they caught Michael Cohen (and presumably others). Unfortunately Signal does it, too.
Yes, you're given a random password when you enable backups that you need to use when restoring. They don't get uploaded to the cloud. On iOS there's no backup feature at all AFAIK.
At least on iOS, WhatsApp doesn't seem to nag about backups (I don't think I've ever enabled it), but you can do a local backup/restore if you replace your handset. Signal encrypts your chats locally (but not binary blobs/video/images), but they cannot be backed up and restored locally.
Attacking or cooperating with custom soft-keyboard makers is much easier. It may only give you one side of the conversation, but no one seems to care whether those are secure. As long as they do a good job of spell checking.
Having a 'secure' conversation definitely implies trust in Google/Apple, regardless of the Whatsapp binary behaving flawlessly (e2e encryption, solid PRNG, no logs etc.)
They could indeed serve you a different binary from the app-store. "Do you think that's Whatsapp you're using?"
Trust in the OS made by Google/Apple is slightly different from trust in the ability of Google/Apple to keep your data stored on their servers secure against nation states.
I have much more trust in the former than the latter.
Unless you have reproducible builds, having access to the source code can be deceiving.
Audits should start with the actual binary. Only if you can ensure that the binary was built from a specific source and does not contain any other logic (i.e. it was built by a trusted compiler) can you happily skip the decompilation process and analyze that source instead.
> If the source code isn't available for audit by 3rd parties (or yourself), and you can't build it from source, then it was never really "secure" anyway. What lawmakers do or don't say is just noise.
It's still nice to see this spelled out since I've seen many people claim that WhatsApp being closed source is not that problematic. This is definitive confirmation that it cannot be trusted anymore and it's time to start working on the problem (both from a legislative point and by seeking out and moving to technological alternatives).
If the source code is available for audit by 3rd parties, but nobody you trust has actually audited it (depending on your paranoia, this may be only yourself), then it was never really "secure" anyway.
Trust isn't binary. You can trust a company to keep your conversation private for most purposes without it being safe from a government wiretap. We do that whenever we use a phone.
Back in the day, it was illegal to export "good" encryption. There was nothing stopping it from happening technically, just like there is nothing stopping you from stealing from a convenience store, except for the threat of enforcement.
But the threat of enforcement can have a strong chilling effect.
Bad analogy - exporting "good" encryption was illegal, and while for individuals that was basically irrelevant, _companies_ would absolutely follow that law. The analogy is not "you stealing from a convenience store", it's "you running a company that has a known practice of robbing convenience stores". It's so incredibly illegal, you're not going to do it. There is no realistic hypothetical in which that decision would even remotely make sense.
FB's history suggests they have a culture of not blowing the whistle when shady things are going on.
I suspect most people want to keep their $500k+/year jobs instead of sticking their necks out. My friends who work at FAANG are largely mentally checked out, and just do it to collect their monies and retire ASAP. You can't pay rent with good feelings.
I quit my (good paying) job to burn through my savings and start my own company because I want to fix this, so it's not everyone who behaves that way :)
Look into B Corporations and read the last third of the book Lab Rats. You can make your company serve society and its employees right without adopting shitty cultures that breed suicides, pissing in bottles, and burnout - and still make huge profits.
Fortunately for normative social theory and political economy, a lot of research has happened since Adam Smith evangelized self-interest and the free market.
The free market never has existed, and never will. The unfair distribution of scarce items to the people with the most money, the creation of a competitive rather than cooperative substrate for social interaction - these things tear society apart and have always needed to be moderated whenever a market is present.
> Which part of Adam Smith are you paraphrasing exactly?
No one quote in particular; just a recurring theme.
"It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest"
- The Wealth of Nations
"The natural effort of every individual to better his own condition...is so powerful, that it is alone, and without any assistance, not only capable of carrying on the society to wealth and prosperity, but of surmounting a hundred impertinent obstructions with which the folly of human laws too often encumbers its operations."
- The Wealth of Nations
"The statesman who should attempt to direct private people in what manner they ought to employ their capitals, would...assume an authority which could safely be trusted, not only to no single person, but to no council or senate whatever"
- The Wealth of Nations
"In spite of their natural selfishness and rapacity [capitalists] are led by an invisible hand...and thus without intending it, without knowing it, advance the interest of society"
"The man of the most perfect virtue, the man whom we naturally love and revere the most, is he who joins, to the most perfect command of his own original and selfish feelings, the most exquisite sensibility both to the original and sympathetic feelings of others."
Self-interest, yes, but in a system regulated just as much by the church and the aristocracy as by the flow of capital. Pure monetary greed without regard for doing what is right is nothing Smith would ever have advocated.
That's just passing the buck: instead of relying on the goodwill of government employees, you're now relying on the goodwill of Facebook employees. That's still "relying on goodwill" levels of security.
I agree with this, but I think the main problem is the centralization of trust, or the user having to place trust in one or two entities.
Imagine if 10 or 20 different organizations all had access to the source code and could vouch for the checksum of each build.
While it would be nice if we could trust FB, Apple, etc., it would be much better if we didn't have to, and could simply trust others who have less to lose from alienating government officials.
> They could easily include a keylogger in WhatsApp and bypass the e2e encryption, for example, and us regular folk have no way of knowing.
This would be quickly detected by anyone looking at the data the WhatsApp app was sending back to the server (this isn't hard to do on a jailbroken device).
Yes, this is something that literally everyone who reads HN will oppose. Meanwhile do you hear the deafening silence from the average Joe who thinks he has "nothing to hide"?
Don't hate the politicians who keep pushing this. They're just trying not to get fired. And the surest way to get fired in a western country right now is to be seen doing nothing about the terrorism problem and then having terrorist acts committed under your watch. So the politician asks the security forces "what can we do to stop terrorism?" Security says "get us access to messages of terrorism suspects". Seems reasonable, let's go ahead with it.
Yes, we know that it doesn't stop with terrorism suspects. Then law enforcement wants to read the messages of drug kingpins, then drug suspects, then shoplifters, then jaywalkers, then everyone, just to be on the safe side.
But as long as people are told that they need to give up some privacy in exchange for security, they'll take the latter every time.
But by your very comment, it only makes sense to hate the politicians! If all they care about is that they keep their jobs, then isn't trying to get the politicians fired for threatening our privacy the best way to get them to care about our privacy?
Don’t hate the politician for trying to keep their job. In that respect they’re no different from any other person.
Yes, if you can convince politicians that you’re more concerned about the actual damage to your privacy than the hypothetical increase in the probability of a terrorist attack, then go ahead. Do it. Convince them. But my hunch is that most people are far more terrified of dying in a terrorist attack.
From my experience, people in my country rarely think of terrorists. Perhaps these same politicians have manipulated the people into obsessing about this?
It is not a question of whether you have something to hide. Once they have unlimited access like this, what is stopping them from manufacturing "truth" and bending the information to support their biases? Privacy, IMO, is more important.
If I don't have access to your conversation and people know this, but I claim that I know you said something in such a conversation, people can ask how I know it and demand proof.
If your conversations aren't encrypted and I have access to them, and people know this, then if I claim that you said a certain something in such a conversation, people are more likely to just believe me.
Also, unencrypted data is data that can be altered by a third party.
You are absolutely right. I also already assume the government can access virtually anything I am doing anyways. I think we are all naive to believe otherwise.
My questions to people who say they have nothing to hide:
- What are the things you and your significant other do in the privacy of your house?
- Which co-worker do you fantasize about? Be specific about what you'd like to do.
- As a teenager, what legal/illegal drugs did you consume? How often did you pass out, and who were the people you consumed these drugs with?
- What are your political leanings?
- How much do you earn, and where are your savings invested?
- Which illnesses do you currently have, and which exist in your social circle?
- What private information was shared with you by other parties? Are you aware of any wrongdoing/illegal activity by other people? Be specific, esp. about friends and family.
Be aware that I will share this information with anyone in your social group/work/volunteer work whenever I feel like it. I may also share this data with extremist groups on the other side of the spectrum, if helpful. Lastly, I can use this data to fabricate "helpful" information about you if necessary. Thank you.
> Be aware that I will share this information with anyone in your social group/work/volunteer work whenever I feel like it. I may also share this data with extremist groups on the other side of the spectrum, if helpful. Lastly, I can use this data to fabricate "helpful" information about you if necessary
That's not your decision to make, as to whether you have something to hide or not. That will be decided by someone else. It's entirely out of your hands and - depending on where and when you are subject to such scrutiny - may be arbitrarily decided, which is in fact a requirement in all authoritarian systems (the law has to be heavily subjective and arbitrarily enforced so it can be directed against anyone at anytime as so required, such that the population is kept in constant fear).
> Social media platforms based in the U.S. including Facebook and WhatsApp will be forced to share users’ encrypted messages with British police under a new treaty between the two countries, according to a person familiar with the matter.
This sounds as if the platforms are already sharing with the US authorities. So this is about them now sharing it with UK authorities as well.
If people in a position of power can break those laws with impunity then new laws aren't going to change that. The problem is holding the lawbreakers accountable.
It's important, but you need to consider the situation where the government is corrupt. The US is about to impeach the President, and the two factions in government are each accusing the other of corruption, albeit using wildly different criteria.
My point is that laws are only as good as the integrity of the government enforcing them, and can themselves be written in bad faith. The solution is greater participation by the public rather than only technical fixes to the legal code.
If I'm reading that right, they are getting the encrypted messages, not the decrypted ones. Either whoever wrote the article doesn't know the difference, or the UK police have been fucked over by the US...
It's been true for a long time that since intelligence agencies aren't supposed to spy on their own citizens, they agree to spy on each other's citizens instead and subsequently share the data.
"Encrypted" could mean that's how they'll be delivered to the UK.
Or it could define which messages are in question: the set of messages that users expected to be protected by encryption (or that the government could not access because of encryption).
In other words, from the phrasing alone, it's not clear whether "encrypted" describes the state of the messages or the scope of the sharing.
Signal may or may not be OK at the moment, but not in a couple of years (it has all the power to silently push an update with a backdoor). Among all the popular messaging apps, only Telegram is in a position to not cooperate with the Five Eyes.
I'm not sure if I completely agree with the silently part. Doesn't Signal have reproducible builds? In theory, any binary they release can be checked to see if it corresponds to source code.
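The core of such a check is conceptually just a byte-for-byte comparison (sketch with hypothetical paths):

    import hashlib

    def digest(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    same = (digest("build-from-source/Signal.apk") ==
            digest("pulled-from-device/Signal.apk"))
    # In practice the store copy carries a signing block your own build
    # lacks, so real verification compares archive contents minus
    # signatures rather than whole-file hashes.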
Whether anyone is actually doing that is another question. And there is no technical reason app stores couldn't send a special backdoor-ed build to select list of users under surveillance if government forced them to. (They can target sets of users for staged roll-outs, beta programs, etc.) Which defeats the notion that one person watching can detect it for everybody.
On the other hand, it's possible to do stuff with smart phones at the platform level. Whether through vulnerabilities or some platform capability (updates, etc.), it may be possible to have a backdoor-ed binary that looks to the user like it is the regular binary. That's not a capability that Signal has, but it is a capability that might very well exist.
Regardless of all that, for users who have auto-updates enabled (most users), even if Signal can't silently push a backdoor-ed update, they can unilaterally push one. You could wake up tomorrow with a different version that has a backdoor, so even if you can identify backdoor-ed binaries, you have to turn off updates or verify the binary every time you open the app.
Assuming for a moment that we can trust our smart phones at the platform level not to lie to us, the problem of targeted backdoored builds could be mitigated somewhat if the platform implemented Binary Transparency in a clever way:
When you install an app (whether through an app store, or side-loading) the app should state the location online of an append-only log that lists all the releases (with timestamps) for that app. The phone OS could periodically check to see if an upgrade is available, and security researchers could check that the log doesn't contain references to versions which aren't available to the public.
Ideally there should perhaps also be a way for users to anonymously report which version of any app they are using, so that people with particular security concerns could configure their OS to only update an app after, for example, 50% of users have already installed the update.
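A minimal sketch of such a release log as a hash chain (Python; the entries are invented):

    import hashlib, json

    def chain_head(releases):
        head = ""
        for entry in releases:  # e.g. {"version": ..., "sha256": ...}
            blob = head + json.dumps(entry, sort_keys=True)
            head = hashlib.sha256(blob.encode()).hexdigest()
        return head

    head = chain_head([{"version": "1.0", "sha256": "aaa"},
                       {"version": "1.1", "sha256": "bbb"}])
    # Anyone can recompute the head; shipping a secret release to one
    # user would fork the chain, which monitors detect by comparing
    # heads with each other.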
Whenever we talk about web encryption, people make the (valid) point that needing to trust the server to deliver the encryption mechanism greatly reduces the benefits of clientside encryption.
I agree with that analysis, but it's not clear to me why we don't have similar levels of skepticism about auto-updating desktop apps. Signal in particular uses a third-party software repository, so if it wanted to push a malicious update, it wouldn't even need to sneak it past package maintainers.
Package signing protects you against developers with bad personal security practices, because it makes it harder for a third-party to MITM their apps. But it doesn't do anything I can see to protect you from a developer that turns malicious in the future.
Sure, but the Signal source code is open source. I've been compiling it myself for years on Android (and signing the binary with my own key). They can't silently push a binary from the Play Store and overwrite my binary.
They still can for everyone else - probably everyone you talk to over Signal - so it's not much of a protection. Pretty much the only way to avoid that is for Signal to give up that control: don't supply Signal directly, and let others build Signal from source and publish it in package repositories independent of Signal, i.e. F-Droid, OS package managers on Linux, etc. Even if they don't federate, it would at least make their end-to-end encryption sound, not useless like it is now when everyone uses Signal-supplied binaries.
Unfortunately, unless things have changed since I last tried Telegram, it doesn't encrypt chats end-to-end by default, which is a huge problem for any movement. OpSec is really hard, even for those who know what to do. (The operators of Silk Road 1 & 2 were both taken down by a failure in OpSec somewhere.) Security researchers have cast aspersions on MTProto, the protocol Telegram uses. The truly paranoid I know, for whom Signal and Telegram don't go far enough, use Wickr, which chooses to sacrifice usability in favor of security, but they are few and far between.
> Among all the popular messaging apps, only Telegram is in a position to not cooperate with the Five Eyes.
Telegram is subject to US coercion as much as any US company: both major app stores are US-based, and without app store distribution, a product might as well be dead as far as the masses are concerned.
Historical messages, yes. However an app maker can be coerced into adding a hidden user that can participate in chats and receive all future messages decrypted. It doesn't require any special crypto hacks for that to work.
I didn't read that - are you sure about this? I am reading that backdoors are being asked to be created on WhatsApp. Anyway the messages aren't stored.
From the article:
Priti Patel, the U.K.’s home secretary, has previously warned that Facebook’s plan to enable users to send end-to-end encrypted messages would benefit criminals, and called on social media firms to develop “back doors” to give intelligence agencies access to their messaging platforms.
That's the UK's position but it's not clear from the article that some kind of forced backdoor made it into the treaty, just that WhatsApp will be forced to share users’ encrypted messages. But they have already been sharing encrypted messages through other legal means.
Why comment if you didn't read the article? It's in the first paragraph: "Social media platforms based in the U.S. including Facebook and WhatsApp will be forced to share users’ encrypted messages with British police under a new treaty"
And it's misleading -- the article and the article's headline say nothing about adding a backdoor. There's not enough detail in the article to say exactly how this decision will affect WhatsApp.
(The headline on Bloomberg, "Facebook, WhatsApp Will Have to Share Messages With U.K. Police" is more restrained.)
On re-reading the short article, I realize it only says they will be compelled to share "encrypted messages" and "information to support investigations". It never says anything about decrypting the messages. If that is literally all it is, then it is quite misleading.
What use would ciphertext be to GCHQ? I’m fairly certain that “encrypted messages” describes which messages they will have to provide in plaintext, not the format they will come in.
Your interpretation is likely right. But it's interesting to consider the other. Maybe they can get metadata from the ciphertext, such as size. Or maybe they are just interested in other metadata such as time. Or maybe they have some cryptographic attacks, maybe involving extracting a shared key from the peer's phone, or key injection via sim spoofing.
And it's worrying that a large number of readers' interpretation of the article was influenced by the headline. We really have to do better and be vigilantly aware of how media and power influence public opinion, especially during times such as these.
>WhatsApp has no ability to see the content of messages or listen to calls on WhatsApp. That’s because the encryption and decryption of messages sent on WhatsApp occurs entirely on your device. Before a message ever leaves your device, it's secured with a cryptographic lock, and only the recipient has the keys.[...]
>A search warrant issued under the procedures described in the Federal Rules of Criminal Procedure or equivalent state warrant procedures upon a showing of probable cause is required to compel the disclosure of the stored contents of any account, which may include "about" information, profile photos, group information and address book, if available. In the ordinary course of providing our service, WhatsApp does not store messages once they are delivered or transaction logs of such delivered messages, and undelivered messages are deleted from our servers after 30 days. WhatsApp offers end-to-end encryption for our services, which is always activated
Very interested to see what their response is and if their promise holds that they do not have technical access to content but merely to account information.
I'm much more interested in the wording of "in the ordinary course," which I don't think is just a quirk of language; I believe it indicates that certain different measures can take place after being compelled to provide information.
At minimum, it seems the kind of thing a competent lawyer would require.
Even if FB / WhatsApp doesn't currently have such a capability, there's no legal guarantee they couldn't be compelled to add it, thereby putting them in conflict with language that didn't include that clause.
Afaik, case law is murky over whether creating new code qualifies as speech (illegal to compel) or facilitating legally-approved wiretapping (legal and required).
Probably not deliberately, but something can always go wrong, which is probably why they added an additional clause that messages older than 30 days should be deleted.
More interesting is what happens if someone gets a new phone. In that case (if I understood correctly) they might ask for the sender to resend the messages with a different key. If they really are forced to add a backdoor then that's where I would fit in a MITM attack, as it is limited in scale and detectable when used excessively.
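A toy model of that feared rekey interception (hypothetical names throughout; nothing here is WhatsApp's actual code):

    def on_new_device(user: str, real_new_pubkey: bytes, keystore: dict):
        mitm_priv, mitm_pub = make_keypair()  # hypothetical helper
        keystore[user] = mitm_pub  # peers are told this is the new key
        # The server decrypts traffic with mitm_priv, keeps a copy, then
        # re-encrypts to real_new_pubkey and forwards it on. Only an
        # out-of-band safety-number check or key-change warning exposes it.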
WhatsApp is under an obligation to comply with lawful-intercept regulation wherever they operate, just the same as any other communications platform out there. And no, WhatsApp is not some 'small indie company that just happens to not have been noticed'.
Pretending and using obfuscated language to lead the reader to believe otherwise is disingenuous.
The idea that moves like this will "keep us safe" is utterly preposterous; there are a multitude of other ways in which terrorists (or the bogeyman du jour) could communicate - are the UK and US governments going to insist on backdooring IRC, Slack, and face-to-face conversations? Are they going to outlaw encryption libraries?
I truly fear for the future that western governments, in particular the Five Eyes members, are hell-bent on creating. They denounce China and Russia for their human rights records in one breath, and seek to strip us of privacy and personal rights in the next - the hypocrisy is simply staggering.
What's perhaps even more frightening is that so many people believe that moves like this are meant to keep us safe - and will keep us safe.
The keeping safe argument from government is indeed preposterous. As if that was their mission to keep us safe. Why are they allowing our nature to be destroyed in favor of money/economy? This year's heatwaves killed many thousands of people in Europe alone; some estimates are in the tens of thousands. These are real deaths in one year, not deaths from terrorist attacks, and no fucking backdoor will stop them. And what are they doing? Measures that won't make any difference. The cars will keep driving, the factories will continue to produce poisons. Pesticides will still be allowed for the sake of the economy, at the cost of entire insect species being wiped from this planet in just a few decades. The list goes on and on. It is so sad that I don't really want to listen to politicians anymore. They are the only ones who can change things by law, not me. And what law are they coming up with for my safety? A backdoor for WhatsApp and Facebook. We'd better ignore all this shit and try to enjoy our little lives for as long as they last.
I cannot find any source saying that many thousands, if not tens of thousands, died in Europe in 2019 from the heat wave. Can you point me to your information source? I believe you are confusing the 2003 heatwave from over a decade ago with the one in 2019.
The 2019 heat was directly implicated in the deaths of at least 15 people. Five died in France, four in Germany, three in the United Kingdom, two in Spain, and one in Italy. Nine of these were drownings, attributed to people cooling down, and another involved an exhausted farm worker who went unconscious after diving into a pool. The three who died in hot air were aged 72, 80 and 93. Approximately 321 million people were otherwise affected by similar temperatures in the same countries.[1]
The difficulty of attributing deaths to heatwaves is something that comes up in books like Freakonomics, if I am not mistaken. At any rate, from that Wikipedia entry:
> Netherlands reported 400 excess deaths in the week of the heat wave, a figure comparable to those recorded during the 2006 European heat wave.
The Netherlands is considerably smaller than the other nations. Most analyses split these spikes into a portion who would have died in the following days, weeks, or months anyway (lowering statistics later in the year) and a portion who would not have.
This becomes more true the further a country moves away from democracy: in a vibrant popular democracy, people will honestly work for their nation's good and corruption is viewed as evil.
Further away from democracy, leaders are purely self serving, successful corruption is seen as a sign of intelligence and whistle-blowers, far from being heroes for pointing out maleficence, are threatened with execution.
And indeed, institutions eventually shift to working solely for the benefit of themselves and the people in charge. Mostly because any that don't are undermined until they do.
>They are the only ones who can change things by law, not me.
I had this thought yesterday as I read about several prominent politicians in Canada, including the prime minister, actively participating in the climate 'protests' that occurred yesterday. Who were they out there protesting? Themselves? They're the ones with the power to change things. Why were they outside with signs instead of in their offices doing something about it?
Not all politicians have equal power.
In the US, nominations to important positions, and which laws and policies pass, are effectively determined by about 20 GOP Senators.
Those 20 senators (from states like Alaska, Missouri and Arkansas) are answerable to the demands of their constituents, and those people are not asking for climate change policies. It has nothing to do with money, but with the values and culture of the constituents.
It is incredible how HN and the majority of the supposedly 'smart' crowd completely fail to understand the dynamics of how policies and laws are passed in this country, or anywhere else.
It is also incredibly stupid to paint all politicians with the same brush, because if you are an immoral, evil politician, that is exactly the situation you'd want. "All politicians are the same", "all media is the same" is the foundational strategy of bad actors.
It has nothing to do with money? How are the values and culture of those constituents determined, if not through vast sums of money spent on propaganda? Protecting the earth - not shitting where you eat - is an inherently sensible idea. The only way people can align themselves against such an idea is when they are manipulated into believing it is part of a broader conspiracy to ruin their lives. That takes a lot of money.
Also, so what if the bad actors want us to believe that all politicians are the same? What if it were true that they're all the same and evil? Would it still be "incredibly stupid" to accurately assess the state of affairs?
Right? I mean why can't politicians unilaterally change things without convincing the electorate and just fix climate change, the way Kathleen Wynne so successfully managed to in Ontario?
Well, we know these actions look good from a public relations point of view. We need only ask whether they had any other motivations, or if it was purely PR.
> The keeping-us-safe argument from government is indeed preposterous. As if that were their mission. Why are they allowing our natural world to be destroyed in favor of money and the economy?
I don't think it is quite as simple as this (I'll preface this by saying I don't think we should have backdoors, and that I wish we had STRONG encryption everywhere). I think the problem is that different departments have different goals. It is very clear that the CIA's and NSA's jobs would be easier if there were a magical tool that let them, and no one else, backdoor in. The police and FBI would have an easier time doing their jobs if encryption weren't a thing. That's definitely true! The issue is: who is watching the watchmen?

That's why we need checks and balances (specifically by people who understand the tech). These departments are so focused on their goals that they lose track of the fact that introducing backdoors actually creates more work for them (and thus makes their lives harder). But as humans we're always focused more on the task at hand and less on the overarching tasks (we're notoriously bad at dealing with large-scale, multifaceted problems). It all really comes down to these departments thinking "if we had this tool it would be possible that we could have stopped this" - "possible" being the key word, because we've seen that they can't. There's just too much data; you're just adding more hay to the haystack.

The failure really is at the checks-and-balances stage: those watching the watchmen understand neither the motivations nor the consequences, and so let them do as they please. Agencies running the checks and balances are supposed to be suspicious and critical, not friendly. But these agencies aren't getting the funding, nor can they attract people who are tech literate, so there's a feedback loop that is only getting stronger. What I'm trying to say is that there's a long chain, things are broken at many stages, and fixing any single stage would bring significant improvement - solving it at any one stage helps stop the feedback loop.
tl;dr: The intelligence agencies should be smart enough to know that backdoors will backfire on them, but they clearly aren't. There's also a huge failure at the checks-and-balances stage, where these agencies keep getting approval, which creates a feedback loop; without solving this, the problem will continue to grow.
Thanks for this nuanced answer. Unfortunately, "Nuance" will always lose to "Pitchfork", even in supposedly smart and intelligent communities like HN.
There are ZERO people from the pitchfork community who understand the pressure of working to keep a community, region or country safe. If there is a terrorist attack, the pitchfork people have to answer ZERO questions, while the CIA/NSA/police will have to answer "Why didn't you do something?".
It is so easy to sit in their comfortable offices and homes and philosophize about privacy when you have no skin in the game.
I mostly agree, but I wouldn't call HN a pitchfork community. It can definitely get that way at times, but I think nuance is welcomed and generally encouraged here. Definitely the only way to keep it that way is to keep promoting it, so don't get disheartened. There's still hope.
> It is so easy to sit in their comfortable offices and homes and philosophize about privacy when you have no skin in the game.
And I definitely agree with this. But that's also why I made a big point of the fact that a lack of encryption actually gives these agencies more work (if we look at history). The problem is exactly what you note, though: there will always be failures, and we ask why they couldn't stop things that are nearly impossible to stop. Post hoc analysis is always easier than in situ.
No, not by their actions, of course. It happens because of the lack of appropriate action by them. Governments should anticipate future events pointed out by science, and that is exactly what they did not do. All I see is business as usual.
Maybe the definition of politician should change to:
“One who is elected by the common populace to facilitate business at the cost of logical reasoning, human rights and the natural world.”?
While I’m sure one or two exist, I can’t actually think of a politician in the US, UK or Australia who doesn’t fit into the definition somehow. Again, there would be a few good ones, just not enough.
I still remember how one German Islamist terror group just used their webmail provider's draft feature. They never 'sent' anything. There are often quite simple ways to circumvent this kind of thing.
Good, but your e-mail provider can still see it, and can be forced to eavesdrop.
Back in the '90s I used a nym e-mail to receive e-mails anonymously and without traces.
In a few words: e-mails are encrypted with your PGP key and posted to Usenet groups, where you scan all messages and extract only those signed and encrypted with your key.
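For the curious, a minimal sketch of the receiving side of that scheme, assuming the python-gnupg package and your key already in the local keyring; the message pool stands in for whatever group the nym posts to:

```python
# Sketch of nym retrieval: try every message in the public pool against
# your own PGP key; only the ones actually addressed to you will decrypt.
# Scanning everything means observers can't tell which posts were "yours".
import gnupg

gpg = gnupg.GPG()  # uses the local ~/.gnupg keyring

def extract_my_messages(pool):
    mine = []
    for armored in pool:  # each item: one ASCII-armored Usenet post
        result = gpg.decrypt(armored)
        # Decryption simply fails for messages encrypted to other keys.
        if result.ok and result.valid:  # decrypted AND signature verified
            mine.append(str(result))
    return mine
```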
Yep, there are a thousand and one methods of communicating securely. Governments are just using this as an excuse to wiretap popular messaging services for general surveillance.
Unless they get away with making everyone dumber, they will fail.
My Twitter account has been suspended for suspicious activity. Someone with whom I disagreed probably tried logging into my account a few times, which triggered it, and now I cannot regain access unless I provide my phone number. When you sign up, the phone number is optional, but when someone fucks around with you, it becomes mandatory. The fact that the system is designed this way is absurd. This is not limited to Twitter, by the way.
Yeah, only a stupid person could think that backdooring WhatsApp will actually prevent the next 9/11. And that, in my opinion, is the core issue with most politicians: stupidity/tech illiteracy.
I'd love to hear about either a possible alternative government structure in which there are no politicians or a way to attract the smartest people in governments.
When you consider the societal fallout and everything that has transpired since, the most insane part to me is that by its very occurrence, 9/11 itself already prevented the next 9/11. The "next 9/11" was to crash the fourth hijacked plane into a high-value target; the plane on which the passengers fought back, which was crashed in Pennsylvania, WAS the next 9/11. This was a tactic that was apprehended and adapted to before the day was out. It worked three times on one day, once. An update to the mental calculus of common folks was all it really took. If we had successfully prevented it re-shaping our society, it would have never, ever, ever worked again. This newfound understanding of the rules, coupled with a straightforward countermeasure like reinforced cockpit doors, would have closed off that vector of attack entirely.
19 malevolent people acting in 2001 have colored nearly two decades of policy for America. I remember two particular circulating ideas from the time: "they're attacking our way of life" and "they hate us because we are free." The latter was much more divisive and so people spent much more time arguing with each other about it. Meanwhile, whether intentionally on the part of the attackers or not, the first was very effectively accomplished.
Maybe if there had been no 9/11, the agencies charged with protecting Americans would have still been seduced by the ease with which modern technology enables broad surveillance. Maybe hoovering up all the data is too good an opportunity to pass up. Regardless, I yearn for, and still miss the end of history. Our hubris has been rewarded with interesting times.
On the politicians' side I agree; tech illiteracy seems largely correct. As far as the intelligence agencies are concerned, well, I think they know exactly what they are doing. They know that a WhatsApp backdoor doesn't help against the next 9/11, but it still allows a lot of general surveillance, and that is what the agencies are after. Terrorists are just an excuse, IMHO.
Politicians are not at all stupid in general. The problem we have is in their selective listening after we elect them.
If a legislator is not technically up to speed, considerable taxpayer money goes towards hiring people in government to do the research and the explaining. Some high-level advisers may come from organizations with a private agenda, and after a few years of working within the government these experts pop right back to their industry jobs and we don't hear of them anymore.
Ultimately it's the same need in whatever form of government we want – we need people we can trust.
This is why people have advocated for bottom up governance, where local groups make decisions and select rotating representatives to take those decisions as made by the group to larger regional councils, etc. In this way no individual has any real power. This is called democratic confederalism and is in progress in Rojava now but could be done in the US.
https://www.youtube.com/watch?v=LcndZ0nZ0mo
Along with the vote-in process there should be a vote-out process. The public should be able to force a vote on their representative at any time during their term, and equally vote in their replacement.
The intention being that politicians are aware that they must be consistent in their words and actions throughout their term, otherwise they will lose their seat.
Some months before the last big bailout legislation, my state had a US Senate candidate who was new to our political scene. He appeared as a local man with a law degree from our state university, and he spoke about how working people struggled with unemployment, delinquencies, etc., presenting himself as someone who knew middle-class issues and could change the ways of Washington. PBS featured him, and I still remember the interview they aired from his kitchen in a middle-class home. I am among the people who voted him in.
A few years after this election I checked his voting record by chance, and I realized that he had voted almost always in favor of the bailout system. I wouldn't have known this just by reading the news or watching cable. In the next election cycle he won handily, this time supported by organizations with cash to blanket our news with favorable lines for him. That's how life works, I guess.
Everyone bitched at me for not voting in the last election. I asked every one of them who they voted for as our state's railroad commissioner. They all said "oh, I don't know, I just voted all blue". I asked them if they knew what our railroad commissioner did; they did not. So I proceeded to enlighten them on how the RR commissioner controls everything around our state's oil fund, and that they had all just voted for the equivalent of Donald Trump to manage our oil-rich state.
Of course he did not win, because the other candidate was much better qualified and rural voters knew him en masse (he's the one who signs checks for all the citizen stewards of oil fields), but I found it hilarious that people would flame me for not voting for a figurehead (president) but be a-ok voting in some no-name hack to manage our schools because of big-money advertising in elections.
> I'd love to hear about either a possible alternative government structure in which there are no politicians or a way to attract the smartest people in governments.
In all seriousness, this is the aim of all historical anarchist movements. Despite the propagandization of the term "anarchism", that philosophy has a long history of attempts, writers and thinkers, and it has more often than not "failed" only when a powerful state entity violently disbanded the effort or killed prominent leaders. In other cases anarchism has not failed at all, but has existed in tribal communities in different ways long before European thinkers wrote on the subject.
It's the same tactic that former CIA Director David Petraeus used to send messages to his mistress. It's been around for a while, so investigators look for it.
Or the Paris attacks, which were orchestrated on the PlayStation Network and via unencrypted SMS... by people who were already on a watch list. The problem here is not encryption.
The idea that using a webmail provider's draft feature provides security might have been true a long time ago--I don't know--but it's really stupid to think it does so nowadays.
The current situation already looks like it has a whole ton of potential legal tripping points. From Wikipedia:
> As of 2009, non-military cryptography exports from the U.S. are controlled by the Department of Commerce's Bureau of Industry and Security. Some restrictions still exist, even for mass market products, particularly with regard to export to "rogue states" and terrorist organizations.
> Militarized encryption equipment, TEMPEST-approved electronics, custom cryptographic software, and even cryptographic consulting services still require an export license.
> Furthermore, encryption registration with the BIS is required for the export of "mass market encryption commodities, software and components with encryption exceeding 64 bits" (75 FR 36494). In addition, other items require a one-time review by, or notification to, BIS prior to export to most countries. For instance, the BIS must be notified before open-source cryptographic software is made publicly available on the Internet, though no review is required. Export regulations have been relaxed from pre-1996 standards, but are still complex. Other countries, notably those participating in the Wassenaar Arrangement, have similar restrictions.
The fight for power has this unavoidable conclusion. The saying that “knowledge is power” is more true than I think most people realize. He who knows the most holds all the power. It doesn’t matter who is in power or what their beliefs are - they would eventually have to resort to these tactics or risk losing their foothold. Those who don’t or can’t afford to will be overrun.
We dislike it even though we would be forced to do the same thing in their position. We dislike it because we aren’t the ones in power. Again, as it goes in the jungle, life isn’t always fair.
> The idea that moves like this will "keep us safe" is utterly preposterous
The "terrorism" rationalization provided for this surveillance has nothing to do with reality. The United States government is not in any way involved in fighting the source of jihadist terrorism, and the state responsible for 9/11, Saudi Arabia. They are allies, and Trump's cabinet is now attempting to arm Saudi Arabia with nuclear weapons. The actual intent of this law is to surveil whistle-blowers and journalists. WhatsApp messages have already been used in the case against IRS whistle-blower John Fry, who exposed the Donald Trump-Michael Cohen hush payments.
It is reasonable for governments to want to be able to wiretap communications. They can wiretap telephones, including mobile/cell phones, for example, as this capability is built into those systems. There is no problem with that, and it is actually welcome.
Those apps that provide end-to-end encryption are a problem and they are working on solving it.
Claiming that they should not be able to wiretap lawfully (or even otherwise) is a rather naive view of the world and lawful wiretapping does not imply dystopia.
> Those apps that provide end-to-end encryption are a problem
Are they though? There will always be a means to communicate without surveillance. Backdooring encryption is fundamentally incompatible with privacy, and will be abused - at scale.
For any real cases, the security services have a multitude of other ways of getting the information they need regardless - for example, they could gain physical access to the target's phone and backdoor/bug it, or swap it out for a backdoored/bugged one.
The real problem with backdooring or outlawing encryption is that it lowers the bar to entry for governments - it makes it too easy, and such sweeping powers will be abused.
> Backdooring encryption is fundamentally incompatible with privacy,
That's a very bold claim.
Lawful wiretapping exists because it is accepted (and reasonable) that privacy should stop in specific circumstances, e.g. a criminal investigation.
As I wrote before seeing this in black or white is naive. Wiretapping is useful for society. The key is to have proper oversight.
By the way, to lawfully wiretap a phone the authorities only need the phone number: The operator will then duplicate all traffic transparently and on the fly. Bugging a phone really means illegal/covert operations these days...
You don't understand. Encryption is necessary because on the internet, there is no distance. Any web service today must be built with the knowledge that everyone, everywhere is attacking it all the time. The only defense against that kind of attack is encryption that nobody can break.
When a weakness is introduced into that defense, even if it's a secret key held only by trusted governments and only (they promise) to be used in emergencies, that key is a method of attack that anyone, anywhere can use against everyone. The only defense is that the secret key remain secret, and there are an endless number of ways it could be compromised, ranging from human error to phishing to plain old brute force. A single key for all WhatsApp conversations is a valuable enough prize that criminals all over the world would invest serious money in custom hardware to crack it.
Currently, encryption schemes like WhatsApp's rely on generating new keys for every message, making the value of attacking any one message limited. Without this defense, WhatsApp will be cracked sooner or later. And once someone publicly reveals that key, every single message sent before that point becomes compromised, maybe even publicly exposed. Would you be okay with that? Exposing your private message history to the internet because governments wanted to snoop?
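To make the per-message-key point concrete, here is a toy symmetric-ratchet sketch, loosely in the spirit of the Signal protocol WhatsApp builds on - not WhatsApp's actual code, and the starting chain key would really come from the initial key exchange:

```python
# Toy symmetric KDF ratchet: each message gets a fresh key derived from
# a rolling chain key, and the chain key is then replaced. Leaking
# today's chain key reveals nothing about earlier messages' keys.
import hmac, hashlib

def ratchet(chain_key: bytes):
    next_chain = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    message_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return next_chain, message_key

ck = bytes(32)  # placeholder; really the output of the initial handshake
for i in range(3):
    ck, mk = ratchet(ck)
    print(f"message {i} key: {mk.hex()[:16]}...")  # used once, then discarded
```

Compromising the chain key at step N tells an attacker nothing about the message keys used before step N - which is exactly the property a single master decryption key destroys.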
No, you don't. This is a fundamental rule of how encryption works. You can't design around it.
If there is a key which decrypts everything, then there is a risk it can be stolen or guessed and used to decrypt anyone's messages. This risk does not exist in current, properly designed systems.
Any government which wishes to add a backdoor to an E2E-encrypted messaging system must understand that it will be HEAVILY undermining the security and privacy of all users of the service that has the backdoor.
I think you are the one not understanding at this point. This has nothing to do with "how encryption works".
WhatsApp decided to go for E2E encryption for commercial and marketing purposes, following all those privacy scandals.
They did not have to. They could have gone with point-to-point (client-to-server) encryption, i.e. their servers could have stored messages in cleartext. That way the authorities would not have needed to ask for any backdoor. As already mentioned, this is how cellular operators work (and that is obviously something governments wanted).
I think we might see legislation brought in in the future in order to force this.
Yes, but if you make it too easy, oversight fails or is worked around with legalese, and the powers are abused at scale.
> Bugging a phone really means illegal/covert operations these days
My view here is that it should be difficult - the security services should only be able to intercept someone's communications with a legally obtained warrant (the oversight you mentioned); then the powers are far more likely to be used only where there is a credible threat.
Lawful wiretapping does not imply dystopia, but the actual practice sure seems to. This is very likely to be abused by enforcement agencies and politicians.
> Claiming that they should not be able to wiretap lawfully (or even otherwise) is a rather naive view of the world and lawful wiretapping does not imply dystopia.
I hold the opposite view: the idea that the evolutionary trajectory of the internet will continue to allow "wiretapping" (and, for that matter, the existence of a capricious state tout court) is the naive view.
The state is looking increasingly obsolete every day, and moves like this are significant leaps forward in solidifying that conclusion.
AES is far and away the most heavily scrutinized encryption algorithm in history. Of course that doesn't make it flawless, but the level of genius that the authors would need to hide a backdoor in it for all these years boggles the mind.
The implementation of AES in popular CPUs? Yeah, who knows.
Yeah, I would not even go as far as AES in your CPU. Just... any part of your CPU, or your motherboard, or your GPU. RISC-V is still not suitable for desktop, or at least not commercially available AFAIK, and POWER9 is too expensive. I want open software and open hardware!
> They denounce China and Russia for their human rights records in one breath, and seek to strip us of privacy and personal rights in the next - the hypocrisy is simply staggering.
Politicians, generally, are not hypocrites. That implies the guise of good faith. Do you really think the biggest beneficiaries of terrorism are likely to be honest with you? Humans just like being told what to do and what to fear and whom to be mad at, and politicians make eager use of this role.
From the government's perspective, they are to keep "us" safe. It's easier to do that if no one's safe from us. :)
Granted, that's a little over-ominous because the government's mission statement is to keep its people safe, and it's also elected by its people. Either of these two facts changing is the way bigger danger; backdooring centralized services is stuff that happens in the meantime either way.
If you look at the way elections work at a micro scale in the US, you will begin to lose confidence in the assumption that they are elected by the people. Political machines have huge influence in controlling who it's possible to vote for, and swaying low-information voters. That makes sure they have a lockdown on decision-making, even if they allow a few mavericks through the cracks for the sake of plausible deniability.
Even if this or that individual politician gets voted out, or even ten of them, it won't stop the machine's influence. They're still the ones who decide who the replacements can be chosen from.
I've said this before, but you can't outlaw the maths. There's nothing stopping anyone from rolling their own encryption using well-documented algorithms. You'd need to literally outlaw Wikipedia. Hell, even one-time pads could still be used by truly motivated bad actors who wanted to communicate securely.
They can make it so that if they spot you using it (for anything other than communication with approved banks and retailers and such, maybe), that's instantly something they can charge you with. Then they go after a few people who are spotted using it on the Internet via traffic sniffing, meaning the only folks left using it are cranks and actual bad guys.
A total tangent, but the words chosen in this comment and how they relate to the logic behind the US War on Drugs are eerie - down to "sniffing" out paraphernalia, "trafficking" illegal goods, and end users being "cranks" and bad guys...
That some may get around it isn't the point. Most won't risk charges to share recipes with Aunt Edna or family photos with Grandpa. Some will keep using it and stay under the radar, but any time someone is caught using it, even if it wasn't part of a crime, it'll be added to the list of charges against them. It'll be dead for common use, and risky and annoying for those who keep at it.
But if you want to communicate with somebody else in an encrypted way, you can likely do so with effort that doesn't even approach that of a hobby. Setting up a system requires more effort, but these singular cases are hard enough to detect and crack that law enforcement agencies would never manage it. You might not even need encryption, because you could communicate in ways that are just obfuscated enough that nobody's going to check - e.g. writing with blocks on the ground in Minecraft or something.
If the goal is to catch malicious people that are trying to hide their communications, then outlawing encryption won't work. But it will give the government a good excuse to spy on people.
"I've said this before, but you can't outlaw the maths."
I don't think this is the right analogy - instead, I think a better statement would be "you can't outlaw random numbers".
A random number and the ciphertext output of a secure encryption algorithm should be indistinguishable.
I don't think I am being naive to think that even in our wildest dystopian nightmares there is no real path from (current jurisprudence in Five Eyes jurisdictions) to (random numbers being illegal).
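The one-time pad makes this vivid: its ciphertext literally is a uniformly random string. A stdlib-only toy, nothing more:

```python
# With a one-time pad, plaintext XOR'd under a truly random pad is
# itself uniformly random, so "outlawing ciphertext" would amount to
# outlawing random bytes.
import secrets

def otp_encrypt(plaintext: bytes):
    pad = secrets.token_bytes(len(plaintext))  # used once, never reused
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return pad, ciphertext

pad, ct = otp_encrypt(b"meet at dawn")
noise = secrets.token_bytes(len(ct))
print(ct.hex())     # without the pad, these two lines are
print(noise.hex())  # statistically indistinguishable
```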
Not possible in the US unless the Supreme Court stops giving a crap about the first amendment. Incredibly unlikely no matter how unpopular free speech might become.
If Congress and the president decide they want to increase the SC to 99 justices and add an additional 90 of their choosing, then that's when the US Constitution officially dies, but it can be done according to the law.
The attack on the legitimacy of the courts will fail this time around, just as it always has. The justices rule mostly with integrity and they’re mostly respected and protected.
It's just hyperbole for packing the court: allowing one president to appoint >50% of the justices so that the court always rules in their favor, instead of waiting for seats to be vacated and refilled.
You have to kill some of them to put the others in place. If you add 90 slots you can just fill the new 90 slots with your guys. That’s the theory of his statement.
It's easier to have public scrutiny on 9 justices than 99. There isn't enough time for the media or for people's memories to constantly remember to be angry at a few dozen people.
In many cases (in my country for sure, and I bet this is common elsewhere), members of the legislative branch are approached by people from the intelligence community, who hand them the drafts for the stuff they 'need'.
And you can bet those people do understand encryption.
IRC? If that were the case, and open source servers like UnrealIRCd were somehow backdoored in a way the community couldn't detect, you're still free to implement your own backdoor-free server and client if you want. The spec is freely and openly available as RFC1459.
Disagree on hypocrisy. US still affords significant freedoms and largely respects human rights. Whether your communications can be decrypted or intercepted on networks that are government regulated anyway is not hypocritical.
Residents of the US are still free to use whatever mathematical algorithm they want to encrypt their comms. Transporting OTPs across physical borders is trivial, and not technically illegal if I'm not mistaken. Strong encryption is open source, as you've pointed out. There's no law against using those open-source libraries, nor any discussion of trying to censor/outlaw them, AFAIK.
Policing the airwaves and internet pipes hardly qualifies as some major abuse of human rights, particularly when the best that the Intercept/Snowden crowd can come up with regarding things like "Parallel Reconstruction" is "abuse" of "surveillance power" to catch, e.g., methamphetamine traffickers [1].
> Policing the airwaves and internet pipes hardly qualifies as some major abuse of human rights
The leaders of today are not the same as those of tomorrow - sweeping powers to invade anyone's privacy and communications could easily be used for nefarious purposes. I don't trust our current leaders with such powers, much less potentially worse ones.
> Residents of the US are still free to use whatever mathematical algorithm they want to encrypt their comms. Transporting OTPs across physical borders is trivial, and not technically illegal if I'm not mistaken. Strong encryption is open source, as you've pointed out. There's no law against using those open-source libraries, nor any discussion of trying to censor/outlaw them, AFAIK.
Do you really think things will stay this way?
It seems to me that TFA is just the next step on a slow but steady march towards an authoritarian nightmare - once they've worn us down some more, there will be serious moves against encryption (it's happened before, and politicians have been bringing it up a lot over the past 10 years or so).
While I don't agree that your argument was high-quality:
Paul Graham:
I think it's ok to use the up and down arrows to express agreement. Obviously the uparrows aren't only for applauding politeness, so it seems reasonable that the downarrows aren't only for booing rudeness.
It only becomes abuse when people resort to karma bombing: downvoting a lot of comments by one user without reading them in order to subtract maximum karma. Fortunately we now have several levels of software to protect against that.
Thanks for clarifying and stating your opinion about the quality of my comment. However, it seems a bit too broad-stroke to use downvoting both for the (lack of) quality of a comment and to express disagreement.
I do not personally have downmod capabilities, but I don't think it is necessarily too broad: If you interpret it as "People shouldn't read this", it seems reasonable.
HN doesn't want to encourage discourse, it wants to encourage worthwhile discourse, and the distinction is significant. Consider "people shouldn't read this" as short for "Having made the mistake of wasting my time reading this, I will flag it to help others to not make the same mistake."
Revise that slightly to 'it is a waste of time to read this', maybe.
I downvote quite rarely on HN over disagreeing with someone. Usually it is when I don't feel the reply adds any value, and is actually negative for the discourse.
That is, e.g., it doesn't teach me anything about the opposing position, or is argumentative without any substance, distracting from comments that are more constructive.
Of course other people use different judgement. At the same time, HN doesn't hide comments to a great extent. Even 'dead' comments are optionally visible (with the 'showdead' setting) and quite a few of us read HN with that on. It's very rare for downvotes to silence people here who aren't actively disruptive.
Couple that with first enabling downvotes when people hit a certain karma threshold, and various other limitations, and HN is free of a lot of the downvote problems of other places.
That to me makes it less of an issue if people downvote to signal disapproval here.
Often initial downvotes will be countered when people feel a comment has been downvoted too much, as well.
Assuming that US citizens are safe, that doesn't apply to citizens of other countries. So even if the US and the UK respect their own citizens' rights (Snowden showed they don't), they won't respect other people's rights. And then surveillance becomes a tool against countries and policies the US and UK don't agree with, regardless of whether these are a genuine threat. So yeah, it is kind of a big problem.
> Disagree on hypocrisy. US still affords significant freedoms and largely respects human rights.
If you are a US citizen, maybe; for the rest of the world, definitely not.
Without being a lawyer, I'm pretty sure random drone strikes on civilians in Pakistan, torture in Guantanamo, or intercepting the entire world's communications are not examples of "respect for human rights".
> Without being a lawyer, I'm pretty sure random drone strikes on civilians in Pakistan, torture in Guantanamo, or intercepting the entire world's communications are not examples of "respect for human rights"
This is a good point, I think. The US has an appalling record on human rights (aside from your examples, arming terrorists and overthrowing democratically elected governments spring to mind) - as long as we're talking about the rights of non-Americans.
Some of those individuals were guilty of little more than political activism but experienced real harm (e.g. deportation) thanks to surveillance overreach.
Disagree. The end goal has always been to make it illegal for civilians to use encryption in such a way that the government cannot intercept their communications. That's where we will end up.
The US, by their own admission, is "killing people based on metadata" [0].
Which in practice is done by using machine learning [1] on huge data sets gathered with the global surveillance enabled through Five Eyes.
Because the army of humans that could manually sort through those zettabytes of data has yet to be cloned. All of that ends up in the fancy-sounding "disposition matrix" [2], aka the USG's kill list. It's just systems upon systems doing their thing, and nobody is directly responsible or accountable for anything that ends up happening, like when yet another 30 Afghan farmers get "splatted" by accident [3].
Considering this has been going on for close to two decades, and the US has a very convenient way of going about the casualty statistics [4], I guess these Afghan farmers are just another rounding error in the "war on terror". Figures, because before that they were mostly considered biometric cattle [5] and lab rats for fantasies about "full-spectrum surveillance" [6].
Note: Under Trump, the USG has even stopped releasing its shined-up drone statistics, so it's pretty much impossible to know the full scale of what's still going on to this day.
>> while the U.S. won’t be able to use information obtained from British firms in any cases carrying the death penalty.
Lol. So I guess it cannot be used for actual terrorism. They can use it up until an act of terrorism occurs; then, once murder charges are on the table, they cannot use the same data source to catch the perpetrators? The US isn't going to waive the death penalty in every terror case. That isn't politically possible.
We have to just admit that US laws are increasingly incompatible with those of the rest of the world. Treaties are getting harder and harder to reconcile. The US needs to back off its departure from international norms.
Nowhere in the article is a backdoor mentioned - only the sharing of data that is encrypted.
The US and UK governments can't just force FB to add a backdoor. It would require new legislation. FB must play ball for that to happen.
Since WhatsApp has forward secrecy and end-to-end encryption, access to the encrypted data is not easy to make use of. There might be some useful metadata, though.
There's also value in storing and archiving encrypted data because at some point in the future technology may enable them to decrypt it. Certain communications could prove quite valuable, even if old.
There are absolutely zero ways for WhatsApp and others to comply unless there is a backdoor. If the Senate votes for this treaty, then it has the exact effect of legislation.
It would just be access to encrypted messages. Unless law enforcement has access to the phone used to send a message, encrypted messages are useless.
I am weirdly... giddy about this development. The harder governments publicly try to force companies to, effectively, mandate backdoors, the more the public will be aware of it. An added benefit is that FB will lose some market share.
The sucky part is... my mom loves WhatsApp. She was able to use it without any issues. There are few alternatives she could use so easily.
All that said, I wonder: what is the breaking point for people, surveillance-wise? I have already been very wrong about that tolerance...
I'm a bit more pessimistic about there being a backlash.
I've also been surprised by how capable a properly motivated non-techie can be. I've seen a few grandmothers who live far away purchase and set up a webcam and learn software I find confusing, all by themselves.
I don't know. Even among my privacy-conscious friends there's hardly any talk about this sort of thing.
Snowden was in 2013. I think all that happened is a few people were briefly horrified, but in no time it was back to business as usual.
You can even point to China as an example of how bad things can get when the state can read everything, but people just don't seem to make the connection.
Unless your mom is planning to involve herself in illegal activities I don't see a problem. Most electronic communication channels are compromised anyway. If you don't want eavesdroppers, don't use electronics.
> Unless your mom is planning to involve herself in illegal activities I don't see a problem.
I assume you'd be happy to post all your credit card receipts and emails for all of us to see then. You're not doing anything illegal, right? That would be the only reason you don't want those things to be visible.
Sorry, I was too brief; I explained it further in another comment. My point was that you should presume all communications channels are transparent to nation states. I was not saying that it's okay to have a surveillance civilization. My point was that we have one, and you either use the communication fabric that exists or you don't.
What exactly leads you to believe that posting your credit card number on hacker news is somehow equivalent to not being outraged by the U.S. sharing _encrypted_ messages of suspected terrorists with the UK? Let’s not pretend the world is so black and white.
I was arguing against the idea of "if you're not doing anything wrong you have nothing to hide".
It's a bad attitude to have, because allowing the government to have that power at all is a problem, even when it's aimed at terrorists and murderers and pedophiles. We don't know whether people are those things until we investigate them, which means the government gets broad powers to investigate people looking for those kinds of crimes, and maybe to find other people they want to get for other reasons.
I used to investigate computer crime. Yeah, my job would have been a lot easier if I had unlimited access to anyone's email. But instead I had to investigate and build a case without privileged access. It was harder, but I was glad that the system couldn't be abused.
Why should governments or law enforcement have privileged access?
Also because "good" democratic governments that respect human rights and personal freedoms deteriorate to oppressive tyrannies all the time. What's to say that your country won't?
Who'll protect you when they come knocking on your door to talk about that thing you posted about somewhere online, which used to be quite legal and no problem, but now is super illegal for some oppressive reason or another?
First they came for the socialists, and I did not speak out—
Because I was not a socialist.
Then they came for the trade unionists, and I did not speak out—
Because I was not a trade unionist.
Then they came for the Jews, and I did not speak out—
Because I was not a Jew.
Then they came for me—and there was no one left to speak for me.
To the several downvoters: My point was not that it's okay to have surveillance.
My point was that there is no uncompromised electronic communications platform. If you wish to communicate using anything with room for a man-in-the-middle attack, it is very hard to prove that the medium is NOT compromised by a nation state or another third party.
You can communicate fairly securely without the messages being intercepted by black-hat non-nation-state third parties. I don't think anyone can guarantee security against nation states.
So if you want to communicate, you either communicate or you don't; requiring a 100% tamperproof platform is really hard.
These systems were in place in the 1960s. After that, the modern telecoms substrate was developed in close collaboration with nation-state defence parties. As a non-expert, I see no reason to think any platform is secure from nation-state eavesdropping.
It is an interesting perspective for me, so I feel obligated to respond. I do not think you are wrong about the compromised part. I do think you are wrong in treating that as a desired state. The argument that innocents have nothing to hide is tired; 'God will sort them out' follows closely.
Without more details I’m pretty skeptical, currently WhatsApp metadata gets turned over in warrant requests but no message content (metadata includes the times messages were sent and who they were sent from/to).
It’d be a big deal to force FB to modify WhatsApp for message content interception and I think it’d be challenged in court.
I would expect such an accord between countries to enable data sharing, not force companies to record more data. So I'm sceptical of the article's claim for now.
(disclaimer, speculation) The Times article says this has been in progress for four years, so I suspect that this is the culmination of https://wiki.openrightsgroup.org/wiki/UK-US_Bilateral_Agreem.... Which was originally planned to be encryption-neutral, but maybe things have changed in the past two years.
Given that Priti Patel is involved it probably is nothing like what's being described. She's exactly the sort of politician who thinks that mathematics can be redefined by a policy statement on TV.
Why is this article Facebook-focused? Is this click-bait? Telegram and iMessage also support unbreakable end-to-end encryption.
Also, if we're talking backdoors, why not just force Apple to unlock devices so that police can just read the messages in WhatsApp directly? That's something I could get behind as it means the intelligence community has to at least have the physical device on hand.
Do people have actual links to this treaty they're about to sign and what the exact wording is? I remember there being this 5-nation meeting in the summer with Canada and Australia that was specifically looking to lobby Facebook to open WhatsApp. Haven't seen anything new since.
But it is. I installed F-Droid by visiting a website, so it's definitely possible.
Maybe you're talking about Apple specifically? AFAIK there's no way to do that on iOS (though I read a related article about it recently, I think it was here), so if you want that, petition Apple. There's no law preventing it, given that Android devices can do it; it's just an Apple policy, and you can always install apps through Xcode if you want to.
When the article says "share users' encrypted messages", it's not clear whether it's referring to the messages in their encrypted state, or to messages that have been decrypted before sharing.
Does anyone have any advice for what platform might be best to migrate to? I'm not overly concerned with group E2E, but it is a nice-to-have. Telegram and Discord seem like two of the most practical options, Keybase and Matrix seem like two of the most ideal from a security standpoint. I wonder what offers the best cross-section of features and user experience.
Yeah, I am aware of that. E2E isn't necessarily a hard requirement, but E2E with backdoors actually feels worse to me than not having E2E to begin with (I do wish Telegram were less misleading about this issue, however).
Going by the original Sunday Times (UK publication) article[0], it looks like this headline relates to _upcoming_ data sharing legislation which UK Home Secretary Priti Patel[1], who took that office this July, is expected to sign in October 2019.
It's worth putting scrutiny both on the UK's desire to create these backdoors in social media apps (a path that many would argue could lead to eventual exploitation and abuse of that backdoor by unintended parties), and also WhatsApp's ability to deny the existence of such backdoors, now or in future.
A backdoor that will be exploited by criminal third-parties. Even if you (against all sense and reason) trust the US and UK governments, this is a move that will expose your data to criminals.
Not necessarily. The simplest implementation is to use a government public key to encrypt the session key and include that encrypted session key in the message. You can implement it with open-source code, yet only the government, which possesses the corresponding private key, can decrypt the message.
I think it would be secure enough. If a private company can keep its CA key secret, a government can do that too: a Hardware Security Module stored in a defended military facility, guarded by soldiers, available for use only with the authentication of a few key politicians.
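For illustration only, a minimal sketch of that construction, assuming the PyNaCl package; this is the generic "escrow the session key to a second public key" idea, not any real product's design, and every name here is hypothetical:

```python
# Key-escrow sketch: each message's fresh symmetric session key is
# additionally encrypted to a government-held public key, so an
# auditable open-source client and escrowed decryption can coexist.
import nacl.utils
from nacl.public import PrivateKey, SealedBox
from nacl.secret import SecretBox

gov_private = PrivateKey.generate()   # would live in the guarded HSM
gov_public = gov_private.public_key   # baked into every client build

def send_message(plaintext: bytes):
    session_key = nacl.utils.random(SecretBox.KEY_SIZE)  # fresh per message
    ciphertext = SecretBox(session_key).encrypt(plaintext)
    escrow_blob = SealedBox(gov_public).encrypt(session_key)
    # The recipient gets session_key over the normal E2E channel;
    # the escrow holder can recover it from escrow_blob alone.
    return ciphertext, escrow_blob

ct, blob = send_message(b"hello")
recovered_key = SealedBox(gov_private).decrypt(blob)
print(SecretBox(recovered_key).decrypt(ct))   # b'hello'
```

The whole debate, of course, is whether that one private key can really be kept as safe as claimed.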
As with a lot of other things, it happened step by step, with the main catalyst being 9/11. We somehow accepted that it's an acceptable trade-off to have no privacy in the name of stopping "the terrorists".
"You're either with us, or a terrorist" Bush once said. And now we're at a stage where it's acceptable to say things like "if you've got nothing to hide, why are you so worried about this stuff?".
Ok, I'm starting to think this article is click-bait. There is no need for a backdoor with WhatsApp, because messages are stored on the device. If police have the device on hand, they should be able to see the messages (assuming they can unlock it).
> We disclose account records solely in accordance with our terms of service and applicable law. A Mutual Legal Assistance Treaty request or letter rogatory may be required to compel the disclosure of the contents of an account. Further information can be found here.
You can and should block your keyboard app's access to the network using a firewall. However, this requires root access, and I've seen people jump in at this point to say you absolutely should not have root access to your own phone, which I find pretty ridiculous and unacceptable.
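Roughly what that looks like on a rooted Android device, assuming iptables with the "owner" match is available (the same mechanism firewall apps such as AFWall+ use); the package name is a made-up example:

```python
# Drop all outbound traffic from one app's Linux UID. Requires root.
import subprocess

def block_app_network(package: str):
    # Each Android app runs under its own UID; look it up from its data dir.
    uid = subprocess.check_output(
        ["stat", "-c", "%u", f"/data/data/{package}"], text=True).strip()
    subprocess.check_call(
        ["iptables", "-A", "OUTPUT", "-m", "owner",
         "--uid-owner", uid, "-j", "DROP"])

block_app_network("com.example.keyboard")  # hypothetical keyboard app
```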
Yes, this is exactly what I meant when I said that you either do what the government says, or you go down. This is why I do not believe privacy statements from Apple ("we keep your data safe, 100% secure", etc.), for example. If they really had no backdoors or any other way for the government to access your data, then the government would go after Apple. If the government were to request your data, they would hand it over, right? Is this incorrect? If so, why?
Apple fought a court battle over this and won. If the government could just beat the CEO with a rubber hose until they built in backdoors, why would they bother publishing a public treaty or trying to amend the laws?
I think the implication is just that Whatsapp has to hand over encrypted messages ...
I'd imagine if the police are then in control of the target mobile they can decrypt those messages.
Tbh, if my understanding is correct, I'm not sure I see any problem there. It's targeted at specific individuals for good reason (presumably following a warrant process) and requires both the messages (from WhatsApp) and the keys (from the individual's device). What's the problem?
This is a worrying prospect if these developments are actually implemented or advance further. Users' trust in the social media service is breached if a backdoor were to be placed in its products (it also defeats the purpose of the end-to-end encryption argument). If you reside in the UK, and especially in London, things have just become 500% more Orwellian.
>Priti Patel, the U.K.’s home secretary, has previously warned that Facebook’s plan to enable users to send end-to-end encrypted messages would benefit criminals...
In London alone, it is not possible to pay for public transport with cash; a debit card or Oyster card is required. For anonymous travel, though, an Oyster card can be topped up with cash, which reduces transport surveillance, unlike using credit/debit cards. Their reason for doing this, that cash "benefits criminals", echoes the "ban encryption" nonsense.
> The U.K. and the U.S. have agreed not to investigate each other’s citizens as part of the deal...
This I don't believe.
EDIT: Use an Oyster card for anonymous public transport; refrain from using a credit/debit card for this.
>In London alone, it is not possible to pay for public transport with cash and a card is required. Their reasons for doing this because it "benefits criminals" is echoing the "ban encryption" nonsense.
Not strictly true. You can buy an Oyster card with cash, load it with cash, and use that. After a week's worth of journeys you can return it, get your £5 deposit back, and buy another one.
Not exactly perfect, but it's essentially the same as having a burner phone.
Although you anonymously purchase the Oyster card, you can be de-anonymized the moment you scan it, as face recognition links you with that card.
Sure, but the link to an ephemeral Oyster card doesn't give them anything that the face recognition doesn't give them already. If they can track faces, then they can track journeys even if you're paying in cash directly for every ticket.
You may slip up, or the system can always flag a card bought with cash if it's unable to make an automatic link to an identity, causing a human operator to get involved. If they see that you block your face every time, they can then activate the total surveillance resources and flag the card.
I live in London. It has already gone way beyond what Orwell could ever have imagined. However, despite all the surveillance, London is the crime capital of the world. This is probably the best place for criminals, as unless you pull a machine gun on a crowd, not much will be investigated.
I live in the UK and I feel like this is the crux of the issue. I am an active member of a sports car forum in the UK, and it's terrifying how little the police do in cases of theft, even when the house was broken into to get the keys. If you get someone to come out and write down a report, that is a miracle in itself - in 90% of cases you are just given a case number and told to speak with your insurer; nothing is ever done. I know a guy whose Range Rover was stolen. He reported it, and no one came out. Then, a few days later, he found it parked in a car park nearby. He rang the police to tell them that he had found his stolen car and where it was, and asked if they wanted to come over and maybe catch whoever came back for it (or, you know, take fingerprints and such). Nah - he was told that if he still had the key he could just take it; they didn't have any officers to actually come out anyway. I have friends who were robbed and burgled, and literally nothing was ever done. There is zero police presence on the roads around where I live. I'm actually surprised people still follow the rules of the road, because realistically the chances of ever running into a police car are somewhere around zero.
It just feels like the police in the UK have been gutted to the point that unless you are literally being shot or stabbed, there are not enough resources to actually help or investigate anything. It's a shell of a functional service.
There's some evidence to suggest that more policing leads to more crime (drops in policing are correlated with a lower number of major crimes). If you're going to be hassled by police anyway, why not go large? Hence the old proverb: 'might as well be hanged for a sheep as a lamb.'
We don't necessarily need more police on the streets (it depends on the area, as always), but there's a genuine need for more who can solve cases. I've got an endless list of examples, including my own situations, where the police either do nothing because the matter is deemed not serious enough, or simply don't have the resources to deal with it, unless it's attempted murder, political threats or terrorism.
I'm equally curious what this means for Signal and any other open-source encrypted services.
In the US, ITAR could hypothetically be used to make open-sourcing of cryptographic algorithms illegal. This technique is used for robotics software that could be dual-purposed for weapons guidance.
This case law only applies as long as the algorithm is not classified. As soon as any Original Classification Authority classifies the algorithm, it falls under a new category on the U.S. Munitions List and the government could then restrict its distribution.
Obviously, classifying something that has already been open source just makes it more difficult to use and numerous local copies will be retained, but it does make further distribution illegal.
This is the only mechanism I can think of by which the US government could kill Signal in its existing open-source form. It's ugly, but not unthinkable.
Proteus, I am concerned too. Sometimes fiction reflects reality better than we might like; I am thinking of William Gibson's cyberpunk, dystopian sci-fi futures.
That said, freedom can also be in our own minds: creating a good life in an ocean of political corruption and increasing control by corporations/elites. It would take more effort, but I think I could have a good life even in Gibson's sci-fi worlds.
US citizens don't need to worry. All of your private information will be turned over to the NSA / US Gov too. If the US wants a US citizen's private info, they just request it from the UK's intel services. The UK intel services get it from Facebook, and then hand it to the US Gov. For your convenience.
It's interesting that this is even necessary -- Australia's legislation to this effect, passed last year, could've been used just as easily (in combination with the Five Eyes pact).
(As an aside, there was meant to be a parliamentary review of the Assistance and Access Act earlier this year -- but I haven't heard anything about it.)
IMHO editorializing this as a ‘back door’ is inflammatory.
Users need to learn to not have any expectation of privacy when using social networks. It extends far beyond PII leaks.
Perhaps complaints should be directed at the root of the corruption/abuse of such systems, at least as much as at the capability itself. The capability is inherent to the technology, just as tapping a phone line is.
Telcos have to provide interception capabilities (CALEA in the US) or they are not allowed to operate. This has been true for decades, most people do not know how that works but they do know it is possible.
As these social media communications services become more entangled in our lives perhaps they too could be deemed essential infrastructure and treated as utilities.
I see some comments raising the "nothing to hide" topic.
But people on both sides: do you think it's OK for you to have nothing to hide, while the government that rules you has something to hide?
For me it's not so much about whether I hide something or not, but rather that if I am to live under a nothing-to-hide paradigm, then the people in control should live under the same. And actually not only the people, but the institutions. If an institution has access to all the citizens' data being collected, then the citizens should also have transparency into every decision made and every cent spent by that institution, at any time.
These are free software, not controlled by a giant corporation (although both are limited liability companies; Telegram in London/Dubai, Signal in San Francisco, IIANM), with the developer/company not having unencrypted access to your data.
Telegram group chats are not e2e encrypted, while Signal is notorious for being unreliable at actually delivering messages. Also, if you change your phone number the official Signal recommendation is to manually tell all your friends about the new number. WhatsApp just lets you do it.
I use both apps regularly but neither of them is perfect.
Yeah, timely message delivery is a big problem for Signal. It doesn't happen often, but when it does the delay can range from minutes to hours. A few years ago my wife and I failed to meet up as arranged and had a big yelling match, until we looked at our phones halfway through. The delay of some messages and not others led to almost exactly opposite beliefs about the electronic conversation we thought we had been having.
Out of interest, what is your citation for Signal being notorious for messages going missing?
Anecdotally speaking, neither I nor my friends who have been using Signal for years have ever had any messages go missing, so I'm just interested as to why you might be experiencing this.
In Telegram, regular chats are not end-to-end encrypted by default either. Only special "secret chats" are, and they can only be established between mobile devices, not desktop.
Another reminder that there are no secure platforms; there are protocols with (one hopes) the potential to be secure. The difference becomes more important by the day.
There are more-or-less secure platforms (secure up until the point where you literally try to blow up a whole country or something). It's just that (oh shocker!) a closed-source platform held by a US corporate entity, with the privacy history of Facebook no less, is not a platform you should assume to be secure. There are plenty of other, much more secure options.
Nothing funny about the Daily Mail: it's the mouthpiece of the reactionary idiots on the right (there are just as many on the left, before I get tagged as a liberal) and has been for a century, give or take.
This is the newspaper that ran the editorial headlined "Hurrah for the Blackshirts!".
Other than using it as a way of keeping an eye on what some people in my society will believe (because the Mail told them to), I see zero merit in its continued existence.
The Express and the Telegraph were right-leaning, honest-but-partisan newspapers forty or fifty years ago. Now the Express is a racist comic, and since the famously and comically reclusive Barclay brothers bought it, the Telegraph has been working on getting down to the Mail's level. The Mail is currently working on buying the i.
There really isn't much left on the right that you can rely on. Which is depressing considering most of our press is right leaning.
That's why I read the Financial Times; however, even they have started leaning towards the extreme left, instead of simply being a newspaper of facts with no political bias.
I fully agree with this. I know intelligent people who used to be pretty liberal; years of reading this vile hate-rag seem to have poisoned their minds.
Just a reminder that anytime they breach communications like this, it's because they want to be able to spy on their own citizens in order to subvert and control opposition political movements: for example, climate activism, whistleblowers (e.g. Snowden, Reality Winner, the guy who talked about the Ukraine deal, if that wasn't an official CIA leak), animal rights, anti-imperialists, etc.
In every comment thread where quitting Facebook is discussed I find commenters saying they can't because Facebook is so indispensable for them. Maybe the same is assumed for criminals?
Not my area of expertise, but would it not be relatively easy for someone to make their own app using open-source crypto and put it in an APK for ordinary phones?
I've messed around with open-source cryptography libs before and I don't see why someone could not do that.
There's even a fair bit of advice about how not to misuse such libs.
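For what it's worth, the cryptographic core of such an app really is small. Here's a minimal sketch using PyNaCl (Python bindings to libsodium); the names and message text are just illustrative, and it leaves out the genuinely hard parts of a real messenger -- key distribution and verification, transport, metadata:

    # pip install pynacl  (Python bindings to libsodium)
    from nacl.public import PrivateKey, Box

    # Each party generates a keypair; only the public halves are exchanged.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts to Bob: Box does the key agreement, nonce handling,
    # and authenticated encryption for you.
    to_bob = Box(alice_key, bob_key.public_key)
    ciphertext = to_bob.encrypt(b"hello from a homemade messenger")

    # Bob decrypts with his own private key and Alice's public key.
    from_alice = Box(bob_key, alice_key.public_key)
    assert from_alice.decrypt(ciphertext) == b"hello from a homemade messenger"

The part no library solves for you is making sure Bob's public key actually belongs to Bob; that's where most real-world attacks happen.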
For some contrast, I find Riot (web and desktop) quite nice and usable these days. There are still a few rough edges, but it's good enough as a "daily driver" for me.
One example is that voice calls sometimes only ring on one side (the calling side).
Another is that when some rooms get updated, you can get unread notices from the old room and your client keeps telling you that you have new messages even if you don't.
Another thing that I think should be considered a bug: when you enable encryption in a room and you have multiple devices on the same account, you have to approve each of your devices separately so that they will be able to decrypt the messages...
There are more bugs related to how the encryption process works, and I stopped using encryption because of them (some might have been fixed by now...)
The encryption things are being worked on; encryption is still in beta. I think Riot fixed that problem with the unread notices. Have you reported the bug about calls only ringing on one side?
Umpyre is my messaging startup, if you have any interest in checking us out :)
We're a little different in that we're trying to solve more than just the privacy issues. We're also trying to solve the spam problem for anyone who receives a lot of random inbound email/messages (think open-source developers, LinkedIn spam, anyone who's an influencer).
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
I believe the courts have ruled previously that your speech is not limited just because the government has access to it. Otherwise wiretaps would be illegal already. This is just an extension of wiretap law.
In both cases they need to be fought from an angle other than freedom of speech, because we've already lost that battle.
To me, C8hWgwo5YeC5ojiOaDe3JVaLev+3zaZDfRVTAjvqNCA= is a lot less harmful than shouting "Fire!" in a crowded theater.
Banning C8hWgwo5YeC5ojiOaDe3JVaLev+3zaZDfRVTAjvqNCA= is akin to banning shouting in a theater for any purpose. Political speech in particular is highly protected, and one cannot readily show that C8hWgwo5YeC5ojiOaDe3JVaLev+3zaZDfRVTAjvqNCA= is not political speech (and indeed, here, it is).
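To make the point concrete: that string decodes to 32 opaque bytes, and without the key there is no test that separates it from random noise. A quick check in Python:

    import base64, os

    blob = base64.b64decode("C8hWgwo5YeC5ojiOaDe3JVaLev+3zaZDfRVTAjvqNCA=")
    print(len(blob))            # 32 bytes
    print(len(os.urandom(32)))  # also 32 bytes, and to anyone without the
                                # key the two are statistically alike

Any ban on publishing the first string has to explain why it does not also ban the second.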
Mods: is this title accurate? It makes it sound like an encryption back door, but from reading it, it sounds more like social media companies being forced to provide the ciphertext
(with the authorities then being in a position to compel the end users to give access to devices).
> Social media platforms based in the U.S. including Facebook and WhatsApp will be forced to share users’ encrypted messages with British police under a new treaty between the two countries, according to a person familiar with the matter.
This might mean "...must share messages that are otherwise encrypted"; it's not completely clear from the language used.
Is this literally a treaty that will be submitted to the US Senate for ratification and thus carry the force of law, or is it merely an executive branch agreement on cooperation? Does anyone have a link to the actual treaty text?
Or they want political cover to avoid revealing their capabilities, or to enable that data to be used more easily in court or for a wider range of offences.
I agree it would be good for police to be able to look into a criminal’s phone, but how will they actually achieve this without wrecking the encryption for everyone?
There are really smart and empowered people just waiting for really dumb but empowered people to make the move that opens the path to constant mass surveillance.
Police can only search what belongs to the company. If the data is yours, and the keys are yours, then the company can only provide it by theft (if at all).
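A minimal sketch of that division of custody, using PyNaCl's symmetric SecretBox (purely illustrative; this describes no real product's design):

    from nacl.secret import SecretBox
    from nacl.utils import random

    key = random(SecretBox.KEY_SIZE)   # generated and kept on your device
    blob = SecretBox(key).encrypt(b"my private note")

    # This opaque blob is all the company stores, and all a warrant can
    # yield; without the device-held key it is indistinguishable from noise.
    server_copy = bytes(blob)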
Most of what the federal government does is against one amendment or another (the 10th, mostly) but gets slipped by under the Commerce Clause, as that's how the system turned out.
> Social media platforms based in the U.S. including Facebook and WhatsApp will be forced to share users’ encrypted messages with British police under a new treaty between the two countries, according to a person familiar with the matter.
Would Apple's iMessage qualify? (I can't read the whole article because of the paywall).
In the Netherlands, a government VPN solution was left unpatched for many months, exposing critical infrastructure to any third-party nation-state attacker.
I'm not a right-wing anti-government nutcase. I do believe that the government will not be able to keep such a back door secure.
This comment is absurd. Your linked article even says:
> To be clear: the reason for this is not security. To the best of my knowledge, the Signal protocol is cryptographically sound, and your communications should still be secure. The reason has much more to do with the way the project is run, the focus and certain dependencies of the official (Android) Signal app, as well as the future of the Internet, and what future we would like to build and live in.
Beyond the author flat-out saying that it's secure, the title of the article is why they will not recommend it. It has nothing to do with it being compromised.
We are not talking about the protocol and simple one-to-one chats; we are talking about the app. There are several major weaknesses in Signal which have nothing to do with the protocol.
> Social media platforms based in the U.S. including Facebook and WhatsApp will be forced to share users’ encrypted messages with British police under a new treaty between the two countries, according to a person familiar with the matter.
Also, the article is unclear about whether the messages have to be decrypted before being shared, so they may just have to share the encrypted message.
Not the parent, but true in a sense. Communication is the act of conveying information from one person's consciousness to another. In the usual encrypted chat scenario, that path is unencrypted at two points (plaintext input, and plaintext output at the far end).
It is entirely possible, though, to have a true end-to-end encrypted channel if both parties are able to do the encryption in their heads, without the plaintext message ever being visible. A trivial practical example would be a single-bit message with a single-bit one-time pad (agree ahead of time that if I say yes on the phone, it means no, and vice versa).
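The same idea in code: the single-bit scheme above is just the smallest case of a one-time pad. A minimal sketch (nothing here is specific to any messaging app):

    import os

    def otp(data: bytes, pad: bytes) -> bytes:
        # XOR with the pad; encryption and decryption are the same operation.
        # The pad must be truly random, at least as long as the message,
        # agreed in advance, and never reused.
        assert len(pad) >= len(data)
        return bytes(d ^ p for d, p in zip(data, pad))

    pad = os.urandom(12)                   # exchanged in person, beforehand
    ciphertext = otp(b"meet at noon", pad)
    assert otp(ciphertext, pad) == b"meet at noon"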
Meanwhile, the US government (mostly Trump) is banning Huawei equipment over spying concerns. Apparently the only concern is that we weren't the ones doing the spying.
This really hurts our credibility on a government level outside the Five Eyes.
1) If someone wants to find something you are "hiding", they will anyway. It's always been like this. Encryption is never a protection against this.
2) Personally, I still think e2e encryption is not a secure solution on operating systems that run god-mode third-party code, e.g. Google Play Services: it relies on a key that can be stolen too many ways, too easily. Signal included.
3) Internet eons (20 years) ago nearly all of us used plain-text IRC, closed-source ICQ, AIM, etc., apart from a few. Recently I started to question the usefulness of "encrypt everything": we do need a way to verify content from end to end, but is encryption really the way to do so? Are there other ways: signatures, hashes, etc.? (See the signature sketch after this list.)
4) All that said, I'm not surprised. Skype used to be p2p, until M$ moved it to server-client, because "battery life". Everything is moving back to the Eternal Mainframe in this cycle.
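On point 3: signatures do give you end-to-end verifiability without confidentiality. A minimal sketch with PyNaCl's Ed25519 bindings (illustrative only); the message travels in the clear but is tamper-evident:

    from nacl.signing import SigningKey

    signing_key = SigningKey.generate()
    verify_key = signing_key.verify_key   # published; anyone may hold it

    # The message is public; the signature proves origin and integrity.
    signed = signing_key.sign(b"this message is public but tamper-evident")

    # Raises nacl.exceptions.BadSignatureError if anything was altered.
    verify_key.verify(signed)

Whether that trade-off is acceptable depends on whether the content itself is sensitive, which is exactly the per-message judgment that "encrypt everything" sidesteps.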
The landscape changed. More people use the internet, more spy agencies from multiple countries siphon traffic in bulk, information of higher value is exchanged over the internet, and more untrusted parties are involved (e.g. wifi hotspots).
I mean go ahead, do everything unencrypted. But I surely won't entrust data to you if you're leaking like a sieve.
That is not what I wrote. I wrote "encrypt everything". There is valuable and useful use of encryption, I'm just not certain everything everywhere needs it or benefits from it.
It's a sensible default since the developers don't know when users will need it and users don't want to bother to choose for every single action they take. And they may not even know in advance that they will need it. An innocent conversation can quickly turn into something confidential or private.
That said, I do agree that p2p is preferable, since it cuts out a central party in control that can be strong-armed by a government.