From what is written, I understand this to mean that users can enable this feature for specific conversations; not all messages are subject to this encryption.
I am not usually one for paranoia, but is anyone else becoming more suspicious about Facebook's motivations and involvement with government? This feature is a massive boost for intelligence services dealing with unsophisticated actors. It reduces the haystack significantly, with users self-flagging messages that may be incriminating. Many millions of FB messages must be sent every day, and brute-forcing encryption on all of them is probably not possible. A small % marked as 'secret conversation'? Much easier.
Why doesn't FB just apply encryption to all messages? Surely they have the resources available. Is it because this feature makes somebody else's job a lot easier?
If my suspicions are correct, what sort of threats would this pick up? Are serious threats likely to use FB messages flagged as 'secret conversation' to co-ordinate actions?
Hundreds of millions use Messenger from a web browser.
No secure way to verify code or store keys without routing through mobile.
I wouldn't use the web version if they had not disabled Jabber access... and then I could use OTR.
This trend makes me very sad... IM networks are getting more centralized than ever. I don't feel thankful for this kind of development. End-to-end encryption should not be a feature of the service provider, but of the client. The way this works with WhatsApp/FB/Google just requires us to believe that their proprietary client is actually doing what it promises. And since I don't have a smartphone, they don't even promise me anything.
I just wish I could use XMPP or Matrix with my non-nerdy friends. There was this time Google seemed to not be evil with GTalk/XMPP, but then... The business cynicism that dominates this industry, allowing people to claim that they "connect people" at the same time that they put everyone in digital prisons, makes me really want to leave computing and go live in a cave.
Enabling interoperable chat to non-nerdy friends is basically Matrix's raison d'être. We're not there yet (see https://matrix.org/blog/2016/07/04/the-matrix-summer-special... for the current status), but if we don't provide it we will have failed in the whole mission to defragment these silos, letting users choose which service to trust without losing interoperability.
The good news is that the Olm end-to-end cryptographic ratchet that Matrix is in the process of deploying (https://matrix.org/git/olm) is built using the same algorithms as Signal Protocol's ratchet (although it's an independent implementation) - so we're hopeful that at least technically the window is open in future for using Matrix to defragment all the services who have adopted Signal Protocol (WhatsApp, Google Allo, FB Messenger, Signal itself etc) without compromising the E2E privacy. Right now this is total sci-fi, and won't likely happen (whilst preserving E2E crypto) without cooperation from FB, Google etc.
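For the curious, the symmetric half of these ratchets is conceptually simple: a KDF chain that derives a fresh message key at each step and then replaces its own state. Here's a minimal Python sketch; this is illustrative only, not Olm's or libsignal's actual code, and the single-byte labels are placeholders:

```python
import hmac
import hashlib

def kdf_chain_step(chain_key):
    # Derive a per-message key and the next chain key from the current
    # chain key (one symmetric step of a Double Ratchet-style KDF chain).
    # The one-byte labels 0x01/0x02 are illustrative placeholders.
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key

# Advance the chain twice: each step yields a distinct message key.
ck0 = bytes(32)  # stand-in for a shared secret established via key agreement
mk1, ck1 = kdf_chain_step(ck0)
mk2, ck2 = kdf_chain_step(ck1)
```

Because each chain key is derived one-way from its predecessor, deleting old chain keys after use gives forward secrecy: compromising today's state doesn't reveal yesterday's message keys.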
However, we hope to get Matrix to the point where they see that the longer term benefits of participating in a healthy open ecosystem outweigh the short term benefits of trying to lock users into a silo - just as email eventually interoperated over the early internet. That's a while off, but this is still the goal.
The best way to make sure this happens is to play with Matrix in its current form, and help us write bridges (e.g. https://github.com/matrix-org/matrix-appservice-bridge/blob/...) to interface as many silos as possible into Matrix. The more bridges, the more useful Matrix is, and the higher the chance of building an ecosystem which eventually Google, FB and friends will find attractive.
Thanks a lot for your work on Matrix. It is definitely one of the projects (like Syncthing, IPFS or GNU Social) that gives me enough hope to not run to the caves so early :)
On the server side (which may be "local" depending on where your server is running), the history for the rooms you participate in is stored as a matter of course. Matrix operates by replicating that history between the servers participating in the conversation. If that history is E2E encrypted, any device with the necessary cryptographic ratchet state can decrypt it. (Note that rooms may be configured to rotate the ratchet frequently, deliberately limiting how much history a given key can decrypt and providing a form of PFS.)
On the client side, it depends on the client you're using. Many native Matrix clients and many bridged clients give the option to store local history (whether it was originally encrypted or not). Smarter clients will store it encrypted at rest by whatever mechanisms the OS and hardware provide.
I honestly find this centralization as worrying as mass surveillance itself. I'm as afraid of the Facebooks of this world as I am of any government, and I don't want all my communication locked in with one company.
This is why I will not use or recommend Signal. Moxie's anti-federation stance is unacceptable to me. It's replacing one problem with another.
I don't like this either. My long time wishes have been to have federated systems for instant messaging and for social networks. Neither seem to be getting any traction with the big and rich corporations actively working against any kind of interoperability. All the pro-privacy solutions depend on getting all your contacts into another walled garden, albeit a nicer one than the likes of Facebook and Google.
Beyond federation, I would also like the solutions to have good features and usability. Right now I'm bouncing between a few walled systems:
1. Telegram - really fast development pace, poor crypto, E2E encryption is only for chosen chats and single device.
2. Signal - slow to deliver messages, not multi-device and has usability bugs and issues. I update it somewhat regularly and try using it, but still go back to Telegram because basic expectations aren't met.
3. Wire - I discovered this recently and like the feature set (it's a lot richer than Signal's). It claims to use the Signal protocol and has support for voice and video too. All chats are E2E encrypted (unlike Telegram, where non-secret chats are by default not encrypted on the device or on Telegram's servers), and it has multi-device support with message sync (only from the time the device is joined to the account). Clients are available for different mobile and desktop platforms. But this one also has poor usability in getting started, and simple things are missing - like delivered or read indicators (the latter could at least be a user-selectable option, for privacy). I don't know how slow it is to deliver messages, but it's definitely not as fast as Telegram. Not knowing whether a message arrived is unacceptable in this era.
I'm still waiting for some more strong solutions to appear in this space. Seeing that more platforms are adopting the Signal protocol, it would be great to have some standardization in user identification and federation. I actually do not want any of these solutions to be completely free and wish they would provide some way to help them monetarily (at least for the people who do want to help them). I feel repelled by the "free forever" and "we'll sell premium things later, like stickers" parts. That also brings suspicions about the motives of the company/developers and the future viability of the application or platform. At least allow people to donate to you so that you feel some kind of return obligation for all users!
Makes me wonder... what's the current total number of active XMPP users (for chat, I mean, not for Android notifications)? Is it still bigger than the number of Signal users?
I have a feeling that federation is one of those Good Things that reduce the potential user base until it isn't good any more.
You can't run your own Signal server. All accounts use phone numbers in the same namespace as IDs, and all messages go from the phone to whispersystems.org, on to Google, and from Google to the destination phone (with lots of encryption being added and removed at various points). This has advantages (it's difficult for the Man to distinguish a received Signal message from other Android notifications) but also disadvantages (moxie can do traffic analysis and you can't do anything about it).
You can totally run your own server for yourself and your friends: https://github.com/WhisperSystems/TextSecure-Server (you'll have to change the server's URL in the client's source as well and compile it yourself, but that's really easy)
What you won't be able to do is federate with the official servers.
Oh, and there's also a WebSocket transport (used by the Desktop client) that doesn't involve Google. That just doesn't provide a pleasant experience on mobile.
Yeah, so instead of being in Whisper Systems' walled garden, I can set up my own and ask people to install Rvense's Magical Messenger App. Sit there in my treehouse with a bucket on my head and a NO DUMMIES sign or something.
You can sideload apps without a developer subscription. It's annoying but works. But you have an unsolved update problem on both Android and iOS. You really shouldn't do this if you're not 100% sure of the implications.
> you'll have to change the server's URL in the client's source as well and compile it yourself, but that's really easy
I'm sorry, but is this a joke? "To not use a centralized server that you can neither audit nor trust, you have to recompile the client, but that's easy?"
This smacks of "oh, PGP for email is fiiiiiine." To say nothing of the silliness of the inability to federate.
No, it's not a joke and you shouldn't treat it as such. Non-technical people really shouldn't be whining that their "free service" doesn't cater to a click-and-run crowd. The source is available to the public to create their own, and changing a URL in the code is a single regex command away.
Don't casually disregard him because you or others don't understand the basics of what it takes to alter and run a service in your own private space.
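To make the "single regex command" claim concrete: a hedged Python sketch, where the source line and both URLs are made-up placeholders rather than the actual layout of the Signal client source tree:

```python
import re

# Point a client build at your own server by rewriting the hard-coded
# service URL in the checked-out source. Everything below is a placeholder
# for illustration; the real constant name and file differ.
source_line = 'public static final String SERVER_URL = "https://service.example.org";'
patched = re.sub(r'https://[^"]+', 'https://signal.my-own-server.example', source_line)
```

You'd then rebuild the client against the patched source, as described upthread.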
I don't casually disregard him. I thoughtfully and with consideration disregard him, and you as well. The idea that there is a priestly-class of technical people and "non-technical people shouldn't whine" is silly. This is not for technical people. This is for non-technical people. I've been doing this stuff for twenty years. But me being able to do it doesn't do a damned thing to help the people who actually need help.
I don't need Signal to communicate with knowledgeable people. We need something to communicate with everyone else.
And yet the same argument has been made time and time again against SMTP. Let's stand back for a second and understand why SMTP has stood the test of time. Yes it has flaws that allow the "first contact" problem (ie spam). But the people working on SMTP at least understand the weaknesses and advantages of that.
The moment you open the door for federation, the protocol is written in stone forever. All it takes is one server in the federation network with a substantial user base that chooses not to update. (See SMTP).
OWS decided that relinquishing the ability to force updates (i.e. away from a broken cryptosystem) would sacrifice too much in the way of security to be consistent with the project's goals.
I understand that. I don't think it's a malicious decision. I think it's a wrong one. I'm not criticizing Whisper Systems, I'm criticizing the tech-priesty stuff out of the post I replied to.
I didn't say this was a good way for normal users. Normal users don't care about federation and don't want to run their own server. But for people on HN it should be easy, and if you and your hacker friends don't trust moxie, you can do it. I never said you should, just that it's possible and not hard.
Compiling the client is much less daunting than running your own server, so "easy" seems like a fair description in this context. I don't think there's a large intersection between "People who can't easily compile the client" and "People who would run or audit a secure messaging server."
My response is that we just have to try harder. He's focusing only on encryption and ignoring the massive social problem of one company or a few companies having a monopoly on digital communication. Just like whether or not I'm spied on is irrelevant to whether or not I have the right to private (encrypted) communications, it doesn't matter who has monopoly or what they use it for. We should not be building infrastructure that encourages monopolies, and unfederated services are by definition monopolies.
Did you just coin that? It appropriately captures what is going on, but without having the positive connotation that comes from a 'walled garden'. I love the phrase.
As an example outside of messaging: I have a Fitbit, and 'digital prison' so aptly describes what happens with my personal health data. I can't get my heart rate data out of their prison, because the Fitbit warden doesn't see fit to grant me the privilege of accessing my raw data.
"Walled garden" is more appropriate than "digital prison". No one is being sentenced involuntarily to these enclosures, they are voluntarily choosing to accept them because of what is inside them.
"Cult compound" is probably how I would describe it. Sure, you can leave, but there is immense social pressure to continue in what has become the norm, despite there clearly being something not okay with what is going on. And good luck convincing others to leave when you do.
I'd never heard "cult compound" in this context before, but thinking about it, I really think it fits better than "walled garden". "Walled garden" sounds like you go there because of its beauty or for getting the best crops, while in reality you go there because it's the most crowded place.
> "Walled garden" sounds like you go there because of its beauty or for getting the best crops
Which is exactly why people go to them.
> while in reality you go there because it's the most crowded place.
Directly true of social networks (where the "crop" is "people you can interact with through the network"), perhaps less directly true of some other walled gardens (though network effects are a thing.)
But also directly opposite of what you'd expect from a "cult compound", which people go more to escape what is most popular, than to experience what is most popular.
> "But also directly opposite of what you'd expect from a "cult compound", which people go more to escape what is most popular, than to experience what is most popular."
Agreed, I thought of this point while sending my comment but wasn't sure how to put that in words. So maybe it's the "most crowded garden party".
It's also one hell of a loaded term, with things like Heaven's Gate and Jim Jones having existed. To my knowledge, Facebook has yet to cause a mass suicide by people worshiping Zuckerberg.
Facebook has convinced millions of people to give up vast amounts of private information to an apparatus that would make the Stasi or KGB drool.
Part of cult indoctrination is giving up personal and private information, documents, secrets and property to participate and become part of the whole. Meanwhile, leaders profit from the property and information given up and use secrets to blackmail or breakdown an individual's identity so they become dependent on the group.
Unfortunately your use case, while valid, is for a very small demographic. In general people want the guaranteed experience that locked in products give them.
A lot of people like to sideline complain about that, but tech is no longer its own customer -- there are billions of users who have different preferences than us and they are a lot more lucrative.
People opted into several of these services before they were "locked in", and many of them never noticed the switch of XMPP backplanes in Facebook Messages and GTalk. Some of them haven't even seemed to notice the loss of a few technically minded friends from their messenger windows, changes in branding, or even changes in apps. The ambivalence is not a preference or a "want" for a locked-in experience. There isn't even a preference for a type of experience: the general human thought process is "I want to talk to my friend Jim", never "I want to use [Facebook Messenger/Google Hangouts/SMS/smoke signals] to talk to my friend Jim".
I've seen people that ritualistically open certain apps to talk to certain friends and networks of friends, but have no idea what apps they are using beyond the background navigation needs of "the one with the fuzzy green icon on my last page" and "the blue one with the annoying notifications".
Locked-in platforms are a consequence of network effects, not a "preference" for some mystical "guaranteed experience": the experience and the platform don't matter if the social interactions aren't there.
The diaspora of communications platforms hasn't hit home to the average consumer yet, and it's currently a background inconvenience that people are using five to ten different apps to communicate these days in some cases, but that doesn't mean average consumers are entirely ignorant of the situation either. (To some extent that's why OS-level notification systems have become so important to the average consumer; at least when you have a half-dozen messaging apps, all the notifications arrive in the same place.)
This is what I thought. If one were starting a new messaging platform, how would one implement the Signal protocol from scratch, I wonder? I'm assuming that for people who don't have strong security backgrounds, this means dissecting the Signal source code on GitHub.
> I'm assuming for people who don't have strong security backgrounds
That's already a bad start.
You mean inventing Signal from scratch (which is rough) or incorporating the libsignal protocol into a new messaging app?
All of the libsignal repos have a good readme that explains init [1][2] , so you can start there. Browsing Signal source is helpful not so much to understand the protocol, but to see if any special precautions were taken against side-channel and other implementation pitfalls.
I mean, you could contract moxie (hey moxie, what's your price?)
EDIT: there's also this [3] independent implementation of libsignal in golang that tries to make some targeted modifications to fit their need. Can't vouch for its quality, but it's an interesting effort nonetheless.
It may be a bad start, but if we want to see this implemented in many products at a large scale, you'll have to expect that not everyone is a security expert.
I swear HN has become riddled with people who want to be contrarian for the sake of being so.
No, I'm just repeating the oft-said maxim of 'Don't Roll Your Own Crypto' [1] (which applies in the form of know-what-you're-doing-while-implementing-someone-else's-crypto) while emphasizing that you should probably be a domain expert in the domain you're developing in. If I know nothing about High Frequency Trading or Oil Exploration, I wouldn't want to be coding for it at all, and nor would my employer.
I believe it's a reasonable expectation that people who implement secure messaging be domain experts in crypto AND messaging.
Getting high-grade security right is exceptionally hard.
There's a saying about sex: make one mistake and you have to support it for the rest of your life. Security and cryptography are orders of magnitude worse. It's bad enough that errors compound, but it doesn't end there. You can have subtle and counter-intuitive failure modes where a single step outside the happy path is enough to completely annihilate the security of your system.
I feel there is no analogy that could capture the absurd complexity and catastrophic failure potential.
To give some background: I've been working with applied crypto since the '90s, and professionally (on and off) since the early 2000s. That experience is still next to worthless: I know for a fact that I am not good enough to implement anything that could withstand the attacks of a motivated and well-funded adversary. (Or even those of a bored PhD student.)
The best I can do is find tools and components that have been battle hardened by the handful few exceptional professionals. At least that way my hubris shouldn't amount to too much damage.
You shouldn't have many different people implementing critical crypto code; that'll lead to horrible broken implementations and compromise of security.
> "We don't want to disrupt people's current experience."
You don't say? (not you, Facebook) How about the dozens of times Facebook disrupted the user experience of the service for its own benefit? How about the dozen+ times it changed people's settings from private to public, after people previously manually enabled a certain setting to be private, or after having a setting by default as private initially and letting people believe that such action is private? Wasn't THAT disrupting to users' experience?
Of course it was. But it benefited Facebook, and that's the difference here. They just don't want to "disrupt" the experience in a way that also hurts the company's bottom line, even if it's better for users.
In other words, it's just a weak excuse for not doing it by default, or at least allowing people to always set it as default (although knowing Facebook, I'd probably worry that they'd revert it back to non-E2E without even making it obvious that it did that. Announcing a new privacy policy change doesn't really count).
> No secure way to verify code or store keys without routing through mobile.
Weeeeeeell, not quite. Every device and browser the user uses can get its own private key, and then you use Facebook's central servers and SSL to exchange keys.
If I don't trust Facebook for key exchange, then I can't trust them with their app. Any encryption they implement, they can trivially circumvent by putting a backdoor in their app.
But if I trust their (closed source, frequently updated) app at all, then I can trust them to relay people my public keys. Especially since a MITM can be discovered by comparing hashes over a hard-to-manipulate channel (telephone, video, or IRL).
And if you're really paranoid, you could think about using an open-source app and letting a trusted third party handle the keys.
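The hash-comparison idea above can be sketched in a few lines of Python. The "keys" here are simulated random bytes; a real client would hash actual public key material (this is essentially what Signal's safety numbers do):

```python
import hashlib
import secrets

def fingerprint(public_key: bytes) -> str:
    # A short, human-comparable digest of a public key, meant to be read
    # aloud over a channel the server can't easily tamper with
    # (a phone call, a video chat, or in person).
    digest = hashlib.sha256(public_key).hexdigest()
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

# Simulated keys: random bytes stand in for real public keys.
alice_pub = secrets.token_bytes(32)    # key Alice generated
relayed = alice_pub                    # what the server honestly gave Bob
substituted = secrets.token_bytes(32)  # what a MITM would hand Bob instead
```

If the server relays the genuine key, both fingerprints match; a substituted key produces a visibly different fingerprint, which is exactly what the out-of-band comparison catches.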
Yes, I completely agree. Actually, what is the difference in security between an app that generates a key pair on first use and exchanges the public key via a third party's servers, and a web application doing exactly that? As long as the connection is secure and you trust the third party not to have you send them the private key, the model seems reasonably secure in both scenarios.
Though public key exchange can be improved on mobile by better direct communication capabilities like barcode scanning, RFID, Bluetooth, etc.
Am I missing something, or is this more of a "JS is not reliably fast on all devices, so we'd rather not" kind of thing?
I assume the issue is more to do with private key storage on the device. JS provides several local storage mechanisms, but none of them have cryptographic guarantees, and all are potentially vulnerable to attacks like XSS and "attacks" like development-console disclosure. There have been calls for the Web Application Platform to standardize on an HTML5 "key store" for cryptography, but thus far no consensus seems to have been reached on how that key store would operate.
Certainly, there are mitigations, such as very short-lived private keys and relying on the sandboxing and XSS protections browsers already provide for JS local storage, but it's easy to understand how, from a paranoia standpoint, there's no guaranteed-safe key store in a browser just yet, especially not one backed by OS-level security guarantees like those available on mobile devices.
That's all fine, but why isn't there a setting somewhere, where I can just knowingly choose "always use end-to-end encryption" at least?
The way it's implemented now still makes it a pain/inconvenient thing to do when you want private conversations. So let's face it. Facebook just wants to get away with the minimum necessary to convey that it cares about privacy, while knowing that only 0.1% of the conversations will ever be encrypted this way.
The fact that "We don't want to disrupt people's current experience." did not previously outweigh other considerations does not mean it is not a consideration. It does imply an upper bound on the weight though.
I recently started using Wire and liked the richer feature set. But I'd say it still has some way to go on usability, features and speed. I hope you look at Telegram as an inspiration on those (not on crypto though, where you seem to have the one commonly accepted as the best).
While it might be disappointing, I firmly believe this is a technical and business decision, not a conspiracy. If you look at the features Messenger offers, and the direction the product has been moving, it relies heavily on server-side technology.
Facebook have added contextual ride-share ordering, person-to-person payments, bots, etc. Unlike WhatsApp or Signal, Messenger also still works over the Web without an app or a smartphone-based login: how do you implement credible E2E over the Web, without using the phone as a crutch? How do you allow multi-device support, with a message history, and do E2E on all conversations?
I think they've (rightly, IMO) assessed that most people value those features and convenience over 100% E2E conversations, but want to offer the option to those who don't. Besides, if Facebook were really completely in bed with the surveillance state, why would they have just rolled out full E2E on WhatsApp?
> Unlike WhatsApp or Signal, Messenger also still works over the Web without an app or a smartphone-based login: how do you implement credible E2E over the Web, without using the phone as a crutch? How do you allow multi-device support, with a message history, and do E2E on all conversations?
Wire [1] does multi-device E2E encryption, sync of message history and allows users to use phone numbers and email addresses as identifiers. It also uses the Signal protocol.
Note: I do not work for Wire nor am I associated with it in any way, except as a user. I discovered it only recently and am trying it out, in addition to using Telegram as my most frequent client and Signal.
> I am not usually one for paranoia, but is anyone else becoming more suspicious about Facebooks motivations and involvement with gov?
I worked for Facebook; I am friends with the people who developed this. I would like to reassure you strongly (well, as much as an Internet stranger can) about their motives. They are the good guys, and this was developed with people being spied on by abusive governments in mind — because those people use Messenger and would like to use it for those conversations too. Don't trust me, but reach out to them if you are curious: few people appreciate their work, so they are generally happy to talk (about published work; unannounced products are very much off limits).
Facebook does collaborate with governments when the messages are not encrypted, the crimes are clear, and the court has issued warrants. I seriously doubt any of those are political dissidents. Facebook engineers receive very generous poaching offers all the time, and they would suffer minimal economic damage from leaving the company over something like this. The shit-show from engineers leaving over that would be massive (and many engineers are true believers).
I wasn’t in the company six months ago when that project was decided, but from my experience of the internal culture, I know that there was a debate on whether this feature could cover crimes that Facebook would object to — and the need for protecting the good guys obviously had the upper hand.
> Why doesn't FB just apply encryption on all messages?
> Surely they have the resources avail.
Scaling. I know nothing about this specifically, but I have no doubt that it is the main reason. Any tiny amount of extra memory, computation, etc., times a billion becomes massive. Facebook struggles to build enough capacity for all its services: the data center expansions you hear about are done at a break-neck pace to meet service expansion deadlines. The engineers working on this are heroes internally, and the units they use are unheard of outside of astronomy.
More specifically, they probably want to test some scaling aspects, but it might be unethical to use the usual approach of A/B testing.
I'm former FB Infra and agree with your first point re: motives. There's a lot of true believers working in the security space at Facebook, and I have tons of respect for them. I used to work closely with many of them.
However, scaling and/or capacity is not the reason E2E encryption isn't applied to all messages. The crypto operations are relatively trivial in terms of CPU.
This comment summarizes what FB's CSO said about why they are not launching E2E broadly yet. It boils down to usability concerns. Sounds like they are working on it:
Usability with Facebook M, much like Google Allo, will require E2E encryption to be turned off in order to take advantage of those features. Alex didn't mention that, but I think that's the real reason. Also FB wants to know what's going on in your conversations. That metadata can be used for advertising.
This might sound offensive, but please don't take it that way, I just don't know how else to phrase it:
I have no reason to trust you, or them. Just because you think people have good motives, doesn't mean it's true. I'm sure there are great people working there, but I'm also sure there are shady people working there. Just like at any big org. "Even though you don't know me, trust me, these guys are cool" arguments don't really help anything.
Of course I won't be offended — that's why I suggested reaching out to them, if it matters to you. Although, as someone pointed out, the feature was developed by a team mostly based in London.
Hi. To move all messages to E2E encryption, we need a credible solution for web clients and every other platform, including old feature phones. This is easier said than done, but it is something we are thinking about.
Secret Conversations is a step in the right direction.
I think people underestimate just how ludicrously hard it is to provide an encrypted experience that's as good as plaintext. Even showing a chat on multiple devices becomes a hard problem. I agree with you that it's a step in the right direction, and Viber and Whatsapp have a much easier problem to solve, given that both only support device-to-device messaging. The only app that supports multi-device chats that I know of is Silent Phone, but I admit to not being very up to date with the instant messaging landscape.
I'm building one. When you start with the idea that it's going to be encrypted and that you won't know what users are sending back and forth, you have to make some concessions, but functionally I don't think the average person would even know, from a UX perspective, that my application is encrypted. You need to handle abuse client-side; it requires a bit more thought, but I'm sure that's not beyond Facebook.
Feature phone is actually oldspeak. It's what we called cell phones with features back when most cell phones didn't have features. Then smartphones came along and reset the expectations for what features a phone has, leaving feature phones in the dust. But there was a time where feature phone meant something positive.
Because "phone" is the primary feature. Phone-feature phone sounds silly. You can say lack of features phone if you like, most people prefer shorter phrases.
> Why doesn't FB just apply encryption on all messages?
The same reason Gmail can't work with end-to-end encryption--they want to advertise at you based on message content.
I highly doubt there is any government intervention in FB's business strategy, but there seems to be plenty of cooperation after the business decisions are made. (The same is largely true with Microsoft, Google, and yes, even Apple.) It's not really a conspiracy, or a matter of paranoia--it's been very widely reported for a while now, and people just generally don't seem to give a shit (myself increasingly included).
EDIT: I do care. But I think (a) people need to take privacy into their own hands, since companies will never be incentivized to do it and (b) we've entered a new cultural era, where the levels of privacy enjoyed in the past are no longer socially normal.
> The same reason Gmail can't work with end-to-end encryption--they want to advertise at you based on message content.
I wonder how they'd do if they were more open about it. "You're getting Gmail for free because we read your email and advertise to you. However, if you want to pay for a premium account (or Google Apps for Work) then we won't advertise to you, won't read your email and we'll even make end-to-end encryption easy and convenient".
I mean, it seems like a good compromise. I'd be happier with the rampant advertising and profiling going on if it was only for signed-in users and everyone was given the choice - use it for free or pay and get guaranteed privacy and no ads.
It would be a (financially) bad choice for the corporation. And the answer why is the same as in many similar 'why can't the corporation do this-and-that' questions.
Here is the answer (using Google as an example only, simplified):
* Put two Googles side by side, competing.
* One is the current one, earning money from ads with you as the product, and keeping that under the radar (although it's in the fine print, etc.).
* The second is the one devised here: premium accounts plus a free tier with ads.
* Wait 5 years and observe, through market evolution, which of the (competing) companies wins. The first one. In this specific scenario, the second loses because the time and money spent on 'premium accounts' will not be compensated by the revenue from them. Meanwhile the first Google, spending that capital difference purely on its ads department, will make its ads business (here, 100% of the company) much better than the ads part of the second company, thereby winning the market.
In general (the Google example is SIMPLIFIED, given Alphabet's scale; please don't use Google Apps FOR WORK (emphasis mine ;) as a counterexample - it's a B2B product :) it is because: we humans don't like companies that use fine print, yet those companies win, case by case, against the 'moral, transparent pricing and fine ethics' companies BECAUSE OF THE BIASES AND ERRORS (+) OF HUMAN BRAINS during purchasing decisions, exploited day by day.
That's why.
(+) read: stupidity, also known as why-I-buy-overpriced-sweets-at-checkout ;)
That analysis only makes sense if that is the only product of the company. Google would still be spending way more on making ads better than a pure gmail competitor just because it shows ads for so many other things.
Right, but the phrasing above wasn't "We can read your email," it was "We read your email." What actually happens is more relevant than what they could do if they wanted.
What if he's scored as a threat and reported to the FBI? A human can understand sarcasm, jokes, or hyperbole. An algorithm is going to see scary words and add them to the 'dangerous' score.
I think if companies were fully transparent about what they're doing with the data, and made sure almost all of their users knew exactly what they're doing with it (meaning it shouldn't just be in a privacy policy no one ever reads, but should be actively "promoted" somehow within the service, every time the data is collected and used), people would be a lot more careful about what data they share through these services.
This is exactly why these companies don't want to be more transparent unless they absolutely have to (like what the EU is doing to Google). But it really should be regulated by governments, because I think it's a very "fair" thing to require - telling people everything you're doing with their info. It's not about restricting data collection through regulation, just being transparent about it. That might lead to less useful information for the data collectors, but people should be informed, and that should trump everything else.
My intuition is that this will either lead to people going to other providers who do the same but do not mention it or lead to people not caring.
I mean, a large number of people here do know that machines scan their email. In what manner do the majority act? That shows us the revealed preference.
Ads are a by-product. The aim of Google is to build AI, and for training AI it needs access to all the data about you. Maybe Google has already succeeded and the singularity has already happened ;-)
End to end means that your computer is the only one that can read your Gmail email, because the key is stored on your computer and not in the cloud. The whole reason we all moved to webmail from POP3 is to have access to our mailboxes on any device. If you put the key in the cloud and make it decryptable by some user-known data (like a password), there's very little point in having the key: brute-forcing the user's password, and hence the key, becomes very simple. Also, any legitimate service provider is going to provide a backup way to "recover your password", which means they have a shortcut to decrypting your data. So you can have one or the other, not both. Either you have simple, easy access to your data from multiple devices, or you have secure end-to-end encryption.
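To make that point concrete, here is a minimal sketch (assuming a PBKDF2-wrapped key; the parameters are illustrative, not any real provider's design) of why a cloud-stored key protected only by a password is just as weak as the password itself:

```python
import os
from hashlib import pbkdf2_hmac

# A cloud-stored key wrapped by a user password: the wrapping key is
# derived from the password, so the scheme's effective strength
# collapses to the password's entropy, however strong the inner key is.
salt = os.urandom(16)          # stored in the cloud alongside the wrapped key
password = "hunter2"           # typical low-entropy user password

wrap_key = pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

# An attacker who obtains the salt and the wrapped key only needs to
# try candidate passwords, not the full 2**256 key space:
guess = pbkdf2_hmac("sha256", b"hunter2", salt, 200_000)
assert guess == wrap_key       # password guessed -> key recovered
```

The "recover your password" escape hatch the parent mentions is even worse: it only works if the provider holds a decryption path that bypasses the password entirely.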
Text advertisements are cheap, bandwidth-wise. I wonder if they couldn't just send everybody dozens of advertisements and decide on the client side what to show. Or even download dozens of MBs of graphical ads with each app update, or the first time you visit the site.
You could go further and do something like, email sent by people to people is off-limits and will be end-to-end encrypted and not looked at - but (almost) all automatically generated email is fair game and will be processed. Google already does that with Google Now (and shows you your flights, package deliveries etc.). I really prefer them making more explicit what they look at and what not.
> Text advertisements are cheap, bandwidth-wise. I wonder if they couldn't just send everybody dozens of advertisements, and decide on the client side what to show. Or even, download a dozens of MBs of graphical ads with each app update / the first time you visit the site.
Shh, don't tell everyone about my next project. I think it can be done with images as well: sending five of them for various demographic groups would be doable and should create enough noise (one tampon ad, a video game ad, an ad for Amazon, a Mountain Dew ad, and a retirement plan ad, for example). Those who care not for anonymity but for bandwidth can opt out and only download the ads they need.
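The idea above can be sketched in a few lines. Everything here is hypothetical (the `Ad` type, the bundle, and the profile tags are made up for illustration): the server sends everyone the same bundle, and the device picks what to show, so the server never learns which ad matched.

```python
from dataclasses import dataclass

@dataclass
class Ad:
    name: str
    target: str  # demographic tag; a real system would use richer targeting

# Identical bundle for every user, so no targeting signal leaks upstream.
BUNDLE = [
    Ad("tampon brand", "women-18-45"),
    Ad("video game", "gamers"),
    Ad("retirement plan", "55-plus"),
]

def pick_ad(local_profile: set[str]) -> Ad:
    """Runs on the client; local_profile never leaves the device."""
    for ad in BUNDLE:
        if ad.target in local_profile:
            return ad
    return BUNDLE[0]  # fallback when nothing matches

print(pick_ad({"gamers"}).name)  # -> video game
```

The trade-off is exactly the one noted above: every user pays the bandwidth cost of the full bundle in exchange for the noise.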
> The same reason Gmail can't work with end-to-end encryption--they want to advertise at you based on message content.
Facebook can do fine without that information: they have more information than they can develop ad services for at the moment; they could use a lot more engineers though. The goodwill from the dev community in allowing appropriate targeting is far more important to them.
Two good examples being location and language: you can advertise for people in a certain place or who speak a certain language, but there are many unserved combinations, like tourists, language minorities and commuters. In my case, I’d love to see ads on learning Swedish: Facebook can see some of my friends speak it, and this is new.
Source: worked for FB, but in Engagement, not the Ads team; offered a lot of targeting suggestions, but most were "presumably not the biggest opportunities (they) could work on". At Facebook's scale, we are talking really big numbers.
I think the more likely scenario is they know the value of the conversations they datamine, but don't want to lose customers to these other message apps that tout their encryption. So they do the bare minimum to support encryption while still having access to most data.
Over here in the real world, Facebook does not need to worry about "[losing] customers to these other message apps that tout their encryption" because the latter are not even in the game. There are only a few messaging apps that matter, because the value of the app is in its network and who it can connect you to rather than some feature checklist for the HN crowd.
Dropping secure E2E encryption into an app that is deployed on hundreds of millions of phones across the globe is a game changer. Period. Once you climb down from your lofty ivory tower and consider the impact this will have globally, then perhaps you will be a bit less dismissive of the goals and efforts of the team that convinced a company that lives on data to deliberately blind itself to some of it for the sake of their users' privacy and security.
I can't tell if you've deliberately misinterpreted evgen's comment or not. But this feature is being deployed to hundreds of millions of phones. Even if only 1% of people use this, that's millions more people using E2E secured communication. Hardly "barely anyone".
This does seem a likely scenario. Unfortunately, all the stories over the last few years have severely eroded the benefit of the doubt that I give these companies, which is a great shame. I imagine others, because of this, approach similar moves with great suspicion.
"brute-forcing encryption on all of these is probably not possible. A small % marked as 'secret conversation'? Much easier."
They probably wouldn't have to brute-force them. Backdooring's still quite possible - the NSA are perfectly capable of compelling Facebook to install a keylogger in their app.
Given what modern science knows about mathematics and computer science, you cannot brute-force decrypt messages encrypted by Signal Protocol. Not millions, not hundreds, not tens, not even one.
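A quick back-of-the-envelope calculation shows why, assuming roughly 2^128 effective security (a conservative floor for the Curve25519/AES-256 primitives Signal Protocol is built on) and an absurdly generous attacker:

```python
# Why brute-forcing even a single Signal-encrypted message is off the table.
ATTEMPTS = 2**128                 # conservative lower bound on required work
GUESSES_PER_SEC = 10**18          # wildly generous: a billion billion per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years = ATTEMPTS / (GUESSES_PER_SEC * SECONDS_PER_YEAR)
print(f"{years:.2e} years")       # ~1e13 years, ~700x the age of the universe
```

And that is the time for one message; the ratchet gives every message its own key, so the work does not amortize across a conversation.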
This makes a lot of assumptions, like the NSA hasn't broken algorithms we consider secure, that they're not somehow sharing keys, or using a gimped algorithm on purpose, etc.
Sure, but those assumptions are implicit in the statement I replied to: "Multi-millions of FB messages must be sent every day, brute-forcing encryption on all of these is probably not possible. A small % marked as 'secret conversation'? Much easier."
If you don't trust Facebook, you don't need to evaluate the quality of their implementation of secret messaging, you just shouldn't use it.
The NSA most likely hasn't broken AES or ECC. And the rest of the arguments don't apply in this case; the parent was talking about how having a few users use encryption makes it easier for Facebook to compromise security. Using gimped algorithms or sharing keys would affect both scenarios equally.
They have taken exactly the same approach as Google with the Allo service. This is most probably done because people want to use chat bots and services, and these do not work with E2E encryption. Well, they could work, but it would be dishonest branding, as the messages would leak.
It could work, but as I said, it would be dishonest. If a bot can read your messages then you have to trust a third party not to leak or store them. I believe that the goal of E2E encryption is to remove the need to trust somebody. (Of course you still have to trust FB not to backdoor or hinder the encryption, but you do not have to trust the third parties.)
I do not speak of chatbots such as Cleverbot. For example your bot will listen to your messages and provide contextual information, such as theatres airing a movie you are currently talking about. The bot also probably needs to keep track of the context, which probably means keeping the chat history for some time. In order to learn the chatbot needs to keep the history forever and scan it for ML purposes. All of these mean that the bot has to keep the history somewhere unencrypted, basically nullifying the benefit of E2E.
Brute-forcing is infeasible. The meta-data is certainly visible however, so intelligence agencies could glean a lot of information about certain users from other sources. You're right that "private conversations" does throw up a red flag.
You answered your own question. No, serious threats do not use Facebook Messenger. The reason the deployment is limited is technical (for now), not a scheme to corral suspects. Metadata is more important than content, and they will always have the metadata.
An excellent point; however, I will add to your thinking that we want our government to be able to eavesdrop on conversations provided it's done constitutionally, with probable cause, in an open court, and with a warrant.
If we can't get those things then it's pointless to fight back with encryption. We must have our constitutional protections.