What really impresses me about Viber is the way they went all out and splurged with an honest to god penultimate "e" before the final "r". Most dot-com companies would have settled for "Vibr", but they went the distance and bought an authentic luxurious vowel, precisely where it was called for, without going overboard and throwing in a sometimes-vowel "y" in place of the "i". Very bold and straightforward spelling, I must say. Color me impressed!
No, you misunderstand: I am truly and earnestly impressed by their good spelling, not criticizing any bad grammar. If they'd named it "Vybr," it would have come off like Steve Buscemi holding a skateboard over his shoulder wearing a MUSIC BAND t-shirt, desperately trying to appeal to the youth demographic.
Or ... people name their products/companies based on their ability to secure a proper domain name, not because some hipster startup founder is trying to appear edgy. Does Viber seem like a name that would have been taken?! Listd.com vs. listed.com was one I had to deal with some time ago.
End-to-end (E2E) encryption code needs to be open source, and vendors that don't agree to an audit should be considered insecure; this holds true for WhatsApp, which declined to allow its E2E code to be audited.
Also, message metadata is still leaked by all of these E2E implementations, and that needs to be fixed.
Would open sourcing the Whatsapp client hurt Whatsapp in any significant way?
I mean, sure, there could be "Whatsapp clones" (aren't there already?!), but wouldn't Whatsapp still benefit from the phone-number user base it has, and thus keep the lock-in on its users that it already enjoys?
> I mean, sure, there could be "Whatsapp clones" (aren't there already?!)
Sounds funny if you consider that Whatsapp itself is just a branded deployment of the FOSS XMPP server ejabberd, with federation stripped out. Plus a client app implementation, of course.
Come on, it would definitely still be possible with AGPL code. What brought people to whatsapp is its ubiquity and the fact that it just works, and what keeps them from going elsewhere is the inevitable network effect.
Nothing in the AGPL prevents either from occurring.
I think you are misunderstanding jeena. If ejabberd were AGPL, then a fork or instance of it like Whatsapp would be legally obliged to open up their code as well.
No idea whether the assumption that Whatsapp is just a branded deployment is true.
I still don't understand how opening up the codebase would prevent them from being successful.
As the countless messaging apps out there demonstrate, the value of communication apps lies in the network they build. If they provide a decent UX and successfully bootstrap a large enough network, it won't matter if their code is open.
I don't think that was meant as an argument against it; at least, I agree. The situation is only possible because the AGPL is not used for the free component, but Whatsapp should have no problem opening up their code.
Wireshark, Xposed, and some creativity, and you have your own E2E encrypted solution with OOB (out-of-band) keys, for any messenger. It is not done because there is no demand.
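For the curious, here is a minimal sketch of what that roll-your-own approach could look like, assuming both parties have already exchanged a 32-byte key out of band and are willing to paste opaque blobs into whatever messenger they already use. It uses PyNaCl (libsodium bindings); the function names are purely illustrative, not anything a real messenger exposes.

```python
# Minimal sketch of layering your own E2E encryption over any messenger,
# assuming a secret key exchanged out of band (e.g. in person).
# Uses PyNaCl; the helper names are hypothetical placeholders.
import base64
import nacl.secret
import nacl.utils

# Key generated once and shared out of band; never sent over the messenger itself.
key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
box = nacl.secret.SecretBox(key)

def encrypt_for_transport(plaintext: str) -> str:
    """Encrypt and base64-encode so the ciphertext pastes into any chat app."""
    ciphertext = box.encrypt(plaintext.encode("utf-8"))  # nonce is prepended automatically
    return base64.b64encode(ciphertext).decode("ascii")

def decrypt_from_transport(blob: str) -> str:
    """Reverse of encrypt_for_transport(); raises CryptoError if tampered with."""
    return box.decrypt(base64.b64decode(blob)).decode("utf-8")

# The messenger (WhatsApp, Viber, SMS...) only ever sees the opaque blob.
blob = encrypt_for_transport("meet at 6")
assert decrypt_from_transport(blob) == "meet at 6"
```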
One great benefit of Ricochet is that you can choose to exclude those with whom you are communicating from your COMSEC threat model, due to the lack of (mostly connection) metadata availability.
WhatsApp's "E2E" code is from Signal, which is open source.
This zealous belief that all secure cryptography must be open source is something I hear a lot from open source advocates, but not so much from cryptography engineers.
Is the encryption/decryption code in WhatsApp open-source? I haven't been able to find it. My assumption was that they conform to the Signal encryption protocol, which is openly specified, but that the code they use to do so is not open-source.
The value is not just the public display of the source code for auditing, but the validation that what you are running was indeed compiled from that code.
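As a rough sketch of that validation step, assuming the vendor's build is reproducible (compiling the published source yields a byte-identical artifact) and using hypothetical file names:

```python
# Sketch of the validation step: with reproducible builds, anyone can compile the
# published source and check that its digest matches the binary they were shipped.
# File names are hypothetical placeholders.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

shipped = sha256_of("messenger-shipped.apk")   # what the app store gave you
rebuilt = sha256_of("messenger-rebuilt.apk")   # what you compiled from the public source

print("MATCH" if shipped == rebuilt else "MISMATCH: binary may not come from that source")
```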
You keep saying this, but how do we know they are actually using the code from Signal without access to source and reproducible builds? Last time you told me it was easy and legal to reverse engineer the code. I then asked you what tools you use to do that and you ignored the question.
To reverse engineer code? For big projects, I used IDA, like everyone else does. For smaller projects, I used Hopper, or, for architectures that IDA and Hopper didn't support, I'd postprocess binutils output.
For an example of a more sophisticated approach, look at:
Remember, in this case, it's especially easy to reverse, because you have the source code; all you're doing is matching the control flow graph to the original source.
Thanks for making this response. Genuinely interested.
[To whomever downvoted... I guess I get downvoted for not showing gratitude immediately after he posted a reply? This is why I don't post here often. Quite mean people here.]
If you want to look at the Android APK, you can use apktool to get back Java source. Hacked clients like WhatsApp+ even build new derivative products using that output, and spammers do it to get code they can use to inject messages onto the network without having to write their own software.
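For reference, the decode step being described can be scripted; a rough sketch, assuming apktool is installed and using a placeholder APK name (apktool yields resources and smali, and decompilers such as jadx can go further towards readable Java):

```python
# Hypothetical wrapper around apktool's real "d" (decode) command.
# The APK filename and output directory are placeholders.
import subprocess

def decode_apk(apk_path: str, out_dir: str) -> None:
    """Unpack an APK into resources + smali for manual inspection."""
    subprocess.run(["apktool", "d", apk_path, "-o", out_dir, "-f"], check=True)

decode_apk("whatsapp.apk", "whatsapp_decoded")
```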
I think it's great when people further verify WhatsApp's client security; please share your analysis!
Thank you for your tip and your contributions to open-source Signal. I appreciate that you want to get encryption into as many hands as possible, even if circumstances may not be ideal.
My primary concern here is a long con. Everything is probably okay now, but after a while, people will stop looking and verifying. With WhatsApp keeping source closed, it makes that period of time shorter, I think. I will try to work your suggestion into my job :) If I'm paid to do it, I can keep doing it indefinitely, even working on tools to automate it.
By that logic the entire application would need to be open source, because nobody would start out by targeting the crypto if they wanted to spy on someone.
Open sourcing the crypto code wouldn't be ridiculous, but it would be pointless.
What I find ridiculous, though, was:
>vendors that don't agree to an audit should be considered insecure
The same thing applies to every single part of the application, but not equally. No attacker is going to start out by trying to break the crypto unless it's obviously broken. "Normal" bugs are far more common and often more dangerous (think RCE, or in the case of many modern apps: XSS).
ryanlol is correct though. If you open source just the end-to-end crypto part, it doesn't mean there's no backdoor elsewhere - it could easily leak the keys or whole conversations.
The second problem is that you don't know if that source is what ended up in the binary.
So yeah, unless you can compile the whole thing yourself, it should not be considered secure.
What, to you, are the steps towards mass adoption of universal turnkey state-level OPSEC for everyday use?
___
So, my point about the E2E code being open source is that that code should never be the basis for a business model, so to me it makes sense for it to be open source. For the larger system, that's why I'm saying there needs to be an audit.
Also, you mentioned ATP and I agree, which is why it is troubling to me that Signal, instead of guarding metadata, actively collects it.
Please let me know if I have missed anything you'd like me to address. And I really would be interested in your thoughts on the question above, in as much detail as you're able to share. Thanks!
But you need far more than just the crypto code to create a client; I think open protocol specs would already achieve this.
And my main complaint was with the claim that open sourcing their crypto code would somehow make a meaningful difference to the security of these applications to the extent where you could consider all applications that haven't done so "insecure".
Without open-sourcing the crypto, they could be just doing rot16($message) for all we know. Open source is a requirement for being considered secure. It doesn't mean they aren't secure if they aren't open source, but that you shouldn't consider them so, because you don't know whether they are or not.
> Without open-sourcing the crypto, they could be just doing rot16($message) for all we know.
There are two things you can do:
1. Watch the outbound traffic and attempt known-plaintext attacks
2. Reverse engineer the app
Neither is particularly difficult. Most Android apps are trivial to break apart using Lobotomy. A large swath of software security folks specialize in binary auditing.
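A crude sketch of the first option, assuming you captured the app's traffic to a pcap file (with tcpdump or Wireshark) while sending one distinctive message: scanning the raw capture for the plaintext, or trivially shifted variants of it, is enough to catch the rot16-style non-encryption mentioned upthread.

```python
# Crude illustration of approach 1: send a distinctive known message while capturing
# traffic, then scan the raw capture for the plaintext or trivially "encrypted"
# variants of it. A hit means the app is not doing real encryption; no hit proves
# very little. The capture filename is a placeholder.
def rot_n(data: bytes, n: int) -> bytes:
    """Byte-wise Caesar shift, standing in for any trivial 'encryption'."""
    return bytes((b + n) % 256 for b in data)

known = b"canary-message-1234567890"
with open("capture.pcap", "rb") as f:
    raw = f.read()

suspects = {"plaintext": known, **{f"rot{n}": rot_n(known, n) for n in range(1, 256)}}
hits = [name for name, needle in suspects.items() if needle in raw]

if hits:
    print("FAIL: message appears in traffic as:", hits)
else:
    print("no trivial leak found (which proves very little on its own)")
```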
>The enhanced delete feature, meanwhile, has been in the app for a while; it is a way for users to wipe a conversation not just on their end, but on the recipient's phone as well. You can think of this as Viber's answer to ephemeral messaging, but with a more manual approach.
Does anyone else think this is a violation of users' rights? If I've been sent a message, it shouldn't be possible for the server to delete it. I could screenshot everything, or run a tweak that saves everything.
Imagine if Gmail started allowing senders to remove email they've already sent and that has already been delivered.
Imagine you're on the run from the mafia and communicating with your parents via a messenger. You sent your current location, and after a few hours mafia people approached your parents and took their phone. Now, if you sent a message that deletes itself, you can be sure that, unless your parents intentionally preserved it by copying and pasting or taking a screenshot, this location isn't revealed to the mafia. If your messenger didn't have this feature, your parents would have had to manually delete the sensitive message — what if they forgot?
This feature is quite useful, even if not 100% foolproof.
I think it's reasonable to allow senders to revoke a sent message, particularly if it was sent only a short time ago. The only assurance you would have that this works is if both sender and receiver are on the same system. The question is a little trickier if the receiver has read the message. And I agree that after some period of time, particularly after the receiver has read the message, the message should be inviolate on the receiver's side.
If it hasn't been delivered to the user's device, fine. Once it's been delivered, it shouldn't be modified.
I suppose I might be OK with individual exceptions to this rule, if some sensitive message was sent, after the sender goes through customer service. It certainly shouldn't be a one-click delete.
It's the same thing people complain about wrt unfree software, e.g. Kindle deleting purchased books remotely if they were uploaded by someone without the rights to the book.
The difference with Snapchat is that their whole gimmick was the deleting thing. You had no expectation of permanency. But I'm sure many use Viber with the expectation that they'll be able to access old communications; after all, it's on their own device.
Sure, they have the right to do it, in the same way that Gmail could implement the same anti-feature, and in the same way Amazon had the right to delete books. But that just causes me not to use them, just like I'd stop using Gmail if they did that. (I still use a Kindle, but I've only ever purchased one book, and I read it right away. I'm not dependent on them not deleting my books.)
If a service offers a feature, it's within customers' rights to use that feature. If Viber is unilaterally deleting messages, that's an issue (as ephemeral messaging doesn't seem to be their primary intent). But if Alice is able to delete her messages to Bob on Bob's device, and both know this (or should know this, they chose the messaging platform), Bob has not had his rights violated. He's complying with Alice's request (automatically).
The comparison to Amazon is not apt. That was Amazon, not the seller of the book, that chose to delete it (in that case because the seller didn't have a right to sell, though given it was 1984 it was rather amusing/ironic). If, for instance, Amazon's digital platform allowed sellers to remove their content from Amazon's listings AND from "buyers'" devices, then 1) no one should be too surprised when it happens, 2) no one should use that platform.
The comparison to Amazon is about the remotely deleting part, not the why part. I have a problem with anything on my device being deleted without my consent.
You used the word "sent" earlier and the phrase "on my device" now. But it seems the app isn't transferring the message to you, only presenting it -- like a website. (Does the message exist outside the app?)
Not being pedantic for the sake of it, just pointing out the language you are using is from a position of ownership -- you receiving and owning a message sent to you, and it is then being deleted -- while the app seems to be retaining all rights with the sender, who merely uses the app to present a message to you, and can revoke viewing privileges at any time.
I think that's irrelevant. It's part of the product, is an intentional feature that has value, and is not even an uncommon feature (it exists on Instagram DMs).
> Does anyone else think this is a violation of users' rights?
Not at all, and it's kind of strange to me that you do. It's just... how the app works. Slack lets you do it, for example, and I've used it occasionally. Heck, reddit (and I think HN?) let you delete comments, even if they were previously, uh, transmitted to a reader's browser.
The Gmail example is irrelevant because that's not how email works. If a new company/protocol came about that wanted to try its hand at a different way of sending online letters, one that let you delete them later, then I'd have no problem with that. I might not use the service, but there's no foul play or anything.
>If a new company/protocol came about that wanted to try its hand at a different way of sending online letters, one that let you delete them later, then I'd have no problem with that.
I don't think this is a particularly well-advertised feature of Viber.
Neither reddit nor HN is meant to be a means of private communication. Generally you can't view them offline, and so on. I'm not sure about Slack.
>that's not how email works
And I think the average user's expectation is that a messaging app will work the same.
By the way, reddit won't let you delete PMs you've sent to another user.
I highly recommend following the updates from Frederic Jacobs on his Twitter. He has already found a number of significant flaws with Viber encryption...
It sounds like there aren't yet any published details on what crypto Viber is using. Can we withhold judgement until we get that? I'm fine with closed source, but not with no technical documentation.
Most people who try to implement cryptographically secure messaging get it badly wrong.
More great news from the world of communication, but yet again I'm wondering how we can trust the encryption to really be end-to-end without access to the code. Are the messages still traveling through Viber's servers? Is there any way to know?
"I'm wondering how we can trust the encryption"
You can't.
Not unless the company employs security experts, has a significant bug bounty program (with significant rewards), is open to a degree about its security architecture, and is popular enough for white hats to actively seek out bugs.
This isn't about trusting that the company isn't trying to dupe you. It's about trusting that the company can implement security properly, and that enough "good" people will find security flaws before the "bad" guys do.
As for the good people vs. bad people argument, it should be noted that the good people have a harder job than the bad people. For the bad people to do their job, they only have to find one exploit, whereas the good people have to find most or all of them to make the system secure. That's why employing people to work on security matters (whether through a bug bounty program or through direct employment); a company that values security shouldn't rely on unpaid volunteers alone.
How much do you think a WhatsApp passive decryption bug would be worth? They don't seem to have a bug bounty, I wonder how much something would go for on the black market.
Viber has become a shadow of its former self. It regularly crashes all phones in my family, you get message notifications way too late, and it takes up a ton of storage on your phone.
I wouldn't touch Viber with a ten foot pole, even if their entire crypto implementation was open source (which it is not). The company was founded by an ex-military (some sources say ex-Mossad) dude. He is also connected to several spyware applications.
“Talmon served for four years in the Israel Defense Forces and held the position of CIO of the central command. He graduated Cum Laude from the Tel-Aviv University with a degree in Computer Science and Management.”
I always enjoyed using Viber, but I never liked that it featured no encryption. It is an alternative to Skype for me: I could video call from my phone, laptop, tablet, etc. and do audio calls. Skype became insecure: it leaks your IP address, it has terrible synchronization issues, chatting is really buggy now (messages show up in the wrong place for me), and the Linux client for it sucks. Viber has a good all-around client for every OS I've used it on (including Linux).
This is fantastic news. As others have said, maybe they should open source the portions of the code that deal with E2E encryption so it can be audited.
"Along with the encryption, there are some other privacy features getting added into the latest version of the app. Hidden chats will give users the ability to essentially “hide” certain conversations from their usage log, accessible only if you know a specified four-digit PIN"
Can anyone with Viber describe what happens when you get a new message from a hidden chat partner? Does a notification show? How does that notification look?
I was forced to uninstall Viber on account of the spam I was getting. I have been trying to figure out how the spammers got my Viber contact information.
TLS would normally be used between the client and server. Viber may use TLS for that too, though WhatsApp and Signal use Noise pipes for that purpose. The point of end to end encryption is that they're also encrypting the messages so that they can't be read on the server. Don't know what Viber is using, but moxie posted a link to the details about WhatsApp's implementation. https://www.whatsapp.com/security/WhatsApp-Security-Whitepap...
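To make the distinction concrete, here is a toy sketch using PyNaCl's Box. This is not how WhatsApp or Viber actually implement it (real deployments use Signal-style ratcheting rather than one static key pair); it only illustrates that the sender encrypts to the recipient's key, so the relaying server sees nothing but ciphertext regardless of what transport encryption sits on top.

```python
# Toy illustration of end-to-end encryption versus transport (TLS) encryption:
# Alice encrypts to Bob's public key, so the server in the middle only relays
# bytes it cannot read. A simplification, not any real messenger's protocol.
from nacl.public import PrivateKey, Box

alice_sk, bob_sk = PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
to_bob = Box(alice_sk, bob_sk.public_key)
ciphertext = to_bob.encrypt(b"hello bob")

# The server only ever handles `ciphertext`; it learns nothing about the content
# (metadata -- who talks to whom and when -- is another story).
server_sees = ciphertext

# Bob decrypts with his private key and Alice's public key.
from_alice = Box(bob_sk, alice_sk.public_key)
assert from_alice.decrypt(server_sees) == b"hello bob"
```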
The problem with Telegram is that it doesn't default to E2E; using it means going through the additional step of starting a secret chat, which I venture a lot of people won't do unless they're very conscious of it.
I keep seeing this accusation flung around whenever someone mentions Telegram, but never any proof. I remember Telegram had a vulnerability once, and then it was patched, just like any other security software.
You shouldn't throw around accusations without proof.
The word "broken" means susceptible to practical attack, and attacks aren't always of the "cryptanalyze the ciphertext and read the plaintext because you're a clever mathematician" variety.
To add, Telegram's crypto is completely and totally off-the-wall crazy in terms of design. Add to that the fact that there are cryptographic breaks (though not "we can read your ciphertext" breaks), and you should be careful.
iMessage would have been reasonably secure had they used AES-GCM or a MAC. The design at least made sense: compose a scheme out of known primitives. They just missed (very important) details. Telegram is just turtles all the way down.
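To illustrate the AES-GCM point, here is a short sketch using Python's cryptography package: with an AEAD mode (or an explicit MAC), a ciphertext that has been tampered with in transit fails authentication instead of silently decrypting to attacker-influenced garbage, which is roughly the class of detail iMessage got wrong.

```python
# Sketch of why AES-GCM (an AEAD mode) or an explicit MAC matters: flipping a bit
# in the ciphertext makes decryption fail loudly rather than yield silently
# corrupted plaintext.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # GCM nonce must be unique per key

ct = aesgcm.encrypt(nonce, b"attack at dawn", None)  # no associated data

# Attacker flips one bit in transit.
tampered = bytearray(ct)
tampered[0] ^= 0x01

try:
    aesgcm.decrypt(nonce, bytes(tampered), None)
except InvalidTag:
    print("tampering detected -- decryption rejected")
```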
Signal is a niche service. Its selling point is security. Viber and WhatsApp are general services and their selling point isn't security.
I'm not trying to imply that WhatsApp is the first secure chat, but I am saying it is the first major platform to switch to end-to-end security, and this looks like the first step in a trend of switching.
Not with that little info from Viber. According to Frederic Jacobs (one of the Open Whisper Systems devs), they are relying on MD5 for cryptographic hashing :(