New "Surveillance-Proof" App To Secure Communications Has Governments Nervous (slate.com)
127 points by kunle on Oct 18, 2012 | 83 comments



Janke assembled what he calls an “all-star team”: Phil Zimmerman, a recent inductee to the Internet’s Hall of Fame, who in 1991 invented PGP encryption, still considered the standard for email security. Jon Callas, the man behind Apple’s whole-disk encryption (which is used to secure hard drives in Macs across the world), became Silent Circle’s chief technology officer.

Yeah... that might actually qualify as "all-star"


There are actually a few more developers at Silent Circle who are just as good as those two, so it really is an amazingly strong team.


I'm not familiar with the specifics; why do you think they're not real all-stars?


I think he is being serious, not facetious...


Quotes confused me a little.


And the ellipsis.


I've always had this sneaking suspicion that Microsoft bought Skype solely under the direction of the government so that it could be centrally administered and monitored by a vendor that is willing to do the job. There has certainly never been a business case for it that would support the obscene valuation, even the first time around. I usually see about 20m users logged in, and most of them are probably not active, let alone terminating calls to a telco, so they represent zero revenue.


Along the same lines, I've always had the thought that the reason IPv6 adoption is taking so long is because encryption is actually built into the protocol.

What happens to all those billion dollar wiretapping facilities all over the country once they find out they can't just siphon internet traffic into their facilities anymore?


"Along the same lines, I've always had the thought that the reason IPv6 adoption is taking so long is because encryption is actually built into the protocol"

Are you talking about IPSec? This is still optional in IPv6 and available using IPv4.

I think the reason adoption is slow is that it's not easy to convince managers to allocate people and time to it... "So, it's not yet broken but you want to fix it?"


Might be why Microsoft was willing to pay twice as much as Google for it, too ($8 billion), for something where it's hard to imagine they'd make their money back within the next two decades, if ever. So maybe they have a different business model we're not supposed to know about.


It's good to hear they will be releasing the source! It would be rather insane for anyone serious about security to use a tool like this without the ability to inspect the source. The article states it will be under a "noncommercial open-source license".


I'm interested in whether they will release enough to reproduce the entire system, or only the client side. It's apparently using client-to-client protocols, but there is obviously some level of discoverability provided by the network, and that is a monetisation avenue for the company.


Can't believe people are getting so jazzed over proprietary encryption technology! It's a truly horrible idea, especially as there are already existing F/OSS alternatives, such as https://chatsecure.org/ for iPhone.


You overestimate the importance of source code availability for security.

Counter-example: http://digitaloffense.net/tools/debian-openssl/

What matters is who writes the software, how experienced they are and what the validation process is.


The Debian OpenSSL case is a really bad example and does not prove or disprove anything. It certainly does not prove that secret security is somehow equal to or better than open.

I can give any number of examples of bad medicines whose ingredients were well publicized, documented, and reviewed, and which still performed badly. In what way would that prove that secret sauce medicine is equally safe, or even safer, to use? I would never trust any medicine claiming secret sauce.

Any security can fail, but if you can't review it, the security is never trustworthy. Never. Not once! You might trust the company/producers of it, but the same goes for medicine. I might trust a doctor to give me a pill without telling me what's in it, but somehow I would not trust the same doctor if he refused to tell me what's in it.

Medicine requires a very high trust level because it can cause bodily harm. We don't trust secret sauce medicine. If you need to put the same, or even higher, trust in a piece of software, why would you trust secret sauce software?

Sure, that oil which was created from snakes might cure cancer, fix your infected wound, and solve any other ills you got. It might also do nothing and thus you die.

Sure, that software might protect you from an oppressive state, hide you from mobsters with hitmen, or protect the witness. It might also do nothing and thus you die.

Sure, you personally might not be able to review medicine or software, but knowing that someone can review it will make you feel safer. It also helps to know that as soon as someone does find a bug in medicine or open software, everyone can identify what is affected by it. With secret sauce, who knows what else might include a copy of it, if they refuse to disclose it.


Without public availability of the source code and an auditable trail from source to build, there is really no way to trust it.

Also, this: it reserves the right to shut off that person’s service and will do so "in seven seconds."


Without public availability of the source code and an auditable trail from source to build, there is really no way to trust it.

This is where I feel you might be a little bit dogmatic.

You don't need the source code to audit software. Software that is heavily used gets audited.

Microsoft software is probably more scrutinized than any open-source software (I'm not implying it's more secure, just that it's more analyzed).

Security is a question of trust, not a question of source code. Even if you do the audit yourself, it's a question of trust: it means you trust your own abilities to evaluate the security.

I'll go a little bit further.

Did you check that the computer you bought isn't rigged? Maybe someone can remotely control your webcam or eavesdrop on your keyboard.

Did you check that the operating system you have hasn't been compromised? Maybe someone intercepted your download and patched it on the fly to insert a backdoor.

Is your home physically secure? Maybe someone is copying your hard disk every day.

You're right when you say Silent Circle should be scrutinized and criticized.

Nevertheless, I disagree when you imply that the unavailability of its source code is a show stopper. Source code only makes one small part of the security audit a little bit easier.

Security is a process, not a feature.


BS! Crypto software has to be open source to be taken seriously. All the other things you write about are additional factors that count, but they are not related to this one argument, so you are trying to muddy the discussion. It only shows that either you think your readers are not able to think clearly and in a well-structured way, or you are not able to do it yourself.

Without source code, no crypto routines can be trusted, period. Anything else might work in the fake industries, where producing marketing lies is a standard way to make money, but not in the real crypto world.


Your opinion only makes sense if you think P(your analysis of the source code is correct) > P(you can trust person X) * P(person X's analysis of the source code is correct).

Actually the right hand side is much more difficult to defeat because it involves more than one person.


It is impossible to achieve trust in a tool if its process and function are kept intentionally hidden away and your life is in the balance.

Knowing that medicine is openly reviewable creates a trust level that secret sauce never can achieve.

Knowing that airplane/train/building architect plans are openly accessible creates a trust level that secret sauce never can achieve.

Scrutinized security can sometimes help, but, again, would you trust secret sauce medicine just because 100,000 other people have taken it and, to your knowledge, no one died?

Unavailability of source code is a show stopper if you need to bet your life on the chance that it will perform correctly. Everything else is blind faith, and while it's true that some people will accept blind faith over real trustworthiness, those same people also form sects that refuse medicine and trust that a miracle will magically make cancer go away.

Security might be a process, but it requires a process that gives the person on the receiving end a means to assess trust. Secret sauce makes that inherently impossible. Historical information (like the Windows example) helps, but in the end it is just a black box with oil in it that says "made from snakes - cures everything". So far it has worked in some cases and not in others, and several times the sauce has been announced as "improved" with new versions. Still, would you prefer to bet your life on it, or on an openly disclosed medicine that might actually have been reviewed by a third party? Which one is more trustworthy?


Microsoft actually operates a shared-source scheme.

So, yes, Windows does have its source code audited by customers who are willing to pay the price.

EDIT: Grammar


Are those customers free to report security bugs, or are they forbidden by EULAs and NDAs?


I don't know the details, but I imagine they will need to be private reports.


The idea is about taking it to "trusted-by-me", rather than relying on "industry-trusted" or "trusted-by-someone-else" sources for analysis and verification.

That doesn't mean that the "trusted-by-me" source has the means or the ability to give it the OK, but more that I don't have to trust you at all.

Security can only be guaranteed by the people you trust, rather than the people everyone else trusts.


How many similar problems in proprietary systems were silently fixed? We'll never know...


If people are interested in this type of thing for Android, I recommend checking out the RedPhone and TextSecure apps.

They're free to use, all the source code is GPLv3 on GitHub, and RedPhone already has global calling coverage. The apps have been translated into 15 languages, and in my experience they're really dead simple to use.


I really hope that TextSecure will be ported to iOS.


Stop hoping and port it, then. The protocol[0] is based on OTR, which already has at least one Cocoa implementation in Adium. AFAIK you can't directly tap into SMS on iOS, so you'll either need to do a lot of copy/pasting or run over your own messaging network.

[0] https://github.com/WhisperSystems/TextSecure/wiki/Protocol


I'm pretty sure this kind of thing doesn't make governments nervous. For one thing, they can always place a virus or a bug on your phone. Or they can do it the old fashioned way and bug your car/office/bedroom.

This will only defeat the "dragnet" type stuff that combs through millions of conversations looking for keywords. I guess that's something, but if you've managed to attract the all-seeing eye you shouldn't be lulled into a false sense of security just because the link between your mobile and someone else's is secure.


Yes, but the "dragnet" type stuff is the more important one. If you "attract the all-seeing eye" to the point of getting personal attention (being hacked or physically bugged), it means the government already has more than enough reasons to get rid of you, and nothing can keep you safe from an AGM-114 flying through the window and/or enough red tape being unleashed on you to make you die in misery. They probably just need some information from you to go after someone else.

On the other hand, mass-scale surveillance enables you to actually find targets of interest out of a sea of people irrelevant to the case. The usual problem is that you know there are Bad Guys[0] out there, but you don't know where. Mass surveillance makes the search much, much less expensive. So they really care about it.

And there's also the issue of Big Data: whatever data they collect on people can in the future be used in ways we can't even imagine now but won't like when it happens. So the data itself is also a danger. Also, humans make mistakes and algorithms are not always good, and no one would like to have their home raided because some long-forgotten Bayesian filter buried deep in the system didn't use logarithms properly and introduced serious errors into its probability computations. That's why people care.

[0] - relative to what govt. sees as bad at the moment.


There are big concerns with mobile phones, especially with the radio baseband. I'm reluctant to consider phones genuinely 'secure' until the radio doesn't have access to the user OS or non-radio hardware, and the microphone and sensitive storage are physically separated from the radio.


> if you've managed to attract the all-seeing eye [...]

But I guess that governments expect large-scale surveillance to be the primary way to attract the all-seeing eye.

National security was done a certain way during the Cold War, when facing highly organized intelligence services. The swarms of nutjobs, loosely herded by amorphous terrorist organizations, are much harder to detect; many people probably have high hopes for big data and statistical methods to find them.


They can also just legislate you out of existence: "You want to operate here? You comply with the law. And that means a backdoor."

As long as this service is commercial (which apparently it is) it will have a payment gateway, and that point can be easily blocked by a local government.


So everybody but the United States will have secure communications. Race to the bottom, then? The place seems more Soviet every day.


The US has a lot more influence in these kinds of things than you would think. I'm continually amazed at how easily countries buckle when faced with, for example, financial industry sanctions. Look at the way copyright law has changed over the years.


Ouch - point taken.


My main question is: how are these keys generated and exchanged? Plain Diffie-Hellman is susceptible to man-in-the-middle attacks. You can eliminate this by adding public-key certificates to the mix, but how would Silent Circle manage those certificates? How easy would it be to forge an encrypted text from an account? Essentially, how does the app verify that the key it is given is legit? So many questions and so little detail.


For the voice app, it uses ZRTP. Basically, the initiator and responder perform an ephemeral DH key agreement. Both clients then independently generate a "short authentication string" (basically just two English words) from the shared secret they negotiated, and display those two words to the caller.

Both callers then read the two words to each other, and if they're the same, they know there couldn't have been a MITM attack. In the case where there's a MITM attack, each caller would have different key material, resulting in a different SAS. The protocol uses hash commitment and other tricks to make this really work in practice.

They haven't published the protocol for their chat app's encryption yet, but it sounds similar to OTR. While OTR has some nice tricks for verifying authenticity by using zero-knowledge proofs, it doesn't sound as if they have support for that sort of thing, and parties would have to make a call and read a SAS to each other over the phone.
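
To make the SAS idea concrete, here is a rough sketch in Python of the flow described above: an ephemeral Diffie-Hellman agreement, then a short string derived from the shared secret that both callers read aloud. This is not the actual ZRTP code; it uses X25519 from the "cryptography" package and a four-word toy list instead of ZRTP's hash commitments and the full PGP word list.

    # Rough sketch of the ZRTP idea, not the real protocol (which adds hash
    # commitments, the PGP word list, key continuity, etc.).
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    WORDS = ["adroitness", "adviser", "aftermath", "aggregate"]  # toy word list

    def sas(shared_secret):
        # Hash the negotiated secret and map a couple of bytes to two words.
        digest = hashlib.sha256(shared_secret).digest()
        return WORDS[digest[0] % len(WORDS)], WORDS[digest[1] % len(WORDS)]

    # Each caller generates an ephemeral key pair for this call only.
    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()

    # Public keys cross the (untrusted) network; each side derives the secret.
    alice_secret = alice_priv.exchange(bob_priv.public_key())
    bob_secret = bob_priv.exchange(alice_priv.public_key())

    # Both read their two words aloud; a MITM would have negotiated a different
    # secret with each side, so the words would not match.
    print(sas(alice_secret), sas(bob_secret))

If an attacker sits in the middle, Alice's secret is actually shared with the attacker rather than with Bob, so the two SAS readouts disagree and the callers know to hang up.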


So voice is an iteration on Phil's Zfone product he was pushing a few years back...


Yes, although they seem to have much better marketing this time around (a good thing).


They're also doing some iOS/mobile-specific tricks with Apple Push to do the key exchange (at least in the text app).


Actually, I found on their website that they use ZRTP. Phil Zimmerman wrote it and he is one of the SC founders. Wikipedia gives a nice overview: http://en.wikipedia.org/wiki/ZRTP


PAKE solves the MITM issue of Diffie-Hellman, so if they went the DH route, they probably used something along those lines.


The problem with crypto is not source code. The problem with crypto is not the protocols or the algorithms. Partly it is infrastructure. Mostly it is network effect. Where the cypherpunks have failed (multiple times) is getting strong crypto to be the "default option" for a large enough proportion of users that it spreads virally and takes over the world.

This is one more shot at it.


It's very hard to convince ordinary people that encryption benefits them. Encryption needs to be unbelievably easy to use in order to make the average joe-on-the-street use it. I've been involved with such efforts; the cold truth is "no one cares" (except people who REALLY care). So what usually happens is that very few people set it up, and the ones who do will struggle through to make it work, because they care.


I love the fact that he's an ex-SEAL. Dragging him into a little room at the airport to intimidate him might not work so well here.


It seems to me the way to get around this (for the nervous government) is to attack the iphone itself and capture keystrokes, nay?


> If authorities wanted to intercept the communications of a person using Silent Circle, it is likely they’d have to resort to deploying Trojan-style tools—infecting targeted devices with spyware to covertly record communications before they become encrypted.


The problem is, anyone who is serious about security is going to need to read the source and compile the binary they load onto their phone. Nontechnical people aren't going to be able to do that.


A tech guy can ask a non-tech guy if he can examine his phone. If the binary on the phone differs from any binary the tech guy can get by compiling clean source code, he can raise questions. This is how a group of tech guys who are in communication with each other can protect the privacy of a much larger population of non-tech guys.
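
As a minimal sketch of that check (assuming reproducible builds, which is a big assumption, and with made-up file names), the tech guy only needs to compare hashes:

    # Toy check: does the binary pulled off the phone match the one built from
    # clean source? Only meaningful if the build process is deterministic.
    import hashlib

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    built = sha256_of("silentcircle-built-from-source.apk")   # hypothetical names
    pulled = sha256_of("silentcircle-pulled-from-phone.apk")
    print("looks clean" if built == pulled else "raise questions")

In practice most build toolchains embed timestamps and signatures, so the hashes differ even for identical source; that is exactly the "auditable trail from source to build" problem mentioned upthread.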


Right; if you don't understand the technology (or like most of us, haven't taken the time to comprehensively review it) you have to rely on social proof.


Thinking about this really hard, I'm not sure how nontechnical people can ever be sure that they can trust technical artifacts?


Well with those two sentences you've pretty much ensured that this is never, ever possible. You'd have to have a trust system from the silicon up, signed bootloader, hypervisor, no exploits, etc to guarantee that the system you're booting is executing the binaries you think it is.

At some point "good enough" has to be enough. Okay, you built the APK, how do you know I don't control the VM and just swap out that JIT'd method with one that I've altered? (Repeat this until you get all the way back to the initial power-up)


Ah, good old "Reflections on Trusting Trust".


Really wish I could sign up and then hold my free month until after the Android app comes out. Guess I'll just keep checking back (or assume that when it happens, it will be mentioned here)


You might check out RedPhone and TextSecure (I'm one of the developers). They're in the Play Store and on GitHub.


How do you get a free month? This looks like it'd be fun to play around with.

Edit: Somehow didn't see the giant "subscribe now and receive a free 30-day subscription" on the homepage.



Wow I would love to learn the details of this technology.


The secure voice app is ZRTP (they are just using an existing open source ZRTP library). The ZRTP protocol is now an RFC: http://tools.ietf.org/html/rfc6189

The secure chat protocol they're using is something they developed and haven't yet published. It sounds as if it's similar to OTR, though. TextSecure (an encrypted SMS app I work on) uses a version of OTR adapted to the mobile environment, which is documented here:

https://github.com/WhisperSystems/TextSecure/wiki/Protocol


It sounds like fairly standard public-key encryption. Basically, each user has a unique private key and a corresponding public key that anyone can see. Combining your own private key with the other party's public key produces a value that can only be determined by someone holding one of the valid private keys (or an immense amount of time).

Check out this video which attempts to explain this type of encryption: http://www.wimp.com/howencryption/.


Open source alternative: Cjdns. http://en.wikipedia.org/wiki/Cjdns


This has been announced as open source as well, though it hasn't been released yet.


If these guys want it to go more mainstream, they should let people purchase pre-paid cards. I love the idea of easy encryption but hate the SaaS. (That's fine for web services, but this is more of a phone service. 20 bucks a month is too much for most people.)


They're selling prepaid "Ronin" cards. Or you can fund your account with a prepaid Visa card (hint: Pay in cash at a supermarket you don't normally frequent)

The SaaS part of it is pretty necessary. In order for your phone/pad to traverse a NAT firewall there needs to be a common device that acts as a matchmaker between callers. This also solves the problem of "What if the person I'm calling is at some random coffee shop, and not at home?", where they can't set up a firewall rule.
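
A toy picture of that matchmaker role (not Silent Circle's actual design, which would involve SIP registration and TURN/ICE-style relaying): each phone registers the public address the server sees for it, and a caller looks the callee up there.

    # Minimal UDP rendezvous server: phones behind NAT register the public
    # (ip, port) the server observes for them, and callers look each other up.
    # Real systems use SIP plus STUN/TURN/ICE; this is just the shape of it.
    import socket

    directory = {}  # username -> (public_ip, public_port)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5000))

    while True:
        data, addr = sock.recvfrom(1024)     # addr is the client's NAT'd public address
        cmd, _, user = data.decode().partition(" ")
        if cmd == "REGISTER":                # e.g. "REGISTER alice"
            directory[user] = addr
        elif cmd == "LOOKUP" and user in directory:   # e.g. "LOOKUP alice"
            sock.sendto(("%s:%d" % directory[user]).encode(), addr)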

But really, their initial audience is business people who won't blink an eye at $20+ a month to secure their communications for a multi-million dollar deal. Not cheapskates like me. :)


Thanks Chip.


Is Silent Phone just zfone 2.0?


So this is like Tor but for mobile phones? Very interesting.


Tor is like Tor for mobile phones (check out Orbot). =)

These applications have more emphasis on confidentiality than on anonymity.

The type of "onion routing" that Tor employs for anonymity is very difficult/impossible to pull off for applications which require extremely low latency connections, such as a realtime voice application.


Not really. This still has a central server.


Moxie's point is important, but a more important one is the difference between anonymization and encryption. If your connection is not encrypted, it may be possible to determine who you are, but something being encrypted doesn't by its nature mean that the users are unknown.

In this case, they know each other and communicate encrypted. In Tor's onion routing, the server and user don't "know" each other and they communicate via an encrypted channel. Thus, if someone listens to your tor connection, they still can't see your data [and/or guess at your identity].


I don't think it's worth emphasizing the confidentiality aspects of Tor's "link layer" protocol, given that users really shouldn't conceptualize Tor as something which provides confidentiality for their traffic.

Or as the Tor project says, plaintext over Tor is still plaintext: https://blog.torproject.org/blog/plaintext-over-tor-still-pl...


$20 is more than what I pay for my whole phone & data plan per month; that is much too expensive for me and my peers.


Every politician in any second- or third-world country would pay ten times as much for such a service, not counting people doing illegal things or things not approved by the ruling party/dictator, etc.

Having it as a nice-to-have in a free country is one thing; having your life or career depend on eavesdropping-free communication is another.


I would much rather have John Doe be safe from government surveillance than the government itself.


Nice job, but it is not available in Russia, and maybe not in other iTunes regions. What good does it do when it is not available?


> It will store only the email address, 10-digit Silent Circle phone number, username, and password of each customer.

Surely this is not correct.


I'm the guy who wrote the accounts management platform for Silent Circle. Rest assured, passwords are stored as PBKDF2 hashes. (I realize bcrypt is popular around here, but when it comes to crypto stuff, standards are a good thing, most of the time...)

edit: s/DK/KD/


How much computing power would it take to brute-force one of those hashes?

We know NSA has gargantuan parallel processing capabilities.


We tune the hash iterations to take a reasonably long amount of time on our modern hardware. That said, a dedicated and well-funded attack on a single hash could certainly crack it in a relatively short period of time (which is why we protect the hashes as if they were cleartext passwords...)
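
For what it's worth, the general shape of that storage scheme looks something like the sketch below (the salt size and iteration count are placeholders, not Silent Circle's actual parameters):

    # Illustration of salted PBKDF2 password storage; values are made up.
    import hashlib, hmac, os

    ITERATIONS = 100000  # tuned so each guess is slow on current hardware

    def hash_password(password):
        salt = os.urandom(16)
        dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, dk

    def verify(password, salt, stored):
        dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(dk, stored)

    salt, stored = hash_password("correct horse battery staple")
    print(verify("correct horse battery staple", salt, stored))  # True

Raising ITERATIONS makes every guess proportionally more expensive for a brute-force attacker, which is the knob being described above.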


I think it's far more likely they'd find a way to break the hash function and not tell anyone about it.


I hope this is just a miscommunication about what is actually stored (salted hash instead of plaintext).


I'm sure that it will be encrypted.



