Google encrypts data amid backlash against NSA spying (washingtonpost.com)
190 points by esgoto on Sept 6, 2013 | 147 comments



A salient bit:

  [Eric] Grosse echoed comments from other Google officials, saying that the company
  resists government surveillance and has never weakened its encryption systems to
  make snooping easier — as some companies reportedly have, according to the Snowden
  documents detailed by the Times and the Guardian on Thursday.

  “This is just a point of personal honor,” Grosse said. “It will not happen here.”
Some folks are inclined to distrust Google, but there are people here who really, really care about security.


The revelations that the NSA is running a HUMINT program should make it very clear that you can't trust everyone at Google or any other major provider. Those risks are mitigable, but it's expensive and I doubt most places take sufficient steps to prevent it.

Even without that, trusting companies because their employees are honest is hard.

There are some people at the NSA who really, really care about privacy and about not spying on US citizens, and who believed we didn't do so. In fact, that describes most of the ones I've met. However, with sufficient compartmentalization, they don't know what they or others are truly doing. The same can be true of any company.

Are you working on Google's data liberation system, meant to keep users from being trapped in the system, or on the NSA's data exfiltration system for Google's data? It's not always clear.


Google (and other organizations) collecting and securing such vast troves of information -- and building the technology to analyze it quickly -- obviously makes them hugely valuable to attackers and defenders alike, since the data they are storing is the very information that attackers/defenders try to keep from each other.

Encrypting it and securing it very well at a technology level means that the human element (I'd argue) becomes the easiest way to get access to it - i.e. someone with sysadmin access, DB access, or just working on a project where the APIs and/or tools available can produce valuable information. This is true even if the 'player' (with system access) has to be 'recruited' by the attacking or defending team some time after taking up the job.

Couple this with the fact that even the security agencies themselves are prone to corruption, malfeasance, human error (no one is perfect), and insiders, and you could easily end up with a confusing mess. Bear in mind that everyone wants their agents to operate and be able to communicate back without detection, again regardless of which team.

Compartmentalization must also come into conflict with inter-agency sharing rules -- at some level, people need to know what is going on and make decisions -- and trust must be a big issue for many of these groups - they probably spend a ton of time watching themselves and others, and watching for information leaks / canaries / spread of misinformation.

I'm certain there'll be some fascinating stories eventually from all of this - it all continues to make me believe that concentration of power and information (which I think is continuing as a trend) only ends up creating dangerous situations, and that decentralization is ultimately the preferable way to go (in that it prevents a small number of people from having too much power/influence/control, and equally protects those same people from being targets themselves).


I'm not aware of much successful recruiting. Most moles turn on their own. The game for the intel guys is like baseball: a lot of waiting and then serious hustle to make sure a fresh mole gets trained, vetted, rendered effective without getting caught.


Depressingly makes it sound like an everyday thing which is just monitored for - makes sense I suppose given how many information sinks there are nowadays.


True, yet I imagine it shouldn't be difficult to single out the employees who present the greatest HUMINT risk and apply extra scrutiny. Any employee who holds any sort of top secret clearance, has worked for intelligence agencies or contractors, or served in the military but not out in the field, is potentially a mole.

I'd find it hard to believe that people who don't fit that profile, but are moles for government intelligence agencies, even exist.


People come to Google from all walks of life. For all you know, some 20-something long-haired unix hotshot could have been busted for drugs at some point and "repurposed" as a mole in exchange for leniency. And there's always the classic sex honeytrap for married men, which will never go out of fashion.

Real spooks don't carry a conscience, they'll exploit anything they can to get their grubby hands on the data they need.


    "For all you know, some 20-something long-haired unix 
    hotshot could have been busted for drugs at some point and  
    "repurposed" as a mole in exchange for leniency."
Excellent point. Previous comment retracted.


That doesn't mean this development is meaningless and should be dismissed; it mitigates real concerns. As for the human element, that is probably the hardest thing to defend against, but enacting sound engineering solutions is more than half the battle.


Since Google is able (and willing, when asked by the government) to decrypt everybody's email at will, and continues to build software that maintains their absolute power to do this, I really don't give a f@#k whether they promise to use 256 bit encryption, 512 bit encryption or 23439287239 bit encryption.


There's still a gigantic difference between the government being able to "vacuum up" everything (weak/no encryption) from everyone, versus the government having to ask for communications from specific users.


Yes.

If you are actually trying to hide something from a targeted government attack, you certainly don't want to use any hosted services like Google's.

If, however, you are merely trying to avoid the government passively sweeping up all of your data, searching through it, and maybe subjecting it to further scrutiny due to it containing the wrong keyword, it helps to know that it's encrypted in transit, and that in order to decrypt it, someone has to actually present a warrant to Google.

Of course, there's the additional problem of National Security Letters, since they aren't real warrants and they come with secrecy requirements attached.

These problems can be attacked on multiple fronts. We can improve cryptographic security, and work on more decentralized approaches to online services, and rein in the NSA's power at a legal level, and so on.


Yes, but (as others have already pointed out) the takeaway point from this press release shouldn't be "Google is doing great things to prevent spying" and should instead be "Google admits they have been sending sensitive customer data between data centers in plaintext."


When email passes between providers, 90% of the time it's plaintext.

How many here encrypt replication data between data centers?


For sure, but in the case of google this probably doesn't apply.

From what was published recently we know the NSA has proven methods for bypassing encryption, namely getting the keys used for encryption (so they can decrypt everything) or getting access to the content before encryption or after decryption.

To me this last move by Google is a PR attempt at regaining people's trust.


I'm so bored of hearing the accusations of PR stunts.

They crop up in every submission detailing an action taken by Google with regards to the Snowden/Prism/NSA revelations. Is it so ridiculous that a large corporation should seek to ameliorate its image in the eyes of users and shareholders?

PR has become such a dirty word.

Of course it would be best if all these actions were taken earlier, purely as the result of a strongly held principle. However, when presented with the realities of public businesses operating on a global scale - I am glad that such steps as those detailed above are taken: at whatever stage, and for whatever reason.

The tinfoil hat brigade needs to, as the old saying goes, "stop seeing reds under the beds" and occasionally ... just occasionally ... take the facts presented to them.

In times when misinformation and confusion are so wont to proliferate, attempting to discern true motive is almost ridiculous - condemnation on the basis of any such discernment doubly so.


When Google does something that makes it impossible for them to hand over certain types of data to the NSA, either by not collecting it, or making it so that only the user is able to decrypt it, wake me up. Until then, it's a PR stunt.


I am not disputing the fact that a major motivation for their actions is PR. I am suggesting that action as a result of PR pressure is still action - vastly preferable to meek acceptance of the status quo.

That being so - dismissing something as "just PR" misrepresents the actual benefits something like this may confer.


IMAP/POP3 has always been a gmail option, which allows local PGP use. Chrome sync allows you to set your own encryption passphrase (provided you trust the binary doing the encrypting...). You've been able to share encrypted files on google docs/drive since they added arbitrary file storage. Etc.

Chrome sync is probably the strongest example that I can think of fitting your criteria, since it's built into the product itself, but a lot of this just comes with the territory of web-based apps.


They haven't done anything there though... They've just provided a standard IMAP service, and a standard file syncing service...

When they provide an option in Gmail for people to upload their public PGP keys, start encrypting email on the way in, don't store any non-encrypted versions of those emails, and build PGP support into Chromium for accessing those emails, then they will have done something worth noticing.
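To make the "encrypt on the way in" part concrete, here's a minimal sketch assuming the python-gnupg package and a hypothetical per-user key store (nothing here reflects how Gmail actually works):

  import gnupg

  # Hypothetical key store: the user uploaded an ASCII-armored public key at signup.
  gpg = gnupg.GPG(gnupghome="/var/mail-keys")

  def store_incoming(user_pubkey_armored, message_text):
      fingerprint = gpg.import_keys(user_pubkey_armored).fingerprints[0]
      # Encrypt to the user's key before anything touches durable storage.
      encrypted = gpg.encrypt(message_text, fingerprint, always_trust=True)
      if not encrypted.ok:
          raise RuntimeError(encrypted.status)
      return str(encrypted)  # only this ciphertext gets persisted

Spam filtering would have to run before the encrypt step.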


How would spam filtering or searching work in such a service?


Spam filtering:

  Step 1. Spam filter
  Step 2. Encrypt
Searching:

A client-side tool which builds a local index as messages are decrypted to be read for the first time. The index is itself encrypted and incrementally synced between clients (rough sketch below).

That took me less than 5 seconds to think up. Google can spend time and money thinking up better solutions if they want to actually do something.
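As a rough sketch of that client-side index, assuming the cryptography package for the encryption (in practice the key would be derived from the user's passphrase rather than generated ad hoc):

  import json
  from collections import defaultdict
  from cryptography.fernet import Fernet

  key = Fernet.generate_key()   # stand-in; derive from a passphrase in practice
  cipher = Fernet(key)
  index = defaultdict(list)     # word -> message ids, held in memory while the client runs

  def index_message(msg_id, plaintext):
      # Called as each message is decrypted and read for the first time.
      for word in set(plaintext.lower().split()):
          index[word].append(msg_id)

  def save_index(path):
      # Only ciphertext ever touches disk (or the sync service).
      with open(path, "wb") as fh:
          fh.write(cipher.encrypt(json.dumps(index).encode()))

  def search(word):
      return index.get(word, [])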


If there's any sort of processing on incoming data, then there's going to be a lot of unencrypted copies floating around in various caches and intermediate staging systems. A secure system requires encrypting the data right off the wire, before it's stored anywhere.

Search indexes are very large -- you don't want to double or triple the amount of storage your email client uses. Also, being able to search only mail that you've downloaded and decrypted is a terrible user experience. I'd estimate over 60% of the mail to my personal inbox is from some automated system, rather than directly from a human, and I typically don't look at them unless a search hits them.

It takes 5 seconds to think of solutions with terrible security and usability characteristics. Thinking of a system that will be a measurable improvement in security and will actually be used by people is much more difficult.


These are all easily solvable issues. But to get back to the point of this thread: Google has done nothing to help secure people's email.

The fact that you can't identify any ways in which they could, or refuse to acknowledge them, or think they're too difficult for a multi-billion dollar company makes no difference to the point under discussion.


You can split up the USA into a few queries and then just filter for a specific email. Semantics.


I think it is fair to assume that the guys who set up and run Google, Apple, MS, Yahoo, Facebook, etc, all started with great, honorable intentions. Yes, even the hated Bill Gates. I choose to believe that is true. I believe these people were once us. Up till recently, it's been a cat-and-mouse game of how they can get money from customers and how customers can mitigate that. This to me is fine; it is business, and they all need financial structures to survive.

Of course what has happened now is that the jack boot of government has poisoned the well, and I can't imagine there is any group of people more upset and angry than these pioneers. I bet if we could talk to any of them off the record they would be as annoyed as "we" are, if not more. After all, it's their baby being ruined, not ours.

I would add corporate high finance as a poison too, but again, that's just money. It does soil the, er, purity of things, but does not threaten freedom and liberty.


Am I the only one who noticed that Apple didn't appear on the PRISM timeline list until after Steve Jobs died?


> has never weakened its encryption systems to make snooping easier

Up until today, Google didn't even encrypt the data. So it's kind of hard to weaken something you weren't even using.

And then to go on to frame it as a matter of "personal honor"... you've got to be kidding me.


> Up until today, Google didn't even encrypt the data.

That's not what the article says. The new encryption is specifically for backend datacenter-to-datacenter traffic over leased lines. But even before that project, there was lots of strong encryption being used all over Google: to encrypt user data on servers' hard drives, to encrypt data going between browsers and servers, encrypting tape backups before sending them to offsite storage facilities...


Your criticism is valid and I completely agree... but I stand by what I wrote with regard to their traffic over leased lines.

'We just sent data in the clear over leased lines so the NSA could read whatever they wanted. But the encryption we never used was never weakened.'

This is nonsense.

Not only that, but when the data is transmitted, that is exactly when Google has the least amount of control over it... ie: that's when encryption is the most important. Yet, they chose not to encrypt the data, and then give everyone a story about their 'personal honor' of keeping things secure. This is a joke.


Depends on your definition of "leased lines". A privately-operated layer 1 backbone over dedicated dark fiber has traditionally been considered to be pretty secure (up until recently, anyway).


I was always curious how HUMINT would look if you were inside the organization as a worker-bee.

In my experience at a certain large SW company in the pacific northwest, I do know that core crypto code, the actual workhorse functionality, is typically walled off from the general developer population. The rationale given is that there are foreign nationals on staff who are not permitted to look at that stuff. That makes sense given the export laws in place.

All the security-like code I saw above that layer was good, to my non-security-trained eyes: Honest use of crypto algorithms, responsible bug fixes and regular and nitpicky reviews of protocols, file formats, APIs, and the code itself. For several shipping products I had confidence that the code we checked in was the actual code that shipped.

For the lower layers (an ideal place to introduce weaknesses):

- The general developer population never sees them

- Even if the sources are utterly honest, the build process might hide the introduction of weaknesses (a variant on "Reflections on Trusting Trust"), or the build machines might ship different bits, or weaknesses might be patched-in later (even after customers get machines) by the OS update infrastructure.

This is the kind of thing I'd HUMINT if I had a mind to.


But do all of them? In my personal experience, such tasks will be given to employees who are likely to perform them.

For example, at a past sysadmin job, I was asked about the technical feasibility of monitoring a certain employee's computer use, whom management suspected of some minor infringement. I refused to assist in the matter on moral grounds and was reprimanded. The task was given to a colleague of mine who had no qualms about it. Next time, they went straight to him.

And the more complex, distributed and large a system is, the more people are in positions where they can compromise it. It takes only one person to break the whole system (which is basically what just happened to the NSA). Do you trust everyone who has or can gain access to your SSL private key? Everyone who manages your network?


Or so they say.

I'm not convinced that this is not Google's version of "trust us". Keep in mind there is no PR loss for Google to adopt a pro-encryption stance now. If they are really serious about this, they would a) stop trawling emails and b) help develop tech for seamlessly encrypting both in-flight and at-rest email.


Meanwhile, Google Argues for Right to Continue Scanning Gmail

"This company reads, on a daily basis, every email that's submitted, and when I say read, I mean looking at every word to determine meaning," said Texas attorney Sean Rommel, who is co-counsel suing Google.

http://abcnews.go.com/Technology/wireStory/google-argues-con...

http://www.mercurynews.com/business/ci_24021944/google-argue...


Why are you posting this shit on HN? Everyone here knows how contextual advertising in gmail works, and excepting those too young to have been aware back then, have known about it since 2004. If you have a point, make it, but scare quotes specifically made to induce an emotional reaction from those without technical knowledge really have no place here.



I'm not sure what your point is. If you don't want your email accessed, don't send it unencrypted from your client, and definitely don't send it via a service that has features (search, spam, Google Now, etc) and is paid for by a system (contextual advertising) that explicitly accesses the contents of your email.

Download Thunderbird and a PGP client[1]. Boom, done.

Use another email service. Boom, done.

I'm not objecting to the idea that you'd find it objectionable to have your email contents used for advertising. I'm objecting to a useless quote that tries to turn this into a soundbite-off instead of an actual discussion (little hope as this thread has).

[1] https://support.mozillamessaging.com/en-US/kb/digitally-sign...


Diluting the expectation of privacy about e-mail in general has implications under constitutional law. The 4th amendment only protects against unreasonable searches by the government. The use of mail connotes private communication, in contrast to a post (which is presumed public). Implied consent to forfeiting the right to stop third parties reading your e-mail is not something the public has an interest in establishing, regardless of the purpose of such third-party incursion. I don't want to get into a side-bar explanation or legal debate TBH, but it's not mindless fear mongering.[1]

[1] http://www.nytimes.com/2013/09/07/us/politics/legislation-se...

“This has been the stuff of wild-eyed accusations for years. A lot of people are heartbroken to find out it’s not just wild-eyed accusations.”


No, this is not correct. The particulars of your agreement with a third party for storage of your email do not extend government rights to examine that data (the ads in your inbox are as non-public as the email in there too). Even the horribly flawed ECPA recognizes that (it buttresses it, in fact). Moreover, Google[1] is currently standing behind the US v Warshak shield and requiring warrants for email contents.

The problems with the third-party doctrine are much more fundamental than the ways in which that third-party is storing and displaying your data, activities that continue for any webmail client even in the absence of ads when doing spam filtering, searching, etc. Merely the fact that a third-party is involved at all is enough for the outdated sections of the ECPA to rear their ugly heads. Here's hoping the Supreme Court takes up a case like US v Warshak soon.

[1] http://arstechnica.com/tech-policy/2013/01/google-stands-up-...


> No, this is not correct.

You're talking statutes, not the constitution. Obviously the constitution trumps both statutes and executive readings. "Reasonable" is per the constitution, and it is plastic in case law. That's why the questions are important, fundamentally. In any event, it's worth keeping in mind the right level of abstraction.


> You're talking statutes, not the constitution

I'm talking both. The ECPA was important in that Congress avoided decades of court cases by making explicit the protections afforded electronically stored media, though they did not extend those protections far enough (which today in practice weakens protections that may have been more clearly delineated by now had the ECPA not been enacted).

Constitutional protection superseding (among other things) the fairly arbitrary 180 day requirement for a warrant set by the ECPA was clearly recognized by the Sixth Circuit in the US v Warshak second (criminal) case, stating that "The government may not compel a commercial ISP to turn over the contents of a subscriber’s emails without first obtaining a warrant based on probable cause."[1]

In both US v Warshak cases, though, the Sixth Circuit emphasized the higher protection afforded content over transactional data just for being content by the tests established by both Katz v US and Smith v Maryland. They laid out that even the supremely terrible precedent of Smith v Maryland (which is the proud parent of allowing the government to seize "metadata" without a warrant) did not allow the government to "bootstrap" limited access to full access, including the access needed for automated processing of email contents by the email provider:

"The government also insists that ISPs regularly screen users’ e-mails for viruses, spam, and child pornography. Even assuming that this is true, however, such a process does not waive an expectation of privacy in the content of e-mails sent through the ISP, for the same reasons that the terms of service are insufficient to waive privacy expectations. The government states that ISPs “are developing technology that will enable them to scan user images” for child pornography and viruses. The government’s statement that this process involves “technology,” rather than manual, human review, suggests that it involves a computer searching for particular terms, types of images, or similar indicia of wrongdoing that would not disclose the content of the e-mail to any person at the ISP or elsewhere, aside from the recipient. But the reasonable expectation of privacy of an e-mail user goes to the content of the e-mail message. The fact that a computer scans millions of e-mails for signs of pornography or a virus does not invade an individual’s content-based privacy interest in the e-mails and has little bearing on his expectation of privacy in the content. In fact, these screening processes are analogous to the post office screening packages for evidence of drugs or explosives, which does not expose the content of written documents enclosed in the packages. The fact that such screening occurs as a general matter does not diminish the well-established reasonable expectation of privacy that users of the mail maintain in the packages they send."[2]

I have not personally seen a good argument for differentiating between spam filtering and contextual advertising in terms of access. Regardless, this is a clear argument for automated access being immaterial to the question of an expectation of privacy of the contents of an email.

[1] http://www.ca6.uscourts.gov/opinions.pdf/10a0377p-06.pdf

[2] http://www.ca6.uscourts.gov/opinions.pdf/07a0225p-06.pdf


> I have not personally seen a good argument for differentiating between spam filtering and contextual advertising in terms of access.

Are you seriously proposing free e-mail and/or a spam filter is a good trade for one of the major pillars of the Bill of Rights? So goes my spam filter, so goes the constitution? What's ironic is that the spam guys use the 1st Amendment to justify the spam (same as junk mail and the credit rating agencies).


> Are you seriously proposing free e-mail and/or a spam filter is a good trade for one of the major pillars of the Bill of Rights?

What? Where on earth are you getting that from what I'm writing?

I'm saying that the Sixth Circuit has ruled that just because you use an email provider that scans your email contents for things like spam (or ads), you have not given up your 4th amendment right for that content to be secure against searches without a warrant.

What you quote is me arguing that your premise that contextual advertising is somehow distinct compared to scanning for spam both in function and legal implication is flawed. The next statement states that even if such a distinction could be made, the above quote from US v Warshak I is a perfect explanation of why agreeing to automated scanning of your email does not imply consent to an abrogation of your rights.

I really don't see how I can be clearer than "The government also insists that ISPs regularly screen users’ e-mails for viruses, spam, and child pornography. Even assuming that this is true, however, such a process does not waive an expectation of privacy in the content of e-mails sent through the ISP...."


> What you quote is me arguing that your premise that contextual advertising is somehow distinct compared to scanning for spam both in function and legal implication is flawed.

It's flawed because that premise is at once irrelevant and falsely asserted. Neither a spam filter nor contextual advertising is inherent to private communication.

What is relevant to private communication is that it is private. If I CC Larry Page on a "private and confidential" e-mail to my lawyer, Mr. Page is a party to the conversation. It is no longer "private" nor "confidential". If every e-mail sent to a gmail account is by default cc'd to Mr. Page, none of those communications are "confidential". By (statute) law, the senders are forfeiting attorney-client privilege... by "opening" the communication to a third party. It's Google's stated position that a person sending an e-mail to a gmail account has no "reasonable expectation of privacy". And this is what that means. This means that Google (wants to) treat your mail like Mr. Page is reading it, and it believes that users are in fact waiving their expectation of privacy by using or communicating with gmail recipients. That presumably includes senders of mail who have not agreed to gmail's T&Cs (ie, who presumably do not have reason to know what they entail).

It is the insertion of an active third party into the communication which is a problem. It's a problem because it damages the inherent idea of 'mail' as a sender-recipient private relation (post office =/= an active recipient). And from here, the problems start.

In any event, I think you are missing the legal abstraction at the core of the analysis. It's not a problem you can wish away, nor is it one you can trust current statutes or case law to protect against into the future. That is the nature of 'reasonable' modifiers; they are ultimately contextual. And here, we have self-interested parties strategically eroding the context of the 4th amendment, to the detriment of the public at large.


You are off in the weeds.

> What is relevant to private communication is that it is private

This is not the basis for 4th amendment protections. You are also confusing things: attorney-client privilege comes to us from Common Law, not the 4th amendment and is not a good basis for discussing what is private, as there are many more restrictions on it (a warrant can almost never compel your attorney to testify against you, for instance, which is not the case for almost all normal communications).

The mere existence of a third party does not negate the reasonable expectation of privacy, otherwise no third party communication system would be safe from warrantless searches. What has long mattered is the reasonable expectation of privacy, which under current case law does not always but in many situations does override any details like the extent that a third party is involved in that communication (for instance, cc-ing Larry Page on an email does not make a message suddenly have no expectation of privacy any more than sending it to anyone else, as the limited list of recipients makes it on its face not for publication or public posting).

> It is the insertion of an active third party into the communication which is a problem. It's a problem because it damages the inherent idea of 'mail' as a sender-recipient private relation (post office =/= an active recipient). And from here, the problems start.

Again, this is wrong. The fact that there are people at the post office, people that could open your mail, people that do actively examine your mail for things like drugs or bombs does not negate your 4th amendment protections. Are you reading anything I'm writing? That's directly addressed in the quote three posts above this one.

> In any event, I think you are missing the legal abstraction at the core of the analysis.

This is just silly. What you are suggesting is that the third party doctrine has overruled all, and that merely using an email provider that scans for spam or looks for abuse has left you open for warrantless searches (which is almost all of them except ones you run yourself, since open mail relays are virtually extinct). Not only have you provided no evidence for this belief, the ECPA says you are wrong for emails newer than 180 days, and it looks increasingly unlikely that the courts will agree with you for emails that are older.

You appear to be confusing Google saying that people sending email to users of gmail expect their emails to be handled by the machines that run gmail (or they should, because that's the only way it can physically work) with an argument about the 4th amendment. Breathe easy. That is not the case. Whether or not Google is breaching the plaintiff's expectation of privacy (and it would, again, be bad news for every email provider out there if they are found to), scanning your email is not publicly posting your email, and this tort case has no bearing on your 4th amendment protections from searches by the government. This was established in Katz v US 46 years ago, and remains true today.



Ditto for Google Drive: encrypt it with something like Syncdocs[1] to keep files private.

Encryption keys are like car keys - you need to own them, not Google.

[1] http://syncdocs.com


If nothing else, it reminds people that it's going on. Which is a good thing. Myself, with adblockXYZ installed, often forget or don't notice. And when these do come up, I'm grateful because it reminds me to stop being a lazy bastard and set up an email server (as I've done many times for others) for myself. I appreciate that.


I guess. There are plenty of other people in this thread who have brought up the fact that if your email service can read your email you are inherently vulnerable without quoting the plaintiff's attorney in a suit about the practice talking to ABC news. Talk about your lowest common denominator.

I didn't come here to be propagandized at, so, yes, I will object to the dumbing down of debate to appeals to emotion (especially on a subject that's obviously already so emotionally charged).


magicalist: It was a widely reported national headline because it came from AP. It was not a story by some perma-tanned fake news-anchor. Alternative sources, same headline. This is also very much a PR war, on both sides (Google, NSA, etc), in case you haven't also picked that up. Everyone is in the business of manufacturing headlines.

http://www.nola.com/business/index.ssf/2013/09/google_argues...

http://news.yahoo.com/google-argues-continue-scanning-gmail-...

http://www.nbcnews.com/business/google-argues-right-continue...

http://www.mercurynews.com/business/ci_24021944/google-argue...


Ok. Maybe you came to the wrong place then. Or maybe your perspective is off... Who could know such things?


Um, you do realize that pretty much every single mail service provider is "scanning" their customers' email to screen out spam, right? That's also an algorithmic analysis of the body of the e-mail, and it's providing a service that most customers appreciate (since they generally don't want to be swamped by hundreds of Viagra and "make money fast" emails every day).


Spam scanning can be thought of as processing a continuous stream without a given email being assigned to a user, whereas contextual scanning for the purpose of advertising necessarily ties emails to you.

Maybe it's just a semantic difference but I would argue that that is a sufficiently big differentiator.


This quote makes it sound like Google employees (i.e., humans) are reading your email. It's an algorithm that is processing your email, as most (all?) of us are aware. It's not like there's some employee sitting there reading your stuff and then determining you're in the market for a new camera.


By your definition of 'reading', your provider's router is reading your email as well. Why don't you sue it?


"Determine meaning?" I'm pretty sure there's no computer software in existence at this point that can read human text and "determine meaning"


They have the infrastructure for a fully paid version set up now, with the ability to buy more space for your Google account. Maybe add a client-side encryption option along with no ads. It would make things like server-side search much more difficult, though.


Well, it reflects well on Google that they say this, but in the end, the only "truly secure" host is one that cannot decrypt your data even when compelled to do so.

Bottom line is, if a capability exists, it can be exercised, willingly or not, coerced or through oversight. Anyone willing to put data in the cloud should be conscious of this no matter what the provider says.


It's not really Google's call. If they're issued an NSL the opinions of any employees are irrelevant. Either they comply or someone goes to jail.

I, and many others, would appreciate it if they fought such things. And if they would fight the good fight on the policy fronts. But ultimately Google does not make or interpret the law.


I am sure there are people there who do care about security and this is a victory, but it is a small one and not one that really saves their reputation.

The problem is that the NSA presumably gets access to the information before it is encrypted, so this does not limit what the NSA can get from Google. What it does do though is possibly cloud the traffic to some extent regarding cracking stuff, but then the NSA could probably just disregard the traffic between data centers.

The real victory is that other companies are more likely to follow Google and this may have an impact.


Completely off topic, but trying to scroll the way HN handles 'quoted' / 'code' bits is terribly painful. Has anyone solved this? I'm using 'Hacker News Enhancement Suite' but this problem still makes me unhappy and constantly causes me to rewrite such blocks in my own posts.


So I guess, maybe, Eric doesn't have a high enough security clearance to know what is really being done then?


> “This is just a point of personal honor,” Grosse said. “It will not happen here.”

> Some folks are inclined to distrust Google, but there are people here who really, really care about security.

You're bringing us to tears, Mr. Googler. I remember the "personal honor" or whatever about being adamant about providing the best results (now full of Google crap and advertiser sites), about not mixing ads with content (need I show examples?), and now on many pages everything is ads, trying to trick users into clicking them. Oh, and all that crap about doing the right thing, "not evil" or whatever.

Google as a corporation is just as scummy as other corporations, if not more so, and will do anything for a dollar. So I trust them. Not. Sure, there are decent people there, just as there are at Oracle or IBM, but most will go with the flow and even defend the new policy.


Personal honor is what Edward Snowden showed. Getting caught for selling out your users and then endeavoring to do something about it is not honorable. It's manipulative and pathetic.


My main reaction to this was, ummm, wait - google isn't already encrypting its data internally?!

-- off topic rant --

Such a weird discontinuity in all this ... Google was prosecuted and paid a fine, despite self-disclosing, falling on its own sword and issuing an abject apology, for accidentally sniffing some unencrypted data as they drove past. This was condemned at every level by government.

Now the government is openly sniffing and capturing everything, including our encrypted traffic and deliberately trying to crack the encryption, ... and they don't think it is the slightest bit unreasonable?

How can there be moral outrage about Google's offense and not about what the government is doing that is ten times worse?


Because most of the people outraged that Google supplied "-s 0" instead of "-s 64" when running tcpdump (the -s flag sets the capture snapshot length: 64 bytes grabs little more than packet headers, while 0 captures entire packets, payload included) weren't quite bright or were not thinking it through? I've yet to hear of any intelligent reason to be upset about the WiFi collection thing.

And more precisely, it's the NSA, whose job is to break encryption. There was outrage when Carnivore was made public (late 90s?), then that AT&T room the NSA tapped that was leaked in 2006. By now, it's just taken for granted (by technical people anyway) that unencrypted communications are going to be recorded. You don't even need a state-level adversary to achieve this on a limited scale.


Well, to be fair, it's not really the same people at the NSA looking at captured data who were doing the condemning. OTOH, you have senior administration members saying that they don't spy on Americans, when of course they do, or senior German administration officials outraged at the NSA's behavior, when of course they were participating all along.

I really don't know how much of it is self delusion, how much of it is just perfectly logical mental gymnastics from their perspective, and how much is just the "this is what we have to do, even if it doesn't match what's in the law" perspective on display in the nytimes piece.


Just to clarify the discussion here, since the NSA is involved in snooping on internet users along many different dimensions: I think what is being discussed here is encrypting internal Google data being transmitted from datacenter to datacenter via private fiber optic cables. Recent revelations seem to indicate the NSA has set up fiber taps on various companies' networks. This encryption would frustrate those tapping efforts.

Legal requests to Google for user data are not affected by this change. Neither is private data at rest, which is still presumably encrypted. Neither are other extralegal avenues the NSA has to infiltrate Google (employee co-operation or intimidation, exploiting zero days to get into corporate networks, hijacking security protocol construction, etc).


>Encrypting information flowing among data centers will not make it impossible for intelligence agencies to snoop on individual users of Google services, nor will it have any effect on legal requirements that the company comply with court orders or valid national security requests for data.

How does this do anything about pervasive NSA spying? The NSA has broken SSL and VPNs by corrupting the CAs and the VPN vendors.

What would really help is for Google to create a zero-knowledge tier of service and to charge users for using it to replace their ad revenue.


> How does this do anything about pervasive NSA spying?

Google clearly suspects the NSA is installing devices on the leased lines they use for inter-datacenter communications.

Properly implemented, this will stop that.

> The NSA has broken SSL and VPNs by corrupting the CAs and the VPN vendors.

I suspect Google won't use a commercial VPN implementation. Corrupted CAs can be bypassed by using self-signed certificates, which will be fine for communication within the same company.
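For illustration, a Python sketch of what pinning a self-signed internal certificate can look like on the client side (the host name and certificate file are made up, and Google presumably handles this at a lower layer):

  import socket
  import ssl

  # Trust nothing but our own self-signed certificate; the public CA bundle
  # (and therefore any corrupted public CA) never enters the picture.
  ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
  ctx.load_verify_locations(cafile="dc-link-selfsigned.pem")  # hypothetical pinned cert

  with socket.create_connection(("dc2.internal.example", 8443)) as raw:
      with ctx.wrap_socket(raw, server_hostname="dc2.internal.example") as tls:
          print(tls.version(), tls.cipher())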


Google doesn't have to suspect anymore. The first documents Snowden published show that they are doing this.

The PRISM program is for collecting intelligence from and with companies that have joined the program (including Google). The Upstream program is for collecting data directly from fiber. Analysts are free to use both.


I would assume Google is big enough and smart enough to competently vet and deploy trusted encryption techniques.


Against the NSA? The same NSA we ask to intercept Russian, Chinese, Japanese, German, and Iranian encrypted Internet traffic? (To name a few. I'm sure the French are in there too.) Russia and China, I am sure, have enough smart people working on their cyber defenses that the NSA would have to be very, very, very good to penetrate those defenses. And if indeed they are, then do you think Google stands a chance?


Do we think that Google (which boasts among current and former employees: Joshua Bloch, Guido van Rossum, Sebastian Thrun, Luis von Ahn, and that's just off the top of my head) might have some superstar cryptologists in there with equal or greater talent than people working for the Russian government? Why would that be so unbelievable?


I'd add folks like Adam Langley, Damien Miller, and Michal Zalewski to that list.


Don't forget Vint Cerf.


You think Google is just running all their datacenter traffic through a Sonicwall or something? Think about it.


Read up on the Crypto AG exploit. How many foreign governments still use Crypto AG gear for diplomatic communications?

Yes, they should be a lot more security conscious. But you might be surprised how many still trust commercial vendors.


How is ssl broken when many different ciphers can be used?


Bruce Schneier recently suggested that encryption-the-math wasn't broken so much as encryption-the-implementation. The math is pure, abstract, and pristine, but the implementation is not. Hacks, lies, and backdoors. He strongly hinted not to trust anything you can't see the source for.


Do you suppose that a government team "responsible for identifying, recruiting and running covert agents in the global telecommunications industry" might be able to steal a private key from one of Google's many data centers? Without forward secrecy, the theft need not even go undetected for it to be useful for decrypting all the data that has already been captured and stored.

Or perhaps the Bullrun project had something to do with Bull Mountain, Intel's random number instruction (RDRAND), which was used by the Linux kernel for a while as a primary source of entropy (causing Matt Mackall to resign as maintainer of /dev/random, later reverted by Ted Ts'o). If RDRAND is indeed compromised, then keys generated on a machine that trusted RDRAND would have very low effective entropy for anyone knowing the secret. How confident are you that proprietary systems do not trust RDRAND or have other backdoors that could compromise their available entropy? (That could be an interesting reverse-engineering project.)

Whether or not there is any truth to either of these scenarios, I think they can no longer be considered conspiracy theory paranoia, and indeed have entered the realm of downright plausible.

https://www.eff.org/deeplinks/2013/08/one-key-rule-them-all-... https://news.ycombinator.com/item?id=6336505 http://thread.gmane.org/gmane.linux.kernel/1173350/focus=117...


Are they suggesting the NSA is tapping inter-data center communications? I hadn't seen that suggested before.

That's interesting. I hadn't considered that could be how Prism works, but it would make sense if these companies weren't encrypting those connections previously. Somehow I assumed they were.


Most companies have historically considered dark fiber (where nobody else's network gear is involved) to be secure enough. Passively decoding dumps of hundreds of gigabits or terabits spread over many colors of light (DWDM) into useful data was generally thought of as prohibitively expensive and therefore not a viable threat.

The routers that can handle those speeds don't encrypt the link itself, so the most common solution is to do per-connection encryption between hosts with SSL or SSH or similar. Do you run SSL when talking to all of your internal APIs, databases, etc?

What about between nodes in EC2, particularly between availability zones? Those are potentially subject to the same sort of sniffing without Amazon's involvement.


Amazon does have certification by said agency.


Google has datacenters all over the world, including in hyper-intrusive surveillance states like India. The NSA is not the only reason to encrypt long-haul private traffic.


Some datacenters consider things like MPLS labels a secure boundary. That isn't an issue at Google scale, but Google almost certainly uses public fiber between many connection points.


Google has full unencrypted access to all private data from their users (because collating that data is the foundation of their core business) and the NSA has the power to lean on Google to provide them full access.

Not to mention that at the very heart of the NSA spying story is the allegation that Google et al. provide access to said data willingly. And the only denial from both parties has been a mixture of partial admission ("but we're using proper procedure") and carefully crafted lawyer-speak (the infamous "no direct access" boilerplate denials).

This is just internal security enhancements being abused as a PR exercise. Google is trying to use the latest revelations about the NSA to deflect attention away from its own complicity.


Google has been cooperating with the NSA; I distrust Google, and this looks more like damage control to me.


What US company doesn't cooperate with the NSA? Google responds to lawful requests by the US government as appropriate. It's up to US citizens to change the law if they don't like it.


Wait, how are we supposed to change laws we don't know about?


We know about the laws that allowed this.


If Google really wants to help, then it should stop tracking its users, stop spying on me, and stop trying to force me to sign up for G+. They cannot give data to the NSA if they stop collecting it, as simple as that. But all the NSA has to do is buy the information from Google; after all, they sell it.


If you don't want to be tracked and you don't want to use Google+, stop using Google products.

Google does not sell data to anybody. They sell advertising slots.


You're still tracked if you don't use Google products.


It's not that easy to escape; they keep tracking you with cookies, with Google Analytics, and with whatever else they can use. They have it all figured out.


Employees at Google are not tracking you.

Programmatic algorithms on some Google properties are processing your data to show you the most relevant content (search, G+ posts, mail, news etc) and advertisements. There is a big difference between this "tracking" and the kind of snooping that spy organizations do.

By the way, I'd be surprised if all spy organizations, and not just the NSA, aren't trying their best to get more information on certain people from wherever they can.


Their business - like that of most tech companies - is built on data (so is every business to some degree); they can't forgo data, that is stupid. If the NSA is your main concern, then it's government policy you want to be changed, not the engineering decisions of private companies. Also worth mentioning that unless you're speaking of telecom companies, there is no profit motive for cooperating with government, mainly legal obligations.


Their business is data, but they have gone beyond that. They don't just want to know what I'd like to buy; they want to know where I live, my phone number, the places I visit. They want to know where I am every single second of the day. Screw that.


They're not forcing you. You can give them that, or you might consider the benefits not worth it and not give it to them; it's not an NSA-type issue.


They are not forcing me, yet they are still tracking me, with dirty tricks like this one:

http://www.huffingtonpost.com/bob-bowdon/why-has-google-been...

Yeah, they are clever when it's about getting your info.


I can't believe traffic between data centers wasn't already encrypted.


Eh. If you own the whole fiber from place to place, you might be lulled into thinking the data never leaves your premises.


Yeah, there's always a dividing line.

Between two servers in the same rack? Between two racks in the same datacenter? Between two datacenters in the same physical complex? Between two complexes connected by fiber you installed yourself?

If the security state keeps on keeping on, I expect companies which care about privacy to keep tightening it in. One day not long from now it might be considered ludicrous to transfer data from one server to another server within the same datacenter unencrypted. One day not long after that we may perfect secure multi-party computation, and a server might perform meaningful computation upon an encrypted dataset without any ability to decrypt it.

The goalposts are moving.


If you own the entire datacenter (like I'm sure Google does in most scenarios) and you're having racks compromised, then you probably have much larger issues that crypto won't solve.


Datacenters aren't poofed into existence. The networking hardware could be compromised at the factory, which would compromise the datacenter's network security without compromising its physical security or any of the servers.


By that logic, the networking hardware on the NIC could be compromised as well, giving an attacker DMA capabilities on a server, too.


It's also computationally nontrivial to encrypt tens of gigabits in real-time. Quite do-able, but nontrivial enough to make it the sort of nice-to-have you'd back-burner if you were confident that you controlled the line.


Ah, but the difficulty of convincing folks to run SSL inside the corporate firewall leads me to believe that Google may have treated the fiber between datacenters as not actually leaving the property.

(Yes, it is a tough sell to get folks to run SSL inside.)


Fixing this problem may not stop suspicionless spying. But it will certainly make it more expensive. The public revelation that the data wasn't encrypted is surprising, though I had previously speculated on it. See https://news.ycombinator.com/item?id=6264415


Meh. My most important source of data in Google's control is my email. They aren't doing much to help me protect myself there. My only wish is that they provide a stable hook for tools like FireGPG [1] to encrypt the email's plaintext.

Their constant tweaking of the textbox led FireGPG's developers to throw in the towel.

I understand that Google wants to read your emails to power their ads. I doubt the fraction of power-users that would enable FireGPG would put a fraction of a dent in their systems.

[1] http://getfiregpg.org/s/home


Instead of trying to reverse-engineer a proprietary compressed Javascript codebase that changes daily, you should use Thunderbird and Enigmail.

http://www.mozilla.org/en-US/thunderbird/

https://www.enigmail.net/


Or just use KMail (which integrates with GnuPG by default) on your shiny open-source Linux or BSD that you're surely using already. :)


drivebyacct2, you are hell-banned, and have been for quite a while. Several of your comments have been insightful when viewed with showdead on, and I feel you should be aware.


What is the practical option for mobile devices? Everything I've found on iOS & Android is absolute awkward shit.


There are limits to what Google can do to protect your email. If you email someone outside of the gmail service, your message will leave their data centers and travel over the public internet, possibly unencrypted at various points (unless you GPG it yourself) depending on what the various SMTP relays along the way do.
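You can check what a given hop offers yourself; a quick sketch using Python's smtplib (the MX host below is hypothetical):

  import smtplib

  # Hypothetical MX host for the recipient's domain.
  with smtplib.SMTP("mx.example.com", 25, timeout=10) as server:
      server.ehlo()
      if server.has_extn("starttls"):
          server.starttls()  # this hop can be encrypted
          server.ehlo()
          print("STARTTLS negotiated for this hop")
      else:
          print("this hop would carry the message in plaintext")

Even then, STARTTLS only protects that single hop, and only if the sending relay bothers to use it.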


This is not just about email. It's about your entire browsing history, location, texts, and phone calls.


I think it's too late. Google has shown that it can't be trusted, especially about privacy.


What do you mean? Care to share more details?

AFAICT, Google has been completely transparent about giving users control over how their data is shared. It's been ahead of the pack in protecting its users' rights, even going to court to protect users.

Disclaimer:I am an Engineer@Google.


Can you provide some insight into why the connections between Google's data centers were NOT encrypted until now?


Unfortunately, I'm not sure I'm the right person to share more insight. I don't work on the network team, but data between data centers flows on our own network. Data between a client's machine (machines on external networks) and machines on our networks has been encrypted for a while. Data at rest on servers has been encrypted.

Before these revelations, the tech community in general didn't expect that we needed to encrypt all traffic flowing on our home/office LANs. Like the rest of the world, we have learned from these spying revelations that we need to be much more paranoid than we were earlier, and we are now encrypting data on our own networks.

As a user of a lot of web services that are deployed on the cloud, I'd actually beseech my fellow tech community to do this too. All and any user data passed between any two servers (even on a backend, internal, local network) needs to be encrypted.


Thank you for providing your insights; it is important to know that the data on the disks is encrypted. I know about the encryption (https) between browsers and Google's services - Google was actually one of the first to switch its services to https.

But I have to say that I am still quite surprised that there was no encryption between data centres. Working from time to time for industrial customers on business-critical software, most of the time it is required to encrypt data between servers, even when the hardware is in the same building, because they are afraid of leaks/attacks from inside.

I think Google owes the public some explanation about their security, though I do not know whether it is already too late for some Google users.


No company that makes its money gathering and monetizing your personal information should be trusted when it comes to privacy.


That's ridiculous.

As everybody knows: It has been revealed that Google is one of the NSA partner companies (which should have been obvious to begin with, given the fact that Google is probably the biggest data hoover ever built).

This fact terminates even the last tiny little bit of "trust" we could have had in Google.

And that's really all there is to say.


This language is pretty problematic, especially in the context of these third-party hosted services. If Google has the keys and the encrypted data, what do we really know about the security properties?


But if they can encrypt the data so the NSA can't read it--that is, if the NSA can't force them to reveal the data--then why were they revealing it in the first place?


It seems like they're either encrypting communication between each server or between each data center: the article is very vague. Either way, your data is still unencrypted inside a server (which is obvious: you can't search through your mail without knowing the plaintext.)


I assume you mean encrypting data at rest, since this development takes care of all in-transit encryption. Well, that part is constrained by ads, analytics, and other predictive, data-intensive services like "Google Now", etc.


You can still encrypt data at rest (on spinning or flash memory) and use it for jobs like these. It's just decrypted while in RAM for use.
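A minimal sketch of that pattern, assuming the cryptography package (the key source and the processing job are placeholders):

  from cryptography.fernet import Fernet

  # Placeholder: in practice the key comes from a key-management service,
  # not from the same disk as the data.
  storage_key = Fernet.generate_key()
  cipher = Fernet(storage_key)

  def store(path, plaintext):
      with open(path, "wb") as fh:
          fh.write(cipher.encrypt(plaintext))    # only ciphertext hits the disk

  def process(path):
      with open(path, "rb") as fh:
          plaintext = cipher.decrypt(fh.read())  # plaintext exists only in RAM
      return len(plaintext)                      # stand-in for indexing, spam scoring, etc.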


I think it's still rational to distrust Google. Until there is more proof I will act as if I am watched at all times when using Google services.


I upvoted this because I want it to kickstart a movement among companies, so everyone increases their security, end to end.

But at least on my part, this doesn't begin to "impress me". So far they're only talking about encrypting data between servers and they've also recently talked about encrypting Drive storage data (why wasn't it encrypted in the first place?!)

They need to implement OTR or some form of end-to-end encryption with PFS for Hangouts, and it would be nice if they at least gave the option to have encrypted video and voice calls with ZRTP in Hangouts. The button should be right there and obvious for everyone who wants to use it. But I'm saying it's optional only because I'm not sure how it could impact what they're trying to do with Hangouts, and whether ZRTP works with multiple people at once. But if they can do that, then it should be on by default for everyone.

I'm also not sure exactly what kind of forward secrecy they are using for Google search - is it really a new key being generated per session - or is it like a few weeks? Because I think I read something about "a few weeks".

When we're talking about the government, I think all SSL/TLS encryption is almost useless without PFS, so everyone should use it. A single order from them and they could get your key for everything. That's just completely unacceptable! So every service should be using PFS.

If I were them I'd also seriously evaluate whether RSA 2048 bits is enough, and if there's any doubt that it is, then they should move to more bits, or if the whole RSA algorithm is in danger, then they should be looking for alternatives quickly.

When Google and others start doing that, then I will begin to have some trust in them again. All of these press releases so far, and the lawsuit to fight to only disclose (not stop) the mass requests aren't fooling me, and I hope they aren't fooling many others either.

Until then I'll be on the lookout for any new great service that promises that type of security, and I'll switch to them as soon as they're available, and recommend others to do it, too, both offline and online.

I hope Google and Microsoft and others aren't thinking that because I haven't "ragequit" their services yet, it means the whole NSA thing doesn't bother me. It just means I'm anxiously waiting for the alternatives to appear - which will appear. There is a crypto war (again), and I do believe the security community will win again, so it's only a matter of time.


  openssl s_client -showcerts -connect www.google.com:443 </dev/null

The output includes:

  Server public key is 2048 bit
  Protocol  : TLSv1.2
  Cipher    : ECDHE-RSA-AES128-GCM-SHA256
  TLS session ticket lifetime hint: 100800 (seconds)

i.e. not RC4 (as long as your client supports non-RC4 ciphers) and ECDHE for PFS. Session keys are discarded by the client every 1d4h, so presumably the server rotates them every 24 hours or so (the extra 4 hours to allow for clock skew, I assume, or for the fact that people might be slightly late on something they check every 24 hours, e.g. when they wake up each morning).
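A rough Python equivalent, if you only care about the negotiated cipher (forward secrecy shows up as an ECDHE or DHE key exchange):

  import socket
  import ssl

  ctx = ssl.create_default_context()
  with socket.create_connection(("www.google.com", 443)) as raw:
      with ctx.wrap_socket(raw, server_hostname="www.google.com") as tls:
          name, proto, bits = tls.cipher()
          print(proto, name, bits)
          print("forward secrecy:", name.startswith(("ECDHE", "DHE")))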

Nobody is going to make the change from 1024 bit keys to something else without first verifying that the new bit length is "secure enough" for a reasonable enough time (if nothing else, you don't want to have to go through the expense of the process of getting everything upgraded more often than you have to). Although you're right, it would be nice if they published their reasoning.

I don't know how to verify the security of hangouts. Looking at the webrtc standard, it doesn't appear to support encryption. There is also a lot of opposition to standardising encryption for webRTC because of "DRM" concerns. So I guess it's probably not encrypted, but don't quote me on that.

Disclaimer: I'm a Google employee.


Can someone enlighten me as to what is going on with the NSA these last few days (besides Snowden, of course)? I hear about new leaks claiming that they can break mainstream internet encryption methods. Is this true, and if so, to what level? What about open-source encryption?


Do you trust Google to be able to secure your data more than you trust the NSA to be able to secure their own data from a single twentysomething?

If so, what you are saying is equivalent to Google being more secure than the NSA.


Google still "only" uses RC4 128bit to talk to virtually every browser.

Makes me wonder. Is RC4 strong enough? Is it their professional conclusion? Or something else?


The problem with Google is simple: the government demands data, and Google provides it because it must. Encryption means the government at least has to demand.


That is great and all, but if the government wants some data from google.. they will get it one way or another.


Regarding people making comments about not seeing why google wasn't already doing this, it's common for datacenters to do DB replication over unencrypted channels, which is what was going on here.


I already moved everything away from Google. There's no way I'm ever going back. Trust is gone.


Trust to do what, and why would you trust anyone else more?


Who are you trusting now?


I'm using Riseup.net and an offshore email account. I won't recommend the offshore service by name until they upgrade their servers (it's been slow lately).


It's theoretically impossible for the NSA to decrypt the data. In practice, however, it seems they can. So what's the point of encrypting then?

Is Google thinking they are smarter than the NSA at cryptography?


I know what you're trying to say, but your wording is off. Something cannot be theoretically impossible but practically possible.


Don't worry, your wording is also off: something can be "theoretically impossible" yet practically possible; it just means the theory was wrong. For every theory disproved, there was a contrarian thing that turned out to be possible :)


I suppose it depends on the context. The only way this could happen is if you're talking about a bad theory. Whereas something theoretically possible can be practically impossible, even for a good theory, if real life situations can't match the conditions of the theory.

In this case the NSA didn't even do anything theoretically impossible. They did a workaround. They added backdoors, which violates the conditions of the theory. It's like saying I got through your unbreakable door by coming in through the window.


Well, it's all about the implementation. In theory, the implementation is flawless. In practice, it never is. It's sort of like saying there is software with no bugs: It's exceedingly rare.



