A lie gets halfway around the world before the truth gets its pants on. This story made it a third of the way up the front page; stories based on the most extreme possible interpretation of what seems to be Greenwald's own extreme interpretation of a slide deck covered the entire front page for days. Remember when "direct access" meant Palantir was a third party doing the actual collection? Just look at their name! Remember when all of Facebook and Google and Yahoo's denials looked "suspiciously similar"?
Someone on Twitter (sorry) said that Google and Facebook "looked like angels" compared to Verizon. That sounds about right to me, too. But what incentive do they have to do that when their reward is conspiracy-theoretic nonsense about how NSA has their TLS keys and 3rd party contractors are used to keep them from "lying" when they say NSA has no direct access?
>A lie gets halfway around the world before the truth gets its pants on. This story made it a third of the way up the front page; stories based on the most extreme possible interpretation of what seems to be Greenwald's own extreme interpretation of a slide deck covered the entire front page for days.
There's definitely some extreme speculation going on--both from those trying to maximize this issue and from those trying to minimize it. It will take some time to get to the truth. This article is a perfect example of an extreme attempt to minimize this issue.
This article seeks to minimize the Guardian's original story by saying the Guardian is "walking back" their claims. That doesn't seem to be the case. This article cites a paragraph near the end of a minor story published days later as the passage where they "walk back" their original story. The intent of the paragraph seems to be to illustrate that the "direct access" claim from the NSA and the "no direct access" claims from tech companies can both be true. The original article doesn't seem to be changed.
Beyond that, the Guardian never claimed the NSA had "direct access". They claimed that the NSA slides stated the NSA had direct access. The Guardian has not stated they read too much into "direct access" in the slides, and the original article is pretty clear that "direct access" is simply the NSA claim in the slides, not the Guardian's verdict.
There is another remaining issue: the original article claims access to "live communications", which has yet to be supported by a slide, but it would pretty much rule out the SFTP-only possibility that some people seem to be accepting as fact at this point. Maybe there is direct access to live information from Skype and Apple, but Google insisted on SFTP? We still have a lot to find out.
It could be that the Guardian did exaggerate. But it is far too early to conclude that with so many questions remaining. Not all the companies have described their systems.
One thing is certain: the Guardian does not seem to be walking back their claim.
How is running a story that defines "direct access" as a "dropbox system" where "legally requested data could be copied from their own server out to an NSA-owned system" possibly not a walk-back, given Greenwald's original story?
The original story is a report about the "direct access" claim from the NSA. People seem to have interpreted the article as claiming as fact that the NSA had a root password to all the companies' servers, but that's not what the Guardian reported. They simply reported an NSA claim.
This separate article covers the story that the original article broke. This paragraph in the article attempts to reconcile the competing claims from the NSA and the companies. What makes you say this attempt should invalidate the original story?
> When the FAA was first enacted, defenders of the statute argued that a significant check on abuse would be the NSA's inability to obtain electronic communications without the consent of the telecom and internet companies that control the data. But the Prism program renders that consent unnecessary, as it allows the agency to directly and unilaterally seize the communications off the companies' servers.
> It is possible that the conflict between the PRISM slides and the company spokesmen is the result of imprecision on the part of the NSA author. In another classified report obtained by The Post, the arrangement is described as allowing “collection managers [to send] content tasking instructions directly to equipment installed at company-controlled locations,” rather than directly to company servers.
I agree that whether or not the NSA can pull records from these companies without the companies meaningfully reviewing each request is a very important detail.
"But the Prism program renders that consent unnecessary, as it allows the agency to directly and unilaterally seize the communications off the companies' servers" is a strong statement. I agree that it is probably not compatible with the details Google has divulged about its "SFTP and manually by human only" process. But that is only one of the many companies.
I understand the "slides or GTFO" attitude that I'm seeing in these claims that the original story is inaccurate, but I think it's a bit arrogant and premature. A journalist who has seen the entire slide deck continues to tell us that the nature of what the whole presentation reveals is more invasive than a digital lockbox with workflow management software where humans meaningfully verify, evaluate, and approve requests. He could have misinterpreted the slides, but I doubt he would stick to the report so steadfastly once all these objections arose if he were not pretty confident he understood the claims in the Prism presentation.
We shouldn't yet accept as fact that the NSA can grab a user profile without explicit, individual legal approval from the company--there's a lot more we will hopefully learn. And how true this is could vary from company to company. But it's silly to ignore that a credible voice who has seen the presentation is telling us something.
I would be slow to assume that anything is known for certain in this kind of "spook biz". I also don't assume that everything interesting has been released on the slides already. (For example, there's interesting reporting in the first Guardian story about what happened to FISA 702 request rates since PRISM was introduced which includes quotations from the slides, but hasn't got much attention seemingly because the relevant slide or slides have not been reproduced yet.) However, there are a couple of things that make me fairly confident Greenwald is (or was? I'm not sure if he is still standing by his claim) wrong about this.
One is that AFAIK the Guardian and the WaPo both have access to all the same materials Greenwald has, and they have both been backing away from the NSA-has-root claim for some time. But an even bigger factor is how Greenwald defended his claim. If he'd said "there's still-unreleased material which proves me right, hold tight" that would be one thing. But instead he quoted the "collection directly from the servers" text and linked the new slide it came from, implying that the quotation unambiguously ruled out the drop-box/API interpretation and supported the NSA-has-root interpretation. But in fact "collection directly from the servers" is not at all unambiguous between the two interpretations. And even worse, the You Should Use Both slide, which Greenwald produced as his trump card, provides context which clearly undermines the NSA-has-root interpretation! In that slide it's clear that "collection directly from the servers" is being contrasted with upstream collection of IP data from the telcos. The fact that Greenwald evidently didn't pick up on this himself is pretty clear evidence that his understanding of the presentation is imperfect, whether because it's being distorted by his desire for a bigger and more damning scoop or just impeded by a lack of technical savvy.
The truth that has spread around the world is that there is a surveillance system in the US which spies on Americans.
If you feel journalists have misinterpreted the capabilities of the system, feel free to call the PRISM hotline for clarification and tell us what you find out.
I agree with your assessment of whether the Guardian claimed direct access, or if they claimed the NSA claimed direct access. But I don't like how they went about it:
"The National Security Agency has obtained direct access to the systems of Google, Facebook, Apple and other US internet giants, according to a top secret document obtained by the Guardian."
They're not any less guilty than other papers, though. "Categorical statement, according to Source" is a common construct in journalism. But I don't like it, because it de-emphasizes the uncertainty in the statement. Reversing the phrases would put the correct emphasis, I think.
The story they ran does not reliably attribute claims about surveillance to the NSA slides, but adds its own speculative claims and, later in the story, blurs the line between what the document is claiming and what the Guardian believes to be fact.
I don't see where the lines are blurred. If you're referring to what you expressed here[1], I'm not convinced. The story is very clear about the source of the information at the beginning of the article, and it then follows the common convention of not appending the repetitive "according to the source material" to the end of each sentence.
I suppose you could say that some of the statements are speculative in that they say "if the claims in the Prism presentation are true, then...", but I think that's speculative in a very narrow way. It makes sense to draw out possible consequences of the program as expressed in the source material that are rooted in fact and not in speculation.
The article is definitely not speculative in the dangerous sense; it does not say things like "direct access probably means root access to production servers" or "since this program costs only $20 million it's likely to keep growing".
It does make claims we have not yet seen evidence for, but there's no indication they're speculative...
> When the FAA was first enacted, defenders of the statute argued that a significant check on abuse would be the NSA's inability to obtain electronic communications without the consent of the telecom and internet companies that control the data. But the Prism program renders that consent unnecessary, as it allows the agency to directly and unilaterally seize the communications off the companies' servers.
> There is another remaining issue: the original article claims access to "live communications", which has yet to be supported by a slide, but it would pretty much rule out the SFTP-only possibility that some people seem to be accepting as fact at this point. Maybe there is direct access to live information from Skype and Apple, but Google insisted on SFTP? We still have a lot to find out.
More denialism from tptacek. For those who care about the actual source, the NSA slide in question says "Collection directly from the servers of".... Viewers can decide for themselves whether the NSA or tptacek has greater credibility when it comes to describing the capabilities of the NSA system.
It's not "NSA vs. tptacek". It's "Greenwald's interpretation of NSA slide deck vs. Google". Google categorically denied Greenwald's report. The Guardian then began walking it back.
Nonsense. You have repeatedly insisted that the NSA document must be wrong ("nothing to see here, folks"), simply because Google denies providing "direct access". You do this by drawing a semantic equivalence between what the NSA asserts and what Google denies, and then assuming that the NSA is wrong.
This is not a reasonable position for anyone with a technical background. Because anyone with such a background should surely realize that the two statements are not mutually exclusive, and there are plenty of ways for data to be collected which the NSA might reasonably categorize as "direct" while leaving Google with plausible grounds to categorize it as "indirect" or otherwise deny knowledge of.
Everyone, including The Guardian, now agrees that Google had "plausible grounds" to "categorize access as indirect", because that's exactly what their access was.
I agree it is a semantic distinction and that your characterization of what Google is doing is probably accurate. I disagree that holding this viewpoint gives you any grounds for attacking Snowden's credibility (as you have repeatedly done) or asserting that widespread claims of inappropriate NSA surveillance are implausible or technically impossible.
Why do people treat this sentence as if it is absolute proof of "direct access" to the company's servers and data?
It's easy to see how this sentence, in light of the entire "dropbox" thing, means that NSA grabs data directly from the "dropbox" set up and operated by the company.
There is ambiguity and room for interpretation in almost all language, especially the vagueness of a Powerpoint presentation.
>means that NSA grabs data directly from the "dropbox" set up and operated by the company.
the "direct collection" is a part of "FAA702 operations". The FAA702 is unrestricted collection of data of "non-USPER"sons, and in particular no individualized FISC orders required.
Now there is a choice - either Google combs their data, decides who is FAA 702 "eligible" and of interest to NSA, and dumps the identified "non-USPER" data to the "dropbox", or the NSA does the combing/identification itself (and if NSA does the combing, where does it do it? On NSA servers attached to Google datacenters, or does it transfer all the data to an NSA datacenter and comb it there?). What do you think NSA has chosen?
Individualised FISC orders are required. http://www.govtrack.us/congress/bills/110/hr6304/text (Or more precisely, FISC review and approval of individualised FISA 702 orders are required - the court doesn't actually issue 702 orders.) That's almost the only protection non-resident aliens have under FISA and the jurisprudence that upholds FISA, but the protection is there. It could be undermined by issuing millions of FAA 702 orders or one FAA 702 order covering millions of people, but we have reasonable assurance that this hasn't happened (yet) https://news.ycombinator.com/item?id=5865717 .
Yes, a FISA order, like a warrant, can identify more than one person. But again, in order to target (for example) every Facebook user, the 2000 FISA orders in 2012 would have to have covered an average of about 500,000 Facebook accounts each. That has probably not happened https://news.ycombinator.com/item?id=5865717 .
Again, for FAA 702 collection no individualized FISC order is required. 1 order for the whole of Facebook, 1 order for the whole of Google, ... it seems that NSA really does need that server farm in Utah.
> Again, for FAA 702 collection no individualized FISC order is required.
That's not a FAA 702 order though. In fact it's in a different category to all the 70* orders, which fall under the "electronic survellance and/or physical searches" category in the https://www.fas.org/irp/agency/doj/fisa/2012rept.pdf annual report. The Verizon order would be a FAA 501 order, though people only ever seem to refer to it as a 50 USC § 1861 order. They're the "Applications for Access to Certain Business Records (Including the Production of Tangible Things)" on the annual report. These orders seem to be intended for things like the Verizon metadata, which it seems (IANAL) are considered to be unprotected by the probable-cause requirement even for USPERS. So I presume a 501 order couldn't be used to grab users' full private data from Google. In any case Google has denied that it has ever complied with http://www.wired.com/threatlevel/2013/06/google-uses-secure-... (or even been served http://googleblog.blogspot.ie/2013/06/what.html ) any order nearly as broad as the Verizon one, and Facebook and MS have more or less followed suit.
No, but then I also find it unlikely that they're actually giving away their Top Secret program by getting such data from Facebook through a secret order (whether it covers one person or many) and then handing it over for immigration desk staff to wave around.
Why is it a non-issue? One interpretation implies that the NSA is constantly accessing the data of anyone, anywhere. The other implies something incredibly smaller in scope and with a legal framework (which you may or may not like) behind it.
This seems to be the entire issue to me with PRISM - whether it's an unprecedented level of access or merely a statement of what has been known to be going on, and what was covered under FISA, for years, but just in a more technically expedient manner.
It is a non-issue because tptacek is inventing semantic distinctions (apparently irrelevant to whomever created the NSA document) to attack the credibility of multiple whistleblowers who allege widespread surveillance and abuse.
Especially given the fact that the leaked documents specifically encourage analysts to use a range of tools (i.e. "You should use both"), he has no technical grounds for suggesting that such a minor semantic debate (between NSA and Google) discredits the claims of multiple people with first-hand experience of the NSA who are coming forward with claims of its abuse of power.
Every USG veteran knows to be skeptical of mission briefs, regardless of important-looking classification markings.
That deck was put together by a mid- to low-level government program manager who owned the program. He is playing politics, making his program sound like the most awesome thing EVAR so he gets promoted. It's not an "official" document, despite all the fancy markings.
"stories based on the most extreme possible interpretation of what seems to be Greenwald's own extreme interpretation of a slide deck covered the entire front page for days."
The interpretation wasn't extreme, it was literal. What it looks like is that the NSA claimed to have direct access, while the denials that have been made make it seem like they indeed might not have it.
The thing is that the NSA said they had this kind of power to someone, in a PowerPoint presentation. They have said that they wanted it. And the regime of secrecy today makes it extremely difficult to determine for certain what they do and don't have.
I mean, if the NSA director was being truthful to congress in saying they weren't systematically spying, he's being very coy now when confronted with apparently contradictory evidence. Possibly, like many secret bureaucracies, the NSA was wedded to telling someone, likely a high official, that they were uber-powerful, and was, by the same token, attached to getting that direct access even if they indeed currently lack it.
I mean, I think the NSA is discovering the weakness of secret approaches - that it makes all denials and all limits implausible. Hopefully, this will force a situation where all the "secret laws" and such get done away with.
The exact quote is "Collection directly from the servers of..."
This - depending on, say, the context and narration - can mean any number of things. Considering the intended audience and what PRISM actually is, it would've meant "we collect this data that's stored on Google's servers" rather than, perhaps, "we seize individual computers" or wiretap the information some other way.
Which is a pretty obvious way to interpret those slides! But of course this is the NSA - clearly the first thing we must do is ignore the simplest explanation and start proposing hardware backdoors in Intel processors instead.
Why is this tidbit such a big deal to you? I think you have unrealistic expectations.
Have you ever tried to get a story published in the "mainstream" media?
I've done technical writing. I also spent years explaining my pet issue to journalists, reporters, policy makers, other lobbyists.
It's a miracle if the message gets thru mostly correctly.
You ever read a story about topic in which you're an expert? Then you know the press never get the story completely "right".
Even if lawyer Greenwald understood what computer expert Snowden was trying to explain, he'd still have a hard time running that explanation past his editors. And I'm absolutely certain that whoever is reporting this tried to explain novel ideas using layperson's language.
And yet most of HN took it as gospel truth and shouted down anyone expressing skepticism. Tptacek's post is chiding the credulity of HN as much as the weak editorial oversight at the Grauniad (which is capable of fact verification, and has historically been good at it).
And I don't even care about people's (clearly reasonable) belief that NSA had put into place an overreaching and possibly unlawful domestic surveillance system.
My problem is with people who appear convinced of the idea that Google's leadership are conspiring with NSA to deceive its customers, or that NSA is employing exotic and outrageous methods (like optical signal intercepts --- it's right there in the name Prism!), or that Palantir is somehow involved in Google-related surveillance, or that NSA is "disappearing" people... the list goes on and on.
I have a problem with the idea of conversations on HN reifying speculation that Google or Facebook are defrauding their customers; I also have a problem with bullshit stories clouding the very real problems we do have with overreaching surveillance.
This helps me understand where you're coming from.
I understand your frustration, but I think you're attacking the signal instead of the noise you're talking about.
There's definitely a lot of nonsense floating around (optical signal intercepts from these providers, etc.). This is noise.
But there is also a very real signal. A journalist with an inside source is telling us about a program that indicates some of these companies have given the NSA a level of access that most of us who know anything about systems would not be comfortable with.
While the degree to which this is happening has not been supported with evidence, numerous other claims have been. It has already forced the declassification of orders to the phone companies. This seems to be a very credible source.
Instead of attacking the noise, I see you attacking the signal. You're saying "These companies could not be doing this! Why would you believe someone saying they are!"
Maybe there is no such program, maybe the NSA doesn't even know what it's doing enough to make an accurate presentation, maybe the presentation was planted for the guy to find, and maybe the journalists are hacks. These are all possibilities. But with what we know, these are the extreme possibilities, and speculation about them is mostly noise. Defending hack-job articles attacking the source and messenger isn't increasing the quality of the conversation.
Why are you so invested in not getting the answers to the questions I'm raising? That's what your argument is: that my questions are unimportant, because they don't address your more important questions.
My view is that we will be getting answers to these questions soon. The journalist is closer to the answers than we are, but he would prefer the government and the companies disclose the answers in a responsible manner so he doesn't have to be the arbiter of what gets released or not.
It seems that from what we're getting from the government and companies, we're getting closer to answers, but we're not there yet.
If the question you're raising is about the nature of the Prism program, I am very much looking forward to an answer to that. But I think the best approach is to see the evolution of responses from government and companies rather than smear the journalist as a hack for not immediately releasing the whole presentation.
I'm not smearing him for not immediately releasing the whole presentation. I'm noting places where his own publication contradicts statements that he made, statements that Google and Yahoo and Facebook categorically denied, which are vital to the story.
> I have a problem with the idea of conversations on HN reifying speculation that Google or Facebook are defrauding their customers; I also have a problem with bullshit stories clouding the very real problems we do have with overreaching surveillance.
Agreed. In future, I encourage you to state your objections plainly. Like this (above). And perhaps focus on the issues, facts, and details, and less on the players.
All the pennies (and beam splitters) have yet to drop. I would be shocked if the NSA wasn't coupling company cooperation with direct packet inspection/backbone wiretapping. They could even use the FISA requests as training data under some scenarios to help reverse engineer protocols.
You really want to bet that there is nothing more to see here, that more Room 641As don't exist? As for "conspiracy-theoretic nonsense", a hidden conspiracy to wiretap hundreds of millions of Americans (derided as a "myth" by the NSA lawyer Rajesh De and lied about in front of Congress by Clapper) was just revealed -- because a man risked his life and freedom to leak the first FISA order in 35 years. Given that senior government officials are actually admitting in realtime to past untruths, it might be a good idea to be a wee bit less credulous when it comes to our government overlords.
Precisely. All of these claims about what the NSA is "not doing" are silly, and almost certainly false. It's not an "either-or" situation. It's an "and" situation.
The NSA is tapping undersea cables and sucking up e-v-e-r-y-t-h-i-n-g that crosses them AND getting data through court orders and national security letters AND getting all phone metadata from all phone companies AND tapping into the backbones at appropriate places and sucking up everything that crosses those places AND listening in to every satellite communication AND......
I want to bet that NSA does not have direct access to Google and Facebook servers. I don't have an opinion about NSA's influence over telcos (in fact, I expect you & I agree about the extent to which NSA exerts undue influence over telcos).
Google runs its own fiber. You want to bet that the NSA has, among its reported [1] "10 to 20" 641A clones, a tap into Google's trunk? I would bet on that. I would also bet on NSA operatives within Google, some clandestine. If Google can't identify Chinese spies with 100% probability, how likely are they to identify American ones?
Simple terms. If either of those two points comes to light (NSA backbone taps of Internet companies, or clandestine NSA/IC operatives planted in Internet companies, perhaps in foreign subsidiaries of the same), will you publicly post that you were wrong/credulous and the "conspiracy theorists" were right?
After all, if the FBI infiltrated [2] the unimportant KKK, the US government definitely has an incentive to infiltrate the all-important Google.
> Former director of the NSA’s World Geopolitical and Military Analysis Reporting Group, William Binney, has estimated that 10 to 20 such facilities have been installed throughout the nation
Say what? Clandestine foreign signals intelligence is NSA's charter; it's the entire reason the agency exists. It does not follow logically that because NSA is executing its duties to its lawful charter, it must also be exceeding its authority domestically.
What? They most certainly are exceeding their legal bounds domestically. It's called the Fourth Amendment.
And who even knows what the NSA's "lawful charter" is? Secret interpretations of laws (meaning secret laws) evidently mean it's ok for them to capture the phone records (and, as reported, credit card statements) of everyone. They lie about what they are doing under oath and they hunt, jail, and kill those who tell the truth.
They prevented these unconstitutional secret interpretations of the law from getting to the courts or to the public. That's the only reason why they haven't been struck down.
> Deep in the oceans, hundreds of cables carry much of the world's phone and Internet traffic. Since at least the early 1970s, the NSA has been tapping foreign cables. It doesn't need permission. That's its job.

> But Internet data doesn't care about borders. Send an email from Pakistan to Afghanistan and it might pass through a mail server in the United States, the same computer that handles messages to and from Americans. The NSA is prohibited from spying on Americans or anyone inside the United States. That's the FBI's job and it requires a warrant.

> Despite that prohibition, shortly after the Sept. 11 terrorist attacks, President George W. Bush secretly authorized the NSA to plug into the fiber optic cables that enter and leave the United States, knowing it would give the government unprecedented, warrantless access to Americans' private conversations.
Clandestine foreign signals intelligence is their charter indeed. But looking at the means they use in regards to their stated charter can give us hints as to how they operate domestically.
So if PRISM is just the tip of the iceberg, what method do you suggest for speculating as to what exactly they are doing domestically? Maybe, say, look what they are doing overseas?
The coverage has been ludicrous. The timeline was pretty much:
Day 1: We see some out of context powerpoint implying Google, Facebook, et al. have given the NSA full access to private info.
Day 2: We find out that wasn't true at all.
Days 3+: We keep filling HN with articles about how awful it is that Google, Facebook, et al. gave the NSA full access to private info.
The tech media really dropped the ball on this one. It's absolutely insane to think all of the CEOs who said they'd never heard of PRISM or given the NSA any special access were lying. The CEO of a large publicly traded firm would be taking a huge risk and be certain to be caught for such bald-faced lies.
It reminds me of a great joke Stephen Colbert told about George W. Bush at the White House Correspondent's Dinner. Paraphrasing: "he'll think the same thing on Wednesday that he thought on Monday, no matter what happened on Tuesday."
Very naive. Perhaps the CEOs are lying in order to avoid going to prison on trumped up charges?
"In 2006, USA Today published an article that revealed that Verizon, AT&T and BellSouth (since acquired by AT&T) were voluntarily providing the NSA with millions of call logs. It also said another landline provider, Qwest (since acquired by CenturyLink), refused to hand over logs without a warrant, and that the NSA had rejected Qwest's insistence that the matter go before the FISC.
In 2007, former Qwest CEO Joseph Nacchio was convicted on 19 counts of insider stock trading. During an appeal, Nacchio's lawyers claimed the charges were retaliation for Nacchio's refusal to go along with the warrantless surveillance program while he ran Qwest."
The notion that Joseph Nacchio, who enriched himself to the tune of tens of millions of dollars by defrauding his shareholders when he sold his stock to them at prices he knew to be inflated due to secret information he possessed as an insider, is somehow a victim of NSA is another one of those "things people think on Monday and Wednesday" --- except in this case it's a Monday 6 years in the past.
Or Martha Stewart. She was jailed for insider trading right? (Or technically for lying during an investigation into it.) Wonder what she wouldn't tell the NSA.
Very naive. Perhaps Joseph Nacchio, you know, committed insider trading. Perhaps his lawyer's excuse, that it was retribution from the NSA was, you know, a lawyer's excuse. And perhaps the sympathetic tech press regurgitates it like a fact.
Edit: and by the way, it was a jury trial. So not only were the SEC and the judge who sentenced him acting on behalf of the NSA, but so was a 12-person jury. That's more believable.
The slide says "PRISM\n Collection directly from the servers of these U.S. Service Providers: ..."
How is it an "extreme interpretation" to read that as something "direct" is likely to be happening involving "servers of these U.S. Service Providers"?
Why should we bend over backwards to convince ourselves that what they really meant (but didn't say) was that they had indirect access only through intermediate layers of privacy-preserving systems that we architect from whole cloth in our own imagination?
It casts doubt on Snowden's knowledge of the situation. It makes it seem like Snowden didn't understand what that slide was referring to.
Snowden's rationale is that he was in a position to understand exactly what was going on and that he leaked because he felt it was necessary for the public to know what he knew. Snowden himself contrasted his situation with Manning - he pointed out that Manning leaked hundreds of thousands of documents indiscriminately without possibly being able to read them all or understand their consequences or the risks and benefits involved in leaking them. Snowden said that, by contrast, he was very selective in what he leaked and he understood the issues completely.
But if one of the most serious claims that Snowden is making is wrong, it calls into question whether Snowden really had the knowledge that he claims he had.
It doesn't discredit Snowden that he didn't clear up the confusion that results from folks not wanting to believe what it literally says in the authentic deck. He says he was an IT guy, but not the one who personally installed every undersea fiber tap or interception device.
Snowden's undoing may be trying to go only halfway. People are really pissed off about this and are going to be looking for someone to blame. I wonder if he's going to end up looking even worse than Assange in the end by assuming personal responsibility for the effectiveness of the selective editing and redaction.
Personally, I don't feel like the weight of this story turns on the specifics of the interception hardware and the "directness" of the access. There's a slide deck that says "Dates when PRISM collection began for each provider" and "Collection of communications on fiber cables and infrastructure as data flows past". And that's just the four slides that The Guardian and WaPo didn't feel were too hot to handle.
I really doubt Snowden's interpretation of the slide had much to do with the article revealing Prism. The article content is seemingly based only on the 41-slide presentation itself, not the claims of its source.
It depends on what you mean by "servers of these US Service Providers" - whether that means an FTP site that they've set up or "full access to any server of the provider's".
The NSA gathers most available data about most people.
What does it matter which server establishes the socket connection?
I wrote the backend for the exchange of electronic medical records. We used numerous protocols: scp, ftp, http, etc. And numerous formats: csv, hl7, xml, etc. We had numerous partners: pharma, labs, hospitals & clinics, EMTs, CDC, etc.
In all cases, I'd feel comfortable saying we had "direct access". Because in all cases, the audience of doctors, nurses, execs, admins knew enough to make policy decisions.
Would you drop this chew toy if the Guardian had written "near realtime live data feed"?
"Direct access" at this point seems to be a semantic red herring. Until further details of the program are declassified, or, more likely, leaked, we won't know for sure.
I'm willing to bet that the truth lies somewhere in between all of this, and that Snowden and Page can both be standing near the truth, leaving the administration and NSA out in the cold.
And further, why is the issue how they have access to my private correspondence without a warrant, rather than the fact that they have access to my private correspondence without a warrant?
That's not how it works. I don't need to know how precisely the NSA "slurps up all the data" to challenge a specific claim the Guardian made about the degree of discretion NSA has in obtaining data from Google or Facebook.
You're right. Surely the leaked documents from the NSA must be falsified and Google et al is telling the truth. And there is certainly no room in the middle. And we definitely have nothing worry about.
?
I'd like to think you are more intelligent than this, but the only real other option is that you are trolling, and neither option is attractive or plausible.
"Our story was written from the start to say NSA claimed this, telecoms deny-we wanted them to have to work it out in public what they do. We reported - accurately - what the NSA claims. We reported - accurately - what the companies claim. It conflicts. That's why we reported it."
You keep making these statements but they don't agree with your own links.
"The National Security Agency has obtained direct access to the systems of Google, Facebook, Apple and other US internet giants, ___according to a top secret document obtained by the Guardian___." (Em mine)
This is text from the message of yours which you are linking to here. It agrees with the quote bstrand provided.
> When the FAA was first enacted, defenders of the statute argued that a significant check on abuse would be the NSA's inability to obtain electronic communications without the consent of the telecom and internet companies that control the data. But the Prism program renders that consent unnecessary, as it allows the agency to directly and unilaterally seize the communications off the companies' servers.
Complete paragraph quote. Both claims in the last sentence --- that NSA has direct access, and can unilaterally seize comms off servers --- now appear to be untrue, so much so that the Guardian itself is now walking them back.
I'm disappointed, tptacek. You're a sharp guy. Why muddy the waters like this?
The deck identified specific providers as on board or coming on board. Their denials looked similar. The deck characterized access as "direct"--direct in some context. As a wordsmith yourself, you certainly recognize such semantic fuzz, the loose context-dependent coupling of sign and signified.
And yet, you collapse the possibilities down, giving credence to authority. You cast scorn and ridicule, you narrow the bounds of respectable opinion with language like "extreme interpretations" and "conspiracy-theoretic nonsense".
One cannot have an informed opinion on a secret program. The harder you try, the more vulnerable you become to information censoring. Try the holistic approach, and reserve judgement.
Why am I the one who needs to reserve judgement? I'm not the one saying things like "the name PRISM suggests optical intercepts on backbones", or "the name Palantir suggests that that company is acting as an all-seeing eye for NSA".
I'm not the only person with these concerns. Here's Karl Fogel of QuestionCopyright.org:
Forgive my uncertainty, but what precisely do you have in mind when you say 'optical intercepts on backbones' here? Do Room 641A and "Collection of communications on fibre optic cables and infrastructure as data flows past" from the You Should Use Both slide
http://www.guardian.co.uk/world/2013/jun/08/nsa-surveillance... cover it? (Ironically enough the slide suggests that PRISM is the name for the non-upstream half of the operation.)
This is exactly about the content of the leaks. If the supposed leak is baloney then it is not misdirection to call it out. The debate over "direct access" is not a small detail. It is the difference between a major story and something that tells us nothing new. I think this was best explained here: https://medium.com/prism-truth/82a1791c94d3
Direct Access is a red herring. What I want to know is if they had full content mirroring, MITM nodes, etc.
Personally, I believe there is a possibility that the name PRISM itself is an allusion to the method of data slurping the NSA has been using; there are plenty of articles talking about their fiber splicing actions, etc.
I take PRISM to mean they were taking streams and were able to focus on a particular 'wavelength' in the stream and mirror it to their own systems.
I don't think they had "direct" access to FB systems - even though there are plenty of former CIA/SS/military personnel (and potentially NSA moles) already openly working as actual employees of FB - I don't think they necessarily needed full and direct access; the NSA taps the ISPs directly.
I think given the "direct access" note in the slide, SV's weird focus on the word "direct", the name "PRISM", and the outing of AT&T 641A a few years ago it is definitely within reason to assume they're beam splitting the fiber signal at the ISP level. That provides plausible deniability to Google, FB, etc.
What I want to know is how much of that signal are they (a) storing indefinitely, (b) DPI or SSL decrypting, and (c) merely keyword analyzing. If they're storing it indefinitely and/or utilizing SSL MITM then we're pretty much done as far as privacy is concerned. But if they're just keyword analyzing cleartext packets then honestly who gives a shit.
My gut is telling me they're decrypting SSL. FB and Google moved to HTTPS everywhere a few years ago and they're clearly getting this data somehow so...
>"how much of that signal are they (a) storing indefinitely, (b) DPI or SSL decrypting, and (c) merely keyword analyzing.
I think this probably nails it - but in reverse order:
They peel a stream off and keyword-analyze it - in conjunction with other weights (who is talking, to whom, between where, over what medium, when - i.e. the metadata) - and then they store the key ones.
If I am talking to my Grandma, they drop it. If I am talking to an unknown number in [foreign country], they test for keywords; if any are hits, they store it and add more metadata.
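To make that concrete, here's a toy sketch of that kind of triage - every keyword, weight, field name, and threshold below is invented for illustration, not taken from any leak:

    # Toy sketch of keyword-plus-metadata triage. All values are invented.
    KEYWORD_WEIGHTS = {"detonator": 5.0, "wire transfer": 2.0}
    COUNTRY_WEIGHTS = {"US": 0.0, "XX": 3.0}  # "XX" = hypothetical country of interest

    def score(record):
        s = sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in record["text"].lower())
        s += COUNTRY_WEIGHTS.get(record["callee_country"], 1.0)
        if not record["caller_known"]:  # unknown counterparty raises the score
            s += 2.0
        return s

    def keep(record):
        # Store (and further enrich) only records that cross the threshold.
        return score(record) >= 4.0

    print(keep({"text": "hi grandma", "caller_known": True, "callee_country": "US"}))  # False: dropped
    print(keep({"text": "call me", "caller_known": False, "callee_country": "XX"}))    # True: stored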
I mean - the whole thing is what I understood the Mythical Project Echelon to be -- but there was never any concrete evidence of Echelon to the degree that it was rumored to be, until now.
Now we know that they definitely trap any packet they can wrangle.
The tech side of HOW they are seeing everything is inconsequential to the fact that they ARE seeing everything crossing the pipes.
They do do "upstream collection", it says so in the released slides. However they also do "direct collection" from the webapp companies ("You Should Use Both"), so it doesn't provide SV with plausible deniability.
It's not semantics at all. The "scope" is infinite. The government can get pretty much any information that exists pursuant to a valid warrant or subpoena. That has been true since the founding of the republic, and was true in England for hundreds of years before that.
What matters is how direct that access is. Does the government have to submit a warrant to get user-specific data, and then get that data back in a drop-box-like system? Then there's nothing illegal, surprising, or even sketchy about that. Can the government get direct access to Facebook's servers without going through their legal department? Then that's a big deal!
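For what it's worth, the benign reading - legal review first, then a push of just the responsive records - could be as mundane as this sketch. The host, credentials, paths, and the use of paramiko/SFTP are all my assumptions, not anything from the slides:

    # Hypothetical sketch of the "drop box" model: after a request passes
    # legal review, the provider pushes only the responsive records to an
    # agency-controlled server. All endpoints and paths are invented.
    import paramiko

    transport = paramiko.Transport(("dropbox.agency.example", 22))
    transport.connect(username="provider",
                      pkey=paramiko.RSAKey.from_private_key_file("provider_key.pem"))
    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.put("/export/approved/request-1234.zip", "/incoming/request-1234.zip")
    sftp.close()
    transport.close()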
It's not true that NSA-has-root was simply Greenwald's interpretation. Barton Gellman and Laura Poitras took the same interpretation in the Washington Post's first story http://articles.washingtonpost.com/2013-06-06/news/39784046_... (the paragraph suggesting that this might be a misinterpretation was added later). Greenwald did also have a coauthor on the Guardian story too. What does seem to be true is that Greenwald backed down from the claim more slowly than WaPo or even the Guardian. Still the decision to focus on Greenwald here does seem like someone's rhetorical ploy, if it isn't simply glee at the chance to get at a much-disliked figure.
Eh, I'm just filing this incident away as incontrovertible proof that hackers are just as mindless and irrational as any other group of people. I expect to be referring back to it in a few years.
Greenwald is our version of Richard Land. Not even interesting sociologically.
There have been several leaks from Snowdon; they are not limited to a single slide deck. I find it curious just how vitriolic you are on this single issue (direct access). We simply don't know the full story on PRISM or the other associated programs, we don't know how much of the Google statement was true in a broad sense, and how much was simply legally true in a narrow sense. I would be very interested to know the real process (as would everyone here), as it could vary from highly automated with legal assistants clicking 'send all data' 100 times a day at the Google end without having sufficient information to actually check requests, to highly manual, with teams of lawyers assessing every request carefully with full information (though given the cavalier treatment of Congress by the NSA, I highly doubt this). However, I don't think it matters too much at present - what matters far more is the extensive and intrusive surveillance that the NSA feels it can pursue without informed congressional oversight rather than the details of one specific program.
What we do know is:
Snowdon leaked slides showing the Boundless Informant program to catalogue data - almost 3 billion records collected over the month of March 2013 just from US sources - that's a huge amount of data for an agency that doesn't have a remit to surveil Americans.
Every phone call in the US is now being recorded by the NSA - that is almost the biggest story here, since we don't know if they also tap email headers, which would probably be worse.
Oversight of the NSA and other agencies is impotent, and most of congress simply wasn't aware of even the broad scope of the surveillance, let alone details.
DNI Clapper lied to congress with impunity over surveillance of Americans, the NSA lied over not having counts of records.
The NSA's standards are incredibly lax (allowing this leak to happen - he shouldn't have reached the front door with this data), and their interpretation of their remit worryingly broad (extending to collecting at least phone (and probably more) metadata on every single American and company). If the IT tech Snowdon had access to all this data, other countries probably have it already by other means.
Snowdon leaked slides on PRISM claiming 'collection directly from the servers' - this was presented by Greenwald and the Guardian as a claim to be verified and contrasted with Google's denial of direct access (I feel both are probably true - the slide in a broad sense, and Google in a narrow sense). The quotemarks in Guardian articles are there to attribute, not to undermine the content quoted.
Snowdon claims to have had access to anyone's email at providers like Google without obstruction at his relatively high clearance level (unverified) - I think he wanted to point out the lack of supervision of the process (hence ref. to president's personal email), and the lack of interaction with Google staff - the truth of this claim has yet to be tested and various important points (how quickly, what supervision, what sort of data etc) are elided. I'd like to hear more from Snowdon and Greenwald (or the US gov) on this.
If you are truly interested in all the issues raised by these leaks, you should address those very real and serious topics, not minor quibbles over whether a journalist's interpretation of the technical details of a transfer of records is correct. I can see why people might have jumped to conclusions over 'direct access', and do feel it's important to get to the bottom of the real process (I'm sure Google would love to tell, if only to put the wilder theories to bed), but the reality without that is bad enough - given the many other programs we know about, and the admitted details of PRISM/FISA requests. I was a bit disheartened by the initial Google response but am pleased they are now pressuring the government to release figures for FISA requests, so that people can see the extent of the program as it impacts Google, but this issue is about more than Google and Facebook and records they might return on the basis of FISA requests. That would help define the scope of one of the many programs.
That reality of broad surveillance without adequate supervision is enough to put people off doing business in the US or hosting data there, and the acknowledged facts are enough to make it very easy for an unscrupulous president like Nixon to turn the US into a surveillance state and capture all the levers of power very quickly - something that should worry all American citizens. That is what Snowdon was warning against (see the last part of his video), and that is the most insidious part of this sort of widespread surveillance unchecked by public law and public courts - not how it is used today, but what it might enable if it is allowed to continue.
What I meant was the fact of the call is recorded (see leak on phone records), but I see how this is ambiguous - sorry about that and thanks for the correction. I also managed to spell his name wrong as Snowdon... I actually think recording the metadata associated over long periods (who,when,where) is as dangerous as recording all the calls, but didn't mean to muddy the waters further there.
although again this is unattributed and simply 'obtained by the Guardian', but the story was written by Glenn Greenwald just before the Snowdon video, and I seriously doubt they have two sources with top secret clearance in the NSA.
And of course there are the assertions in his video, which I also consider a leak (though so far without details to back it up).
Direct access is only the "most extreme possible interpretation" to a technocrat.
For all ordinary people, i.e., most of the world outside HN, the "access" part is the element that constitutes the scandal. "Direct" merely hints at the method, of which the details are considerably less relevant to most people than they are to us.
If the NSA had received the data via flash drives attached to carrier pigeons it wouldn't have made any difference to the core of the story.
It definitely doesn't make the story "a lie", at least not to anyone else but lawyers and techies.
Direct access... FTP... you say chicken, I say fish. It's not that big of a deal. It seems like you're TRYING to make it a big deal, but it only serves to distract from the real issue.
What is a big deal is that my personal conversations and yours and your mother's are being recorded, read and stored to be used against us later. That is a big deal. Human rights. Civil liberties. Not living in constant fear. Those are the real issues.
Drop box that Google selectively places files into pursuant to court orders, backdoor that allows NSA to pick and mix its own selection of communications. Not that big of a deal, huh?
Steve Gibson suggests [1] that direct access means direct access to their internet pipes, effectively tapping or mirroring their content for analysis. He says that "direct access" has more meaning to non-techies than saying they "are listening at the upstream router". Very interesting podcast too. The $20 million per collection point is to build the secure secret rooms [2] at their [google, facebook, apple, microsoft, etc] telecom providers, split the fibers, and for all the gear they need to do it. He even suggests this is legal, since it is the open internet and anyone is allowed to peer into the traffic, since this is not actually in any of their [google, facebook, apple, microsoft, etc] data centers, but at the peering level.
Honestly, I have to take anything that Steve Gibson says with a massive grain of salt. Remember, this is the man who claimed that raw sockets would destroy the Internet.
Given that his suggestions are 100% speculation, I'd be willing to put any amount of money on the line that it's nonsense.
I am not a network expert, but won't most of that data still be encrypted through SSL? If that is the case, it would be almost useless since sites like Facebook and Google have made the transition to defaulting to using SSL.
That may be true. However, the giant elephant in the room is the Utah data center.
This is a data center designed to, supposedly, store data on the scale of a yottabyte. I only say "a" yottabyte, because to assume even slightly greater than that is sheer lunacy.
That is freaking massive. If you took all terrorist cells and all terrorist activity for the history of terrorism and terrorist activity, you would not even touch a fraction of a percent utilization. We're talking rain drops in the ocean.
There is no way the NSA is merely watching the bad guys here. The data center is a few magnitudes too large for such a task.
I would assume right now they are merely recording all data, in hopes that one day they will have technology to quickly crack encryption. However, even without knowing what is said (the content), the metadata of connections gives plenty of information on what people are doing.
and yes, agreed - analogous on so many levels - a publicly admitted place where secret government things happen, which can be invoked to give an aura of reality to conspiracy theories true and false alike.
It's already been established that they redefine "collect" as a human actually viewing the data. They admit that they store roughly everything they can on everyone.
The yottabyte stuff is basically BS. HN user Jabbles found a possible source for that, which just says '''The target GIG supports capacities exceeding exabytes (10^18 bytes) and possibly yottabytes (10^24 bytes) of data.'''[0]
Exabytes are eminently reasonable; yottabytes are not.
A yottabyte would take 3.333×10^11 3-terabyte hard disks. A 3.5 inch hard disk has a volume of 376.77344 cm³. The NSA buildings in Bluffdale take up around 1.5 million square feet. That means just the hard disks for a yottabyte would be just a bit less than a kilometer high (901 meters). That's about 100 meters taller than Burj Khalifa. I think it's unlikely it's designed for anything near a yottabyte.
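The arithmetic holds up; here's a quick sanity check in Python under the same assumptions (3 TB drives at ~376.77 cm³ each, 1.5 million square feet of floor space):

    # Back-of-the-envelope: how tall would a yottabyte of 3 TB disks stack?
    YOTTABYTE = 1e24                    # bytes
    DISK_BYTES = 3e12                   # one 3 TB drive
    DISK_VOLUME = 376.77344e-6          # m^3 per 3.5" drive
    FLOOR_AREA = 1.5e6 * 0.09290304     # 1.5 million sq ft in m^2

    disks = YOTTABYTE / DISK_BYTES               # ~3.33e11 drives
    height = disks * DISK_VOLUME / FLOOR_AREA    # ~901 m
    print(f"{disks:.3e} disks, stack height {height:.0f} m")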
Clearly they do not yet have a yottabyte worth of data. It would be silly to build a warehousing site like that and have it be full-up with data the day it first opens for business.
I expect these sizing claims (which presumably come from some sort of government statements about the facility) are based on a timeline on the order of ten years or more. A YB in 2024 is going to take a lot less physical space than a YB in 2014.
You'd need a 2-3 order of magnitude increase in storage density to get anywhere near reasonable. I suspect that's not going to happen in 10 years but I don't know the current and projected rates of storage density increases.
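For reference, the compounding those orders of magnitude would require (pure arithmetic, no assumptions about actual drive roadmaps):

    # Annual growth needed for a 100x or 1000x density increase in 10 years.
    for factor in (100, 1000):
        annual = factor ** (1 / 10)
        print(f"{factor}x over 10 years -> {annual:.2f}x per year")
    # 100x needs ~1.58x/year (58% annual); 1000x needs ~2x/year (100% annual)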
Based on that, it sounds like the yottabyte claims refer to raw, unprocessed data collected. Off the bat, I'm willing to believe in a 2 orders of magnitude decrease after data reduction techniques are applied to the raw data. My experience with data collected from telescopes was that we got about 100:1 reduction on the raw data versus what went into permanent storage.
The San Antonio site was news to me, though obviously no big secret since that book was published years ago.
Maybe they haven't filled it up yet. A quick search suggests the whole internet is on the scale of exabytes, so a yottabyte could store a million internets.
Well even ignoring the PRISM part and Google and all that, look on the slide again: "Collection of communications on fiber cables and infrastructure as data flows past"
Yes. Not only that, but Google and Facebook, both of whom have been tarred by the PRISM story, have gone way out of their way to make SSL/TLS more pervasive in the real world, to the point where their efforts are resulting in huge improvements to clientside software.
The problem is that cleartext can't be trusted in the cloud. The only way to really restore confidence is to enable truly private endpoint-to-endpoint secured real time and store-and-forward communication.
Email, however, would not be encrypted. As Google is one of the largest cloud email providers, this could provide a lot of information for the NSA. But I'm not sure Steve Gibson was correct in his assessment. Looking at that final PowerPoint slide from the PRISM deck, it seems that upstream tapping was occurring, but the slide also presents PRISM as a separate program. It could be that PRISM was a related router-tapping technology or something else entirely.
It depends on where the interception device is installed.
Most of the large-scale sites are doing SSL offloading, so one of the first things that happens is the traffic is decrypted. Often this happens in the front end load balancer.
If the setup is as the WP described:
> “collection managers [to send] content tasking instructions directly to equipment installed at company-controlled locations,”
and this equipment is installed behind the SSL offload devices, it would see all the customer data in the clear.
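To illustrate why placement matters, here's a toy TLS-terminating proxy - every endpoint and file name is invented, and a real front end would be a hardware load balancer or similar, but anything tapped behind this point sees plaintext:

    # Toy TLS offloader: the TLS session ends here, and the hop to the
    # backend is plaintext -- exactly where a tap behind the offloader
    # would see customer data in the clear.
    import socket, ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("cert.pem", "key.pem")      # placeholder cert files

    listener = socket.create_server(("0.0.0.0", 443))
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, _ = tls_listener.accept()             # TLS handshake + decryption
        request = conn.recv(65536)                  # plaintext from here on
        backend = socket.create_connection(("10.0.0.5", 8080))  # invented backend
        backend.sendall(request)                    # crosses the LAN unencrypted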
Nobody has ever presented any evidence ever that NSA has Google's SSL/TLS keys. Not only that, but Google has (a) pinned their public keys so that the browser binary itself can reject bogus- but- signed certificates, and (b) pushed heavily to enable forward secrecy in TLS, which means that even if you compromise their key, you can't decrypt sessions without being an active man in the middle.
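On (a), pinning boils down to hashing the leaf certificate's SubjectPublicKeyInfo and comparing it against values shipped with the client. A minimal sketch, assuming the cryptography library and a placeholder pin:

    # Public-key pinning sketch: a bogus-but-CA-signed certificate still
    # fails this check, because its SPKI hash isn't in the baked-in set.
    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    PINNED_SPKI_SHA256 = {bytes.fromhex("00" * 32)}  # placeholder pin value

    def pin_ok(der_cert: bytes) -> bool:
        cert = x509.load_der_x509_certificate(der_cert)
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo)
        return hashlib.sha256(spki).digest() in PINNED_SPKI_SHA256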
That's one, but the basic mechanism --- deriving session keys from RSA-signed DH exchanges --- is old, and is simply a ciphersuite (Google deploys an ECC version of it for performance).
That proposal is for an ephemeral, per gTLD client key and an example of mutual authentication that aims to defeat _active_ attackers (MITM).
Perfect forward secrecy in TLS is a bit different: the ephemeral Diffie-Hellman key exchange sets up a shared key that is protected from a _passive_ attacker who observes the TLS-encrypted communication and later gets a copy of the server's private key.
Not quite. Having someone's private SSL key doesn't necessarily let you read the contents of their SSL traffic due to forward secrecy. It does let you sign messages as if you were them, or MITM their traffic.
For instance, on HN, Chrome is currently doing this:
Your connection to news.ycombinator.com is encrypted with 128-bit encryption.
The connection uses TLS 1.2.
The connection is encrypted using AES_128_CBC, with SHA256 for message authentication and ECDHE_RSA as the key exchange mechanism.
Using ECDHE_RSA, my browser and HN's server will agree upon a key to use for encryption using AES128 in CBC mode. Now in order to read what the server sends me and what I send to the server, you need to break the crypto:
1. Brute force the 128-bit key. This is probably not going to happen.
2. Find a weakness in the AES-128 algorithm or implementation that makes a brute-force search feasible (AFAIK, no such attack currently exists).
3. Mount a passive attack on ECDHE_RSA that recovers the shared key efficiently and deciphers our communications (AFAIK, no such attack currently exists).
So it's not quite as simple as recording encrypted information and obtaining the SSL keys. You need the server to actively remember the keys used for every encrypted connection, and obtain those, too. Or MITM everything and record the unencrypted data.
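You can check what a server actually negotiates yourself; here's a quick Python sketch (the suite HN negotiates with your client may differ from the Chrome dialog above):

    import socket, ssl

    # Inspect the negotiated TLS version and cipher suite for HN. With an
    # ECDHE_* key exchange, the session key is ephemeral: recording the
    # traffic and later obtaining the server's private key is not enough.
    ctx = ssl.create_default_context()
    host = "news.ycombinator.com"
    with ctx.wrap_socket(socket.create_connection((host, 443)),
                         server_hostname=host) as s:
        print(s.version())   # e.g. 'TLSv1.2'
        print(s.cipher())    # e.g. ('ECDHE-RSA-AES128-SHA256', 'TLSv1.2', 128)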
He says that for Gmail, for example, mail would be encrypted from Gmail to a Gmail user, but not from Gmail to users on other services, which it reaches through SMTP, because then the data leaving Google is not encrypted anymore.
SMTP is inherently insecure which is why you are not supposed to use e-mail for confidential information. But SSL will still protect almost every other Google property from Search to Hangouts.
Although your post did bring up another question that I never thought about. Does Google even "send" email when it goes from one Gmail user to another? That could theoretically all be handled internally, but it never crossed my mind that they wouldn't use SMTP.
SMTP can be encrypted between servers (though not end-to-end) using STARTTLS, which is getting pretty common. If the mail's being relayed with STARTTLS, you wouldn't be able to read it by tapping the wire. Instead, you'd need to get access to one of the relays, since each one decrypts the message before reencrypting it for the next hop (or you could manage to get yourself inserted as a relay).
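In code, the hop-by-hop upgrade looks something like this (the relay and the addresses are placeholders); note it protects only this hop, not the whole path:

    import smtplib

    # Opportunistic hop-by-hop encryption: the wire between this client and
    # the relay is encrypted, but the relay itself sees the message in the
    # clear before re-sending it toward the next hop.
    with smtplib.SMTP("smtp.example.com", 587) as smtp:
        smtp.ehlo()
        if smtp.has_extn("starttls"):
            smtp.starttls()      # upgrade this one hop to TLS
            smtp.ehlo()
        smtp.sendmail("alice@example.com", ["bob@example.org"],
                      "Subject: test\r\n\r\nEncrypted per hop, not end to end.")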
Looking at a random email in my Gmail account from a different Gmail user, it looks like they do use SMTP, or at least they are adding headers as if it went by SMTP. But both ends of the SMTP are at the same IP address:
X-Received: from mr.google.com ([10.229.72.135])
by 10.229.72.135 with SMTP id m7mr3900891qcj.17.1370903118607 (num_hops = 1);
Mon, 10 Jun 2013 15:25:18 -0700 (PDT)
It seems that what PRISM is has, for some people, slipped away from what was originally reported into a world of conjecture. What you are saying is possible (although I think $20 million is way too cheap), but it does not fit with what we have heard from the media and the tech industry. The NSA would also have had to crack the SSL encryption algorithms for this to be useful, and that revelation would eclipse by an order of magnitude anything else that has been reported thus far, in my opinion.
If the NSA could crack SSL encryption (and why not, they are code-breakers first), or if they could simply acquire the keys (from the ssl providers, from google, from whomever), would they let you know?
If you were in their shoes, had immunity, guys with guns, and billions of dollars, would you not find the weakest link and exploit it? For saving the children from terrorists, of course.
If the NSA had teleportation (and why not, they engage in a lot of research and development), or if they could just make people invisible, would they let you know?
Was the wording on the slide "direct access to"? I remember there being a "directly from the servers", but that wording has to be interpreted in the context of what it means to someone in the intel community (like the NSA).
And I'm 80% sure that what it means in that context is that the source of the data received has no middleman. I.e. they pull Google's records straight from Google, not from a wiretap or SIGINT, just like for Facebook, PalTalk, etc.
The wording was indeed "collection directly from the servers". [1]
Robert O'Harrow et al.'s follow-on article from the Washington Post has more details [2]:
> Intelligence community sources said that this description, although inaccurate from a technical perspective, matches the experience of analysts at the NSA. From their workstations anywhere in the world, government employees cleared for PRISM access may “task” the system and receive results from an Internet company without further interaction with the company’s staff....
> According to a more precise description contained in a classified NSA inspector general’s report, also obtained by The Post, PRISM allows “collection managers [to send] content tasking instructions directly to equipment installed at company-controlled locations,” rather than directly to company servers. The companies cannot see the queries that are sent from the NSA to the systems installed on their premises, according to sources familiar with the PRISM process.
So the Guardian appears to have done some very, very shoddy reporting. But isn't this all a distraction?
FTP or direct access: it makes no practical difference if the employee on the other end rubber-stamps requests as they come in. The real questions are how much data the NSA can get and what procedures exist to prevent the targeting of US persons.
What checks does, e.g., Google actually run on the requests? If the procedure is to email FISA@google.com and have some Google employee rubber-stamp it and stick the data on an SFTP server, the NSA effectively has unfettered access.
It's clear that you don't need a warrant for targeting a foreign person, so the employee can't check that the request came from the FISC. Even if they could, the FISC seems willing to rubber-stamp things itself. So aside from maybe checking whether the account is typically accessed from a US IP, what's Google going to do? I pick on Google specifically only because they have a reputation for trying to automate everything, including a lot of customer support, and I suspect that if they have no discretion in these cases, they may well have automated it.
Of course, maybe they didn't, maybe there are rigorous checks both at the NSA and at the receiving companies. But we don't know and we need to.
It's a pretty significant difference. Even with the rubber stamp, the US Govt is limited to what they've asked for and received.
With direct access, they could have pulled everything.
In the case of Verizon/AT&T - the Government has everything, and, in the event of a new law/govt/executive branch - they would be able to do anything with the data they had already collected.
I was very concerned that the US Govt had direct access to Google/Facebook servers. I'm not particularly worried about their ability to make authorized requests for user information.
Hopefully Verizon/AT&T will now transition to the same place Google/Facebook are in: holding entire databases of all telephone data is ripe for abuse, regardless of what claims they make about the safeguards in place.
For example: "Our legal team reviews each and every request, and frequently pushes back when requests are overly broad or don’t follow the correct process."
That does seem to preclude dragnet requests. But what is Google going to do when the NSA asks for all the mail from "foo@gmail.com"? Is the NSA going to give some justification for why they need access?
Now what happens if the NSA issues thousands of those requests? They are not broad, they are for specific people.
As to following proper procedure, it appears rather strongly that secret procedures allow the NSA to request a hell of a lot legally, and they may not provide much of a safeguard against abuse. We don't know, because the procedures are secret.
Again, the companies in question appear to have little choice in the matter, and that probably absolves them of a lot, but it doesn't mean the situation is okay in light of what they were compelled to do.
They may be lying. If there's a gag order, they may be required to lie. Or they may consider "review" to not necessarily mean human review. Or the person writing that may simply not know. We don't know what's going on at Google, and with no transparency it's hard to have a lot of trust, particularly since there are many examples of government overreach in the past.
Sure, Google and Facebook and the rest should push back, but it's clear that if the government really wants to abuse its power, they won't stop it. Given the current witch-hunt on whistleblowers, it's clear that those in power do not appreciate being questioned. Trust is fine, but verification is better.
Actual transparency, not just words? That could be free access to their internal architecture (not exactly likely) or, more plausibly, external, independent auditors.
However, I think you're focusing on the wrong party here - I think it's a lot more reasonable to request this of the government than of companies. There's no reason not to require the government to publish the general structure of what they're doing in great detail, and to let others make up their own mind if it's overstepping its bounds.
In short: I want independent parties to have free access and be allowed to verify what's going on.
The precise mechanism isn't important. The key question is: is there mass surveillance going on without warrants? And the answer appears to be yes. No one has disputed, for example, the Verizon leaks that started all this.
Technically the Verizon "leak" was a leak of a court order, so it has some legal backing. We can debate whether it's fair or in the spirit of the Constitution and the 4th Amendment to issue such a wide-reaching court order, but it's not warrantless.
"In one recent instance, the National Security Agency sent an agent to a tech company’s headquarters to monitor a suspect in a cyberattack, a lawyer representing the company said. The agent installed government-developed software on the company’s server and remained at the site for several weeks to download data to an agency laptop.
In other instances, the lawyer said, the agency seeks real-time transmission of data, which companies send digitally."
It's important to remember that there are many companies involved here. PalTalk's direct access could be a begrudgingly set-up teletype of live chat transcripts, while Apple could have provided a VPN connection and a root password. We don't yet know what access was sufficient to be part of the "direct access" Prism program.
It will be interesting to see what degree of commitment each of these companies showed to the privacy of its users. It appears that however Twitter was complying with requests, it wasn't as convenient for the NSA as Prism access...
The Guardian "backwalk" actually VALIDATES our worst fears, albeit with technical nuances to ensure legality. It's really just about the only viable way to implement this absent an SSL encryption crack (I would guess they're dumping encrypted communications from Room 641A waiting for an encryption weakness to be discovered, that way they'll have everything). As Snowden said, you could wiretap the president if you have a personal email.
Here's how it works (this is my opinion as a web developer, not verified details from the Snowden leak):
1) Fancy user interface developed by Booz Allen Hamilton. Enter email address (good choice for a unique identifier, used as unique key in many databases).
2) The backend uses curl to send a request to an NSA-certified web API at each service shown on the slide. This serves as a legally binding FISA request, regarding either a foreign agent (no court order required) or a domestic one (secret FISA court order required; see http://www.npr.org/2013/06/13/191226106/fisa-court-appears-t...), assumed to be fulfilled by the API.
3) Kick off the NSA equivalent of a gearman worker that checks the contents of the "dropbox"-like service at each company for updates.
4) Services (Google, Facebook, etc) automatically grant request without question as it is a legally binding FISA order. This saves them a ton of money and, hey, it's legal! They have some custom code that allows them to look up a user by their email address - almost guaranteed to be indexed in their database - join it to relevant data sets, and dump it to the "dropbox"-like system.
5) Fancy frontend shows progress bar, while skinny backend compresses retrieved data into zip file for easy download.
This is the most efficient, cost-effective way to do this without venturing into science fiction, i.e., storing a mirror of all the data, which would be stupid on the NSA's part. It still confirms our worst fears and answers the question of how such a program can cost "only" $20 million per year, as reported by the slides. (A toy sketch of the workflow follows below.)
The trick is in the legal framework, not the technical details. This is why the FISA courts are secret.
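To make steps 2-4 concrete, here's a toy version of the tasking loop. To be clear, every endpoint, hostname, and field name below is invented for illustration; none of it comes from the slides:

    import json, urllib.request

    PROVIDERS = ["google", "facebook", "apple"]       # per the slide

    def task(selector):
        """Send a tasking request for one selector (an email address) to each
        provider's hypothetical FISA endpoint; collect dropbox locations."""
        dropboxes = []
        for p in PROVIDERS:
            req = urllib.request.Request(
                "https://fisa-api.%s.example/task" % p,   # invented endpoint
                data=json.dumps({"selector": selector}).encode(),
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                dropboxes.append(json.load(resp)["dropbox_url"])  # invented field
        return dropboxes

    # A worker would then poll each dropbox URL for new deposits and zip
    # the results for the analyst's progress-bar front end.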
Except that Google have stated that they have humans look into each request, and that there is no dropbox. So either Google's lying, or The Guardian/Edward Snowden didn't actually understand what was going on (beyond the obviously true and worrying collusion with AT&T and Verizon).
>Google have stated that they have humans look into each request, and that there is no dropbox
I've been looking for this since I read your comment but can't seem to find it. Perhaps you could point me to the article to which you're referring?
If true, that would be great, but to fit the constraints set by the information we do know, it is possible that most companies have drop boxes while Google opted out. Do you recall what Facebook said about human review and the existence of a drop box?
Edit: I also wonder how many requests are appealed. The human overseers you're referring to may be little more than mechanical turks, with the data returned in near-real-time.
I'm unclear on why the "drop box" matters, so long as it's being filled manually after expert review. In fact, a secure drop box is better than the alternative, which would be ad hoc distribution of sensitive information over the public Internet.
The problem is a lack of oversight. FISA courts are secret and allegedly a rubber-stamp process (see the link in my original post). If the program went through normal channels, it would be okay.
In your professional experience as a web developer, maybe you could square this assertion:
Services (Google, Facebook, etc) automatically grant request without question as it is a legally binding FISA order
... with the court order the NYT linked to from FAS, where Yahoo is seen to go several rounds with the FISC after having received a lawful directive from NSA to initiate surveillance?
From exactly what evidence do you argue that Google (or any other Internet company) automatically approves all FISA requests?
A little snark is probably the politest thing you're going to hear if you're supporting your speculation that
4) Services (Google, Facebook, etc) automatically grant request without question as it is a legally binding FISA order
with nothing more than the fact that you're a professional web developer. Replace that with the equally valid 'amateur lion-tamer' just to get a more disinterested sense of how it sounds.
>In a secret court in Washington, Yahoo’s top lawyers made their case. The government had sought help in spying on certain foreign users, without a warrant, and Yahoo had refused, saying the broad requests were unconstitutional.
Related
>The judges disagreed. That left Yahoo two choices: Hand over the data or break the law.
>So Yahoo became part of the National Security Agency’s secret Internet surveillance program, Prism, according to leaked N.S.A. documents, as did seven other Internet companies.
From the court order:
"After a careful calibration of this balance and
consideration of the myriad of legal issues presented, we affirm the lower court's determinations that the directives at issue are lawful and that compliance with them is obligatory."
Seems to me that tptacek is supporting my hypothesis.
Are they? The Guardian has selectively released three or four slides out of 41. You can split hairs and try to come up with some contrived reading of the article that could be argued as accurate, but I think in reality everyone understood the article to be claiming that the NSA had unfettered access to the servers of tech companies.
They're reporters, not clairvoyants. Their job is to tell you what the docs say. They told you, and they were right to tell you, because the information looked very interesting. You jumped to conclusions. And so they now have to apologize because you weren't as careful as they were?
They did their job as well as they could. They had what appeared to be interesting data, but instead of just saying "wow, XXX" they were very careful to not go beyond what was said.
I jumped to conclusions too. But then I realised I was probably wrong. It happens. But I don't then blame the reporters, who did their job reasonably well. I made the mistake, not them.
I think that Greenwald should've been more skeptical, regardless of how technically savvy he may or may not be. Look at the slide in question: in any other organization, that slide would be interpreted as one aimed at the newbies/idiots in the company. It pretty much literally says, using brightly colored bubbles: "Hey people, remember that we have two systems for collecting data, so please use both of them. We even gave one of them an easy-to-remember acronym."
Isn't it possible that a slide written for newbies (within NSA, or those who work with the NSA) might have also been written by someone who is not an all-star in technical communication?
The Bob Cesca item this Mediaite post is building on makes the case that Greenwald saw what he wanted to see:
I’m going to put it all out there and let the chips fall where they may: I’m increasingly convinced that Glenn Greenwald’s reporting on the NSA story is tainted by his well-known agenda, leading him to make broad claims for the purposes of inciting outrage.
Greenwald's twitter feed over the last few days confirms this (in my mind): many tweets praising those who are praising Snowden, comparing him to Ellsberg, noting how a survey shows X% of Americans think Snowden is a hero, etc. It's not really what I would have expected of a journalist at a major newspaper.
How would access to an FTP server that serves all the data one wants not be considered "direct"? I really doubt NSA analysts need shell access to do their dirty work...
The point is that the companies decide what production data to deposit on these servers after their legal teams have reviewed the FISA requests. According to the tech companies, these requests are individually reviewed, target specific individuals, and are narrow in scope. And the companies push back when requests are broad.
This detail is a major one. This would mean the NSA cannot simply log on to Facebook and query for whatever they want. It means the system is simply a way for companies to comply with FISA requests, something that they were already required to do.
So: even though Twitter doesn't use PRISM, there is really no difference between what the NSA can access on Twitter and what they can access on Facebook. Twitter just complies through some other manner.
I think that's a bit generous. They look over the request, but if the result set is too large it can't reasonably be vetted by lawyers. That is, they have a look at it, but we don't know how rigorous they are or could be.
Yeah, I think the main point is that this is simply a way of responding to FISA requests, which are completely independent of PRISM. In other words, there is no new news here. The debate over FISA itself is certainly quite valid.
Google has claimed these requests are infrequent and narrowly focused, and they have requested permission from the government to publish some statistics. I hope they get it.
Because the company has to provide the data (or not). The company can (must) decide to exclude data that doesn't have a FISA warrant, for example.
The threat to this model is the idea that a FISA warrant might not be required until monitoring has gone on for 72 hours, but that was just as much of a threat with the prior model, where the data were manually extracted and sent by the company instead of using the automated dropbox setup.
This hypothesis is certainly plausible, but I haven't seen any proof for it besides the self-serving statements of people who aren't under oath and who anyway aren't necessarily in a position to know for sure. Which isn't much, as proof goes.
Was Snowden under oath? Did well-known concepts such as Occam's Razor suddenly stop applying because a contractor with biases and motives of his own decided to drop a bunch of data on a WaPo reporter with a 72 hour deadline to publish?
Regardless of what prism is, if it's a reasonable system with oversight and consequences for those abusing their power, why is it secret?
There's nothing necessarily wrong with something like PRISM (especially since we don't even know what it is), but the choice about whether it's right or wrong belongs to the people as a whole, most certainly not to the few people who happened to have grabbed that power.
What's the point of democracy and accountability if it's unclear what people are accountable for or what you're voting on? This kind of system should never have been introduced in secrecy.
It's secret because of OPSEC, the same reason that essentially everything else an intelligence agency does is secret.
Does the public go down and tell the Admirals how to staff a warship? Or what controls to use when deciding to launch weapons? Those are life and death decisions where the military essentially handles its own oversight with Congress and government civil servants involved at the higher levels to handle public interest in accountability and oversight. Yet I don't see the public up in arms about that.
Now, if the people say they don't like Prism and don't want it then the NSA should gut it; that's the right of the people to decide.
But I wish we wouldn't be so quick to jump to the idea that the public must personally audit and review all such government programs as a rule, because as far as I can tell from the seats I've sat in, the public has never actually believed that in general, and is normally quite content to allow their Congressmen and our shared values as citizens (for those actually doing the work) to provide that oversight.
Exactly. There's enough wiggle room in non-technical terms like "direct access" that both the accusation and the denials could be true. Does it mean root access, or SQL access? And if PRISM involved a search API (a thin wrapper around the database), would that count?
Just to recap, PRISM looks like a web front-end to a shitload of other systems. This is why a relative noob like Snowden was working on it -- it was web work. It's also why he had such a broad overview of what was going on. He knew the direction and capabilities, if not the details. He was probably just hooking up APIs somewhere.
That's just guesswork, of course. I think it's easy for us (and the Guardian) to imply a lot of detail where none exists. I don't think the Guardian has anything to apologize for. It'd be great if we all got a better technical view of these systems. But asking to receive it third-hand through a leaker and a non-technical reporter is probably a bit much.
The Guardian could at least "apologize" for using "scare quotes" in the process of walking back their claim.
That's underhanded for any media publication that aspires to the idea of "journalism".
In addition, they could at least have mentioned that other theories emerged about what the slides they presented might actually mean, so that people would be aware there were other valid conclusions that could be drawn, especially by those with tech and government experience.
> The Guardian could at least "apologize" for using "scare quotes" in the process of walking back their claim.
Seems to me that the claim from the article re scare quotes is nonsense.
There are no scare quotes in the guardian piece. There are, however, lots of quotes.
The things the article claims are scare quotes (the relevant passage: "...That has allowed the companies to deny that there is “direct or indirect” NSA access, to deny that there is a “back door” to their systems, and that they only comply with “legal” requests...") are, if you actually read the Guardian's article and see the context, clearly terms quoted from the companies' denials (given more fully earlier in the article); the point of the passage is to explain why the companies' denials were true given the specific terms used.
The Guardian isn't walking back anything. The submitted article just takes another Guardian article about a different aspect of the story (article title: "NSA scandal: Microsoft and Twitter join calls to disclose data requests") and proceeds to call it a walk-back, which it is not.
No. Rather than issue a direct correction to their initial story which so falsely represented the facts on the ground that all of the US's largest internet companies simultaneously issued categorical denials, the Guardian ran another article that redefined the term "direct access", knowing (as they had to have) that their own original interpretation of the term, in black and white in Greenwald's original reporting, had been repeated as fact by numerous major media outlets.
British papers tend to do that not as "scare quotes" but because they often quote someone else's description of an event in their headlines, or refer to it in the article.
Simple explanation of the original error -- the person who put together the slide deck wasn't exactly a dev on the technology itself.
It does call into doubt the credibility of Snowden's OPINIONS about the scope of NSA technology and procedures.
The technology error -- or more precisely the lack of rapid correction -- isn't one of Greenwald's finer moments, but in general he's done such an awesome job on this issue that I give him a pass on that blunder.
And by the way -- $20 million budget always meant PRISM was only a small part of the whole. Not realizing that was actually a dumber error on Greenwald's part than him taking "direct access" at face value.
I can't help but think about the following point when looking at this. Do people really care whether access was direct or indirect if their own personal records have all been captured and/or seen/analyzed? It sure wouldn't matter to me how they pried their way in; it would matter to me that they had done it.
It seems relatively obvious to me that, were I interested in partaking in a bit of terrorism, using Skype, Facebook, or Gmail would be a stupid idea. I would have deeply suspected as much before PRISM, and even more so since Snowdengate.
The idea that this technology is being used for anything other than mass control is bullshit. I don't want to sound like a member of the tinfoil-hat brigade, but sadly, I just don't see terrorists using any of the resources offered by the US corporations named in the PRISM slides. I do see a very dangerous threat to democracy.
An interesting article was just published with the headline "Greenwald gives away the game on his PRISM claim." He is continuing to walk back his original claims of direct access.
I don't know, but it appears the message brigade is out in force now. Our initial shock is gone, and we are eager for more narrative. Now is the opportunity to cast doubt, to reframe, to discredit.
bool is_NSA_authorized(int fisa_id) const
{
    return true;  // Our hands are tied by FAA 702.
                  // We really don't want to be doing this.
                  // But we have no choice; it's "legal".
                  // I hope the NSA has some procedures
                  // to make sure this isn't abused.
                  // Otherwise they could get anything
                  // they want from us.
}
It may well have been. And that's not their fault if their hands are legally tied, especially since they appear to be trying to actually disclose what they have to hand over.
My comment wasn't meant as a dig at Google; they are forced to comply with whatever the law actually authorizes. My point was that we need to know how much they can actually be forced to hand over and what checks there are on that power being abused.
I think it's possible that Steve Gibson is right on this week's Security Now podcast. He suggests that perhaps the NSA has tapped into the ISPs just upstream of the 9 companies listed. Then they copy all of the data coming into and out of the fiber going to, say, the Google datacenters. This fits with the name Prism, and is corroborated by the 2006 revelation about the NSA-controlled room in the AT&T building in San Francisco.
Edit: this does rely on the NSA having access to routers, not servers, so it still isn't exactly what they said
While certainly a possibility, that really wouldn't fit in with everything else we've heard about PRISM, including what the tech companies have said. It also wouldn't allow the NSA to view data transferred over SSL (which I believe is all gmail and Facebook traffic, at least for me) -- unless the NSA has broken the encryption algorithms.