Hacker News
A Case That Has Microsoft, Apple and Amazon Agreeing (bloomberg.com)
245 points by andore_jr on Sept 2, 2015 | 182 comments



"There’s irony in any tech company confronting the government on privacy matters, considering how much heat many take for mining their own customer information and using it for advertising and other profitable purposes."

See, I don't find this very ironic. In fact, my only real issue with data mining and analysis by these sorts of companies is the way governments can demand this info without my approval.

If Microsoft or Google or Apple or Amazon offer me a service and state that "hey, we'll provide this service for no cash outlay but data you submit to our servers will be analyzed to tailor search results, advertising, and other behavior to your usage" I can opt into that knowing that I'm trading targeted ads for free email or hosting or whatever. If I don't think that's a good deal, I don't use the service. If I think "OK, ads are a fair price for this stuff" then again, I'm cool with that.

But just because I agree to let Google read my location to send me traffic warnings before heading out to work doesn't mean I want the FBI to grab that data without my knowledge so they can determine if I might be a troublemaker. Just because I agree to let Amazon use my Amazon searches to suggest other products I might want doesn't mean I want the DEA demanding that info to decide if the gardening gear I purchased was for tomatoes or growing cannabis.

I'm perfectly aware that you pay for the things you get, whether it's directly with cash or indirectly from advertisers who pay for access to your eyeballs. Those are things I can consent to or decline. But when people with guns and the ability to throw me in jail can demand access to that info without my knowledge, I'm no longer agreeing to the same thing.

It's like signing a contract where someone else has the ability to change the fine print after I've signed it.


>It's like signing a contract where someone else has the ability to change the fine print after I've signed it.

I don't disagree with your points about government access to this cloud information ... but this point would carry a lot more weight if these services didn't routinely get away with unilaterally changing their privacy terms.

I also really doubt people have well-formed beliefs about exactly which privacy rights they are giving up for these services, and government agents would probably be in a lot more trouble if they misled you about which rights you were waiving.


Last I checked, the government gets in zero trouble when misleading you about which rights you are waiving. I thought that was quite clear by now?



Sounds like you're saying that frequent privacy rights changes and people not understanding their own rights somehow justify this?


> But just because I agree to let Google read my location to send me traffic warnings before heading out to work doesn't mean I want the FBI to grab that data without my knowledge so they can determine if I might be a troublemaker

Constitutionally, privacy is a pretty cut-and-dry concept. Information is either private or not. Private information isn't "information I don't want the government to have"; it's "information I don't want anyone else to have."

I'm not saying we couldn't restructure the law to express what you're talking about. But it's not just a matter of extending our existing principles to this new situation. You're turning the concept of "privacy" on its head by saying that you have this information that's "private" but that at the same time some third-party entity (and its employees) actively sifts through to target advertising to you.


> Constitutionally, privacy is a pretty cut-and-dry concept. Private information isn't "information I don't want the government to have"; it's "information I don't want anyone else to have."

You are incorrect. Your line of thinking runs in direct opposition to Roe v. Wade.

You can share information with your doctor that you specifically do not want the government to have. There's nothing special about doctors, in a constitutional sense. They are a third party providing a service and it is most certainly possible to have shared information with a third party that is still private.


I'm talking about "privacy" in the 4th amendment sense.

The Griswold "privacy" line of cases is pretty much totally inapplicable outside the reproduction/sexual activity/family planning context, largely because they conjure up a "right to privacy" that doesn't really exist in the Constitution.


I'm still not sure about your claim that private information consists _only_ of "information I don't want anyone else to have."

You can share information with another trusted party and still have an expectation of privacy (such that the government needs a warrant to compel access to that information). Conversely, you can have information that you don't share with another living soul, and the government can also compel access to that information - e.g. a warrant to search your private belongings. The standard for how the government can access your information doesn't automatically change depending on whether you've shared it with 0 or > 0 people.


> You can share information with your doctor that you specifically do not want the government to have.

Sure, you can—and your doctor will reveal that information to the government if required to do so by mandatory-reporting laws.


Confirmed (although there may be state law protections, depending on the state). There are similar issues with attorney, accountant, priest, psychotherapist, etc.[1]

Spousal privilege is US federal common law and is complicated because many states have laws that either support or override the federal version.[2]

[1] https://en.wikipedia.org/wiki/Privilege_%28evidence%29 [2] https://en.wikipedia.org/wiki/Spousal_privilege#Communicatio...


You're talking about secrecy. Privacy is not secrecy, and I don't think you understand what privacy is.

Are you homosexual or transgender? Did you ever have an extramarital affair? Do you have HIV, or diabetes? Do you have a criminal record? Are you a Muslim or a Jew? Have you been abused as a child? Do you have a history of drug usage? Are you in a lower caste? Are you a war veteran? Have you ever spoken against your government to your peers? Have you ever gotten drunk at parties? If you have kids, have you ever taken pictures of them naked?

Depending on context, like the community you live in or the mood of your government, there are plenty of criteria for discrimination. In the right context, some of these can get you fired, some can get you killed.

And privacy is not secrecy, but rather the right to not be monitored. By not being monitored, even though you cannot hide your identity, you can control what facets of yourself people see. Privacy is about having control. On the other hand by being monitored, you're on file and likely to get fucked in the future by traits that today may be innocuous, as history shows. And you can also be blackmailed. And skipping the doomsday scenarios, most government institutions are filled with incompetents and do you really, really trust those incompetents to handle that data? I could tell you stories.


All of the things you said are interesting but have little to do with the privacy protections in the US Constitution.


And which amendment would those privacy protections be located in?


> Constitutionally, privacy is a pretty cut-and-dry concept.

That's true. Privacy is anything you keep on your own person or in your own property.

Emails, however, are stored on another person's computer and therefore have no expectation of privacy. (Some 1980s case, IIRC.)

Constitutionally, privacy is a cut-and-dry concept for the late 1700s, when the Bill of Rights was written. The fact of the matter is, the people who wrote the Constitution had no concept of "Cloud Computing" or "Computing" in general.

So it falls to today's politicians and judges to figure out what to do in these cases. And that isn't an easy job.


Mail is stored in another person's truck, and yet privacy applies.


Unfortunately, in the 1980s, when the last Federal Cast happened, emails were stored on your personal computer inside your house.

POP was the primary protocol back then, not IMAP; IMAP hadn't been invented yet. The law lags, so even today in 2015 it is applied as if emails were stored on personal computers.

Which means emails are considered "abandoned property" if left on an external computer for more than 6 months, and abandoned property is not subject to privacy protections.


Man, I just spent like 5 minutes looking up "Federal Cast", since it sounded fascinating. Then I just realized, maybe you meant "Federal Case"?


Crap, too late to edit now. Lol. Sorry about that.


Wow, I had no idea this was the case. Thanks for sharing. I seem to remember that you're a lawyer so I'll have to assume you know what you're talking about. :)

> some third-party entity (and its employees) actively sifts through

Just to be clear, is this condition important to your statement? That is, can you store information with a third-party entity with the assumption that it's constitutionally "private" there if you don't let them sift through it? Say, a bank safe deposit box, or some sort of encrypted enterprise cloud storage, or cperciva's tarsnap?


I admit my post was more a personal complaint/rant than a legal argument. I guess my concern isn't so much about privacy as it is about the ability (or inability) to decide on at least a general level where to send information.

Put way too simplistically, if someone said to me:

"Would you let my widget track where you drive in exchange for a discount on fuel based on mileage?" Maybe I would say yes. Maybe I would say no.

How about if the offer also said that "the way we pay for this is by scanning the data we collect. That way we can sell ad space to businesses near your common driving areas. They never get your actual data but a shop on your morning commute will pay us to show their ads to people that may pass by." Maybe I would still agree. Or maybe I'd think, "eh....I dunno. Raw data for discounts is one thing but analyzing it for third parties seems sketchy." Again, my choice in general even if I don't ever know the exact workings of the software and transactions or any of the third party businesses buying ad space.

I think of it more like contract law and knowing (at least broadly) what you're agreeing to before you sign.

To continue this ludicrously oversimplified example, let's say I agree to the terms because on one hand, all this targeted advertising seems a little tacky or creepy but on the other hand, it really does beat getting blasted with ads for stores in other towns or items I'd never buy. Plus hey...the discounts on gas mean I'm spending $50 less every month!

Well, without my knowledge, the local Drug Enforcement Agency has decided that the best way to stop trafficking is to either secretly demand all the data collected by this ads-for-fuel service or just install some dodgy new data collection towers that connect to the tracking widget and intercept it all without anyone's knowledge.

That is the point where I have a problem. It's not that I expect complete privacy in public. If law enforcement wants to get a warrant to track my car because they think I'm a criminal suspect, that's something that can happen. But to demand (or just take) mass amounts of data that was only volunteered for other uses is just not cool.

Maybe my animosity comes from thinking that if the state wants all of this info, they ought to develop their own ways of getting it, rather than just taking advantage of legit services that people choose to use. Or maybe if they just came out and said "we're putting a tracker on everyone's car" people would go apeshit.

It's like if you can't convince everyone to submit to tracking by the state/law enforcement, just wait until someone else develops a service everyone wants to sign up for, then demand or steal their data.


> I admit my post was more a personal complaint/rant than a legal argument. I guess my concern isn't so much about privacy as it is about the ability (or inability) to decide on at least a general level where to send information.

Just to clarify, I'm not trying to make it into a legal argument. I think your point is well-taken and widely shared. I'm trying to explain what forces are in play and why it's so hard to make progress on what is a pretty prevalent vision of privacy in the tech community. It's not just a tweak; we're talking about retooling some pretty fundamental assumptions about the powers of government.


> Constitutionally, privacy is a pretty cut-and-dry concept

You clearly don't understand the US Constitution or existing US law. Saying it's cut-and-dry doesn't make it so.

> Information is either private or not

I'm sorry, I made a mistake. You don't understand privacy as a concept at all. Privacy is about access controls.

What's more, all this frothing about privacy is not the issue under consideration. The issue is whether extranational data is subject to US law because the legal entity in control of the data is a US company. Other countries have similar requirements (want to run a cloud service in China? Set up shop in a Chinese data center, no AWS for you), and it makes sense to me as a nationalistic interest. What's more, you have these companies deflecting HARD with "timeless values should endure; privacy is a timeless value," which is a red flag. I don't care that the French have a firm grasp of privacy; this is a US issue. These companies are free to leave the US if they really believe this is wrong (they won't).


Please be civil.

BTW, the person you're replying to is a lawyer. Are you?


>BTW, the person you're replying to is a lawyer. Are you?

I'm a unix systems administrator. That doesn't mean everything I say about unix systems is right.

I'm not saying that said lawyer is incorrect about anything, because I honestly have no idea - but if we're asking people to be civil, we should also probably refrain from logical fallacies and defend him on the merit of his statement, and not his job title.


Hot dang, that last line is insightful.

Some people will argue that whenever you let someone have data about yourself, you're implicitly giving access to practically everyone by virtue of often imperfect security.

I believe that those people are full of shit.


> whenever you let someone have data about yourself, you're implicitly giving access to practically everyone

I believe there is truth in that. The way I see it, you don't have much control over information entered into any computer. You can practice good security hygiene, which means you can trust information on your personal network, but really that means you trust your combination of OS vendor, browser, antivirus, router manufacturer, etc. A hole in any one (or more likely a combination) of these could lead to someone gaining access to your data. This applies even more to info stored with 3rd parties or data sent over the net.

Then you're at the mercy of the people who stole your data, they could very well release it to everyone, or sell to the highest bidder (and you wouldn't even know).


If we acknowledge contagion by imperfect security, then please acknowledge by law the right to enter false identities on Facebook and false information by electronic means, as a means of protecting our privacy.


Why should there be a law about this? If you don't like Facebook's rules, you don't have to use it. Nobody is forcing you to get an account.


OH MAN.

No, no, no.

This is like saying, "Nobody is FORCING you to have a mailing address, if you don't like the Postal Service, just don't have an address!"

Sure, you CAN do so, but if you do, you forego significant opportunities and can't make use of critical infrastructure - and indeed, in that example, there are places where it is literally illegal not to have an address. Facebook is approaching that level of ubiquity and 'expectedness' in the U.S., at least.

Please, please, please don't try to enforce the 'take it or leave it' mentality for ubiquitous software services. It is every person's right, privilege, and DUTY to complain vociferously when companies are stupid and implement stupid rules- hopefully resulting in the clarifying, changing, or removing of said stupid rules.


That's like saying that, by letting the bank hold your money, you're implicitly giving bank robbers the access to the money, by virtue of imperfect security.

And since your own house's security is also imperfect, you are also giving everyone access to your house and properties.


Not that I agree with them, but this analogy doesn't hold. What if the bank in question is in a nation where criminals vastly outnumber and overpower public and private security?

The matter really is quite a bit different when dealing with information than with tangibles (or mutable information representing scarce quantities) because of how easily information can be copied and how you can't recover privacy. (e.g. how silly it is to say that "Our legal counsel is working to ensure that all copies of these photos are deleted from the internet.")

If you start with the reasonable premises of "Information can be copied" "I cannot trust the information security of a third party to be perfect" you only need to add a dash of absolutism ("imperfect security may as well be no security") to get to "Information I cannot risk certain third parties having is information I cannot risk any third parties having". I don't think you're advocating for swapping that with "imperfect security is all that can possibly exist so is by definition good enough" but in the case you are, that's equally absolute and incorrect.

Absolutism with information-theoretical assessment of security risk is pretty common (for good reason) in corners of the industry, so it should be expected that in discussions people will express that kind of position (on top of the human tendency for absolutism in general).

The difference is simply that when making practical choices about behavior, you must employ cost benefit analysis with expected values.

Will I avoid using services that track my usage? No. Would I while I'm doing something illegal? Probably, if it's not very inconvenient. Will I encrypt a private key before I upload it to my backup host? Yes. Will I revoke that key if I find out my backup host is breached? Probably. I'd have to weigh how likely I think it is that attackers have the key, will expend effort to brute-force it, and will succeed, against the pain of rotating that key.
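For what it's worth, that encrypt-before-upload step can be sketched with stock OpenSSL. This is a hypothetical, minimal example: the filenames and passphrase are made up, and a real setup would pull the passphrase from a key manager rather than an inline variable.

```shell
# Stand-in for a real private key file (illustrative only).
printf 'fake private key material\n' > demo_key
BACKUP_PASSPHRASE='correct horse battery staple'

# Encrypt locally with a passphrase-derived key (PBKDF2) before upload,
# so the backup host only ever sees ciphertext:
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in demo_key -out demo_key.enc \
  -pass pass:"$BACKUP_PASSPHRASE"

# Only demo_key.enc would be uploaded. To restore it later:
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in demo_key.enc -out demo_key.restored \
  -pass pass:"$BACKUP_PASSPHRASE"

cmp demo_key demo_key.restored && echo "round trip OK"
```

The point is simply that the decryption capability never leaves your machine, so a breach of the backup host forces the math described above (how likely is a successful brute force?) rather than an automatic key rotation.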


Actually, my problem with giving someone else data is that apparently US law considers data given to a third-party to be essentially a free for all for law enforcement agencies. There is zero protection for anything you have at one point shared with a third-party company, even if a device did the sharing.


As long as there is a data silo that is not under your explicit control, you have indeed implicitly granted access to all parties willing to invest enough to break into that silo. Where exactly is that full of shit? The problem isn't the implicit agreements. The problem is lack of control and centralization.

P.S.: I'd also tone down the language. We're having a discussion and calling someone else's opinion "full of shit" defeats the purpose of having the discussion in the first place.


Who says that you can protect your own data silo better than the experts at company X against these 'parties willing to invest enough to break into them'? In the general case, people are notoriously awful at security.

Perhaps you, dkarapetyan, can keep a secure silo, but what about everyone else? Do they have no data rights just because they haven't studied cryptography for 10 years?


Why is it that data under your control is immune to third parties "willing to invest enough to break [in]", but data trusted to an otherwise reputable service provider is not?


If you're the government, it is easier to milk the relationship you have with telcos and the tech industry to get them to hand over data voluntarily or through the courts than it is to seek out a warrant to confiscate physical property in a suspect's possession, which makes it obvious that they're being investigated.


I apologize for the strong language; my intention was not to quash discussion but to stir it up. I have many thoughts on the topic but only a short time to articulate any of them - a spirited "I disagree!" was all I could budget time for at that moment. :)

I argue that part of the solution to this problem is the need for rules and standard practices. And more so: the big problem and big deal here isn't even my own data, but rather the data other people gather on me. It's not that I want to keep my trove of My Little Pony erotic fanfiction secret. When I do a Google search for incendiary dildos, I'm perfectly fine with Google using my interest to target ads to me (they can inform me on the latest incendiary dildo technology!). It's that I don't want my employer, the IRS, FBI, CIA, etc. to use that information to employ some hodge-podge machine learning algorithm on a dataset with my interest in incendiary dildos included (and therefore decide that they'll target me, because their algorithm indicates a correlation between incendiary dildo interest and terrorism).

In other words, no amount of data siloing will protect you from this, dkarapetyan; it's your movements and actions in the digital world - and companies' observations of them - that we're talking about. There's simply no way (that I know of) for me to both communicate my search terms to Google and receive services, have them be able to use that data to target ads to me, but also have them not store that data in a way that a government entity could demand it from them. Of course, we're nervous about hackers from, say, drug cartels doing something similar, but we depend on responsible storage practices to stop that: we can presently stop the thief at the window safely enough, but against the guy who shows up at the door demanding ransom or our family dies, we've got nothing. In that metaphor, because Google has to have access to the data in order to make it useful (and make money with it), someone with any kind of legal authority can send Google's executives to jail until they comply and hand over the data.

The ransom metaphor may represent the crux of the whole debate. There will always be some risk of third parties gaining access to data, but some people use the 'implicit' argument to assert that, because a bad actor could gain access to the data, a government entity should be allowed to access/collect the data legally without a warrant. (That, by the way, is a more exact representation of the opinion that I called 'full of shit'.) The REAL risk, and the one where the ransom metaphor applies, is an entity that can legally request such information and misuse it. Google, Microsoft, etc. can fragment and prevent most such truly valuable and extensive data collections from being breached fully, but they are completely powerless against an aggressive intelligence agency insisting they pass on the data (and keep quiet about it) "or else".

Because the companies derive great value from our data, they want to keep it safe and use it for approved purposes- and license it or sell it to others who will do the same. We (and they) have tools to keep it reasonably safe for such purposes (not perfectly, but primarily with the idea that it would take many breaches to access all the data for even a portion of the customer database system). So we give them a meager amount of trust! But all those protections are NULL if the legal system allows government entities to legitimately demand all that data without probable cause.

Bah. Forgive the length. Brevity takes time, but especially where you'd asked me not to declare you full of shit, I wanted to respond. I appreciate your response, even though I disagree with you.


Do you implicitly grant all parties willing to invest enough to break into the bank all the money you deposit in the bank?


A lot of people trusted Apple with their photos, and Apple did not release any photos, but that doesn't really matter because they were stolen.

A lot of people trusted Ashley Madison, and Ashley Madison did not release any data, but that doesn't really matter because it was also stolen.

Given that nobody has invented perfect security, it is reasonable to assume that any data anywhere can be stolen, and therefore you have no control over who accesses it.


Yeah, that opt-out theory was nice some years ago. Now it's not that simple anymore. Most of the time you just have no choice.

What right has Amazon to do this? They get their share from the fact that I buy stuff through them. So I pay and still get analyzed. There are many more services that work like that. You can get around some of them by spending much money or acquiring special skills that would allow you to run things on your own, but all that will remain a solution for the few. The majority is trapped because data mining became natural. The legend of "free for data" now just applies to everyone. It went so far that some advertise with the fact that they DON'T data mine.

Privacy as a product.

Also, it's not like this majority ever understood what they were saying yes to. You know, that long awful text nobody reads.

The government looks very small to me if I consider all the other data mining parties currently active, even with what Snowden taught us. This fear that now somebody really connects the dots is a healthy one. Strangely, the revelations did not lead to the same conclusions regarding the private sector. Good marketing.

My workaround to this: I teach people how to create and manage useful fake identities.


Not useful enough to be able to acquire a bank account, which requires a driver's license or Social Security number, which in turn requires a birth certificate.

And as it turns out, it seems acquiring any of these things for the purposes of a false identity is illegal. So much for remaining anonymous online just because I'd like to be, not because I need to be.


Yeah...well. I was talking about normal people like my friends and family ;)


The government has this ability with nearly all other relationships you have. So much so that they have codified the few that it doesn't apply to. We put lawyers, doctors, clergy and spouses in a special category, but others can be required to divulge any information they have on you with the appropriate order. (What's appropriate is another discussion.)

Ultimately, realize that any dealing you have is open to the government if they so desire and make that part of your decision making process.


"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

The US Government does _not_ have the legal authority to arbitrarily collect or seize data without a warrant specifying exactly where and what they're looking for.

>So much so that they have codified the few that it doesn't apply to.

There are a few exceptions on whom you can get warrants for, but we're not complaining about warrants with probable cause.

The complaint is that our "persons, houses, papers, and effects" are being violated without probable cause and without "particularly describing the place to be searched, and the persons or things to be seized".


The Fourth Amendment seems to be pretty murky when it comes to data about you belonging to and stored with other entities.


The "3rd party doctrine" is a bullshit interpretation and it needs to die. When I go through the trouble of coming up with a long, complicated password and set up two-factor authentication, it's pretty obvious that I have "an expectation of privacy". Maybe what the tech companies need to do is have a checkmark on sign up that says "I expect my data to be private".


My take: The third party doctrine is a natural consequence of the use of "their" in the 4th amendment. People have a fourth amendment right to "their" persons, houses, papers, and effects. Bits on Google's hard drives aren't "theirs" they're Google's. You have no property right in those bits. If Google loses them, you can't sue them for negligence. If they change their TOS and monetize those bits, you have no recourse.[1]

It wouldn't even be a hard issue if the Supreme Court hadn't injected this "expectation of privacy" concept that appears nowhere in the text of the amendment.

[1] From Google's TOS: "When you upload, submit, store, send or receive content to or through our Services, you give Google (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content."


Google's TOS negates your point. Google requires such a license precisely because those works belong to you. The license (which does not transfer ownership) gives Google the right to do all the things listed.

Of course, a lot of information is actually generated by Google, and even though it's about you (search history, etc.), it doesn't belong to you.


It sounds like a joke, but would this work?


"He clicked the check box... there's nothing we can do here." *images entire backup anyway*


As a European citizen, I have a different view on this issue. While I do trust Google/Microsoft with some of my data, and I kind of trust my own government with my data, I don't trust the U.S. government one bit, especially its judicial system.

Luckily, I don't normally need to account for that, and I'd rather keep it like that. But this is like somebody at the other end of the planet changing the fine print of the contract I made with a local subsidiary of the American giant. The contract was between me and a local vendor, and suddenly we have the American government intervening?


I'm in the UK; no idea where you are. I don't trust my government one bit with my data, or anyone else's. The US so-called justice system literally scares the hell out of me. From a UK perspective, it's terrifying. Unfortunately for UK subjects, the UK government willingly hands our data over to the US government, and equally willingly allows us to be extradited on little more than a request. Being a European citizen offers us in the UK very little protection from the US.

As others have said, I have very little problem with the likes of Google and MS having a lot of data on me, as long as they use it fairly and reasonably, and I'm kept informed. Of course, we do have the problem of government access to that data.


Out of curiosity, which aspect of the US legal system scares you so much? I was under the impression that the UK has a rather similar legal system in many respects, and has a number of problems as well. Personally, I am a little scared of the French and German legal systems. And last year I read a very scary article about the Swedish legal system on HN.

The US is doing a lot of things wrong these days when it comes to privacy, but compared to the EU, I find it to be about the same.


By European standards prison sentences in the US are very very long. There is also a perception that in the US the system is loaded against people who don't have enough money to pay for a good lawyer.


I am not very familiar with penalties in the EU, but I thought that on all the basics, like fraud, robbery, burglary, assault, murder, etc., the penalties were comparable to the US. Where the US does go insane is when it comes to drug and gun crimes. The NRA lobbied for very stiff penalties for anyone who uses a gun in a crime, at least in CA. The drugs, on the other hand, are just a national obsession for people here in the US, so I get that criticism. But on all the basics, I think the EU and US are at parity. The Federal prosecutors are quite insane, but Obama has been trying to fix that, and many Federal judges are quite reasonable. But again, if the US does need to adjust something, it's the Federal system.

As for money and lawyers, the US, or at least CA, has excellent public defenders. I might be biased, because a few of my good friends are public defenders. But also the smartest man I ever met, at least when it comes to law, was head Public Defender in San Diego. A public defender job in a major city is a very competitive position. That's my first-hand experience in CA. Not sure about other states.


According to the prison documentaries I've seen, EU penalties are in general much lower (like half to one third) than US ones for the named crimes. And we don't have death sentences over here, so that changes perception a lot.


For instance, here are the robbery sentencing guidelines in the UK [1] and CA [2]. As you can see, penalties are very comparable. Robbery in CA is 2, 3, or 5 years or 3-9 years, depending on the degree, while in the UK it's 2-7 years or 7-12 years, also depending on the degree. The first level of robbery in the UK is more akin to Petty or Grand Theft in CA. In CA there is also a GBH enhancement, which will bump you up to the 12 years, just like in the UK.

Obviously, I have not done the comparison for all the crimes, but I think if done, we would find that the EU and US both have very similar penalties for all the person crimes, like theft, robbery, rape, murder, etc. Where there is a big difference is probably in the crimes that have to do with national obsessions. For the US it's drugs, guns, terrorism. For the EU it's WW2 and the Holocaust, and also terrorism nowadays. But checking drug penalties in the UK I also found them to be very similar to Federal statutes in the US. [3] Though, if I had to guess, I would think the UK is far less likely to apply its possession-only laws. So, that's a valid criticism.

US does have an insane incarceration rate, but it's again due to our obsession with drugs. Take that out of the equation, and we are about even. Not that that makes it all ok, but I think we are on a path to changing that.

[1]http://www.cps.gov.uk/legal/s_to_u/sentencing_manual/robbery... [2]http://www.shouselaw.com/robbery.html [3]https://www.gov.uk/penalties-drug-possession-dealing


CA is not the whole of the US. It's one relatively liberal state. What about states in the midwest? What about federal crimes? What about three strikes legislation? What about CFAA? What about death sentences? What about felony murder?

The US may be fixing some of these things but for an idea of how the US system is viewed from over here right now check out the documentaries by Louis Theroux and the one entitled "The Farm -- Life on Angola Prison".


Exactly. The police can read all my letters if a judge allows it. They can tap my phone, bug my bedroom, track my movement, perform a cavity search, take my blood and DNA, detain me, and if I make really bad decisions even kill me.

And I am totally fine with that in principle, only not with a few somewhat minor details and missing checks and balances.

I want _my_ government to be able to do all that, according to the laws of _my_ country, which I am able to change and influence.


Yes, that is fine. I don't want _your_ government, which I can't elect, reading my data at all, however. A good example is, say, data mining for giving/not giving visas. It's a much more murky area without typical due process - there is someone making a judgement call about what is "more" or "less" likely, not proving anything beyond reasonable doubt. I would vastly prefer to control what I give to _your_ government for dealing with this kind of process, as opposed to what they can mine from random US companies about me (that I might not even have an account with - Facebook knows a lot about me and who my friends are despite me not having an account there and not agreeing to any T&C at all).


>doctors

Might want to remove them from your list. Mandated reporting by therapists and doctors does more than most anything else I can think of to stop those most needing therapy from seeking it. From potential offenders who fear being outed to victims who fear their therapy records being made public at trial and used against them.


What you said may truly capture what's necessary - we need to codify additional relationships for the digital space that enjoy this level of protection, and codify them such that violations result in criminal cases being thrown out and/or reparations being paid! I envision this like the police force and falsifying evidence/respecting your rights - there's certainly some of all that happening, but police are eager to prove that they've done things correctly, because if evidence falsification or certain rights trampling happens, it results not only in internal probes and scrutiny, but dismissed potential convictions! The incentives generally line up.


If google makes a mistake when data-mining my e-mail, I may get an inappropriate ad displayed next to my e-mail.

If the authorities make a mistake when looking at my data, I may end up in jail or on the no-fly list.


Agreed. It's not ironic at all. If you are in the business of data mining and some competitor comes along screwing up the data-mining business for you (the government's actions and their impact on PR), you'd be very interested in fixing this problem as well.

Most people generally didn't care about this until they became aware of the government's activities.


Hey man people always bring up this point, does anyone have MORE information about how that infrastructure works? I made a Facebook account for my class the other day, with nothing but my name and email. Now I'm 50 friends deep with old classmates // co-workers. Creepy. But HOW intrusive is this system?


Sure you can, and that behaviour is rather mainstream by now, but it wasn't long ago it was considered very egoistical.

Every time you install an app and you agree to upload your contact list to someone? That data was not yours to share.

Every time you put a little tracking script on your web page in exchange for traffic graphs, it is your customers' data you sell. Not yours.

Every time you email me, everything I write ends up indexed under my email address, to be sold to another company and cross-referenced. I didn't agree to this.

That like button you put up that reports all your visitors behaviours? Not your data.


Except that companies generally reserve the right to modify their privacy policies at any time without your permission, only notification, and that's if they even follow them in the first place.


It seems like you're perfectly content for Googappazon to turn around and voluntarily sell "your" data to governments.

Given that it's really their data about you, they don't even need to disclose this in the TOS. The best you could hope for is for companies to loudly bind themselves in their contracts not to do it. Of course they could end up going through commercial third parties, or any other plausibly justifiable loophole, when they get serious about increasing earnings. Even if they outright break your contract and you provably find out, what was your consideration actually worth - a few dollars? You'd be in an even weaker position to show damages than you are now with the NSA.

The ability to consent/decline is indeed a powerful distinction between corporations and government - I believe our world would be much better if every association were voluntary and we could opt out of things we found harmful. But juxtaposing governments/corporations and arguing against one by wholesale endorsing of the other, especially for something as slippery as data, is a Faustian bargain.


> It seems like you're perfectly content if Googappazon turns around and voluntarily sells "your" data to governments, since you've consented for them to use it.

Use it under the limited license you agreed to. Not to use it for any purpose at all. This is disingenuous rubbish and you know it.


I had elaborated in the second paragraph why the distinction between the two is but a tiny epsilon. I removed the second part of the sentence you quoted so perhaps people will continue reading rather than having a knee-jerk reaction.

There is no "license" (in the sense of copyright) for the collected data, so there are no built-in penalties for violating the contract. Theoretically a company could commit to $10k in liquidated damages per deliberate privacy violation, but this could be done today, so I'll consider it when I actually see such a term in a contract, especially for a free service.

Since you think it appropriate to dismiss me as deliberately writing "disingenuous rubbish", I would kindly ask you to elaborate what you believe my motive is for doing so.

There is certainly a powerful motive to defend private surveillance companies, even if it's just people wanting to believe in what they're creating.


You say that you don't want (your data) to be abused and you don't want to have problems because of that. Nobody wants that in any other case. It's not that special. Only the subject has changed to digital data on the internet.

(German guy here. My english could be bad.)


My general understanding of the difference here is that, while you're correct that we don't want any of our information- our mail, phone calls, movements, etc.- to be abused and give us problems, the issue here is that law enforcement DOES have a legal and legitimate need to look at all those things sometimes with a warrant, issued by a judge, who has reviewed the evidence collected so far and deemed there to be probable cause.

The situation we fear will come about instead is this one: law enforcement has access to all of our data, uses our data to find probable cause, presents this to a judge, and arrests us.

It's upside down - law enforcement is supposed to be limited in what it can do until it 'picks up the scent' of criminal activity, after which it can use an expanded power set. If we give law enforcement a greatly expanded power set to use before it has 'picked up the scent' of criminal activity, we would be foolish not to expect all kinds of abuse (especially given the nature of 'big data'-driven methods of 'picking up the scent').


I don't see the irony either. I believe the critical difference is that corporations aren't (or at worst shouldn't be) in the business of using data in a way that legitimizes the use of force against their customers. Unlike government.


If companies stored customer data encrypted by keys that are held by the customer, they wouldn't have this problem.

Furthermore, they wouldn't have to worry about deleting customer data either. The customer would have the power to simply deny access to the keys.


That works for file and message/mail contents, and there already are functional external solutions for that (e.g. GPG). I agree that needs to be better integrated and supported.

But Google needs to know my search query to be able to serve me a results page. Google will know my phones location if I use their WiFi-positioning-system. Google needs to know my address and payment information to have me buy apps.

Encryption cannot help me here. I need to be able to trust Google not to collect that data without my consent, not to store that data longer than needed to provide me the service, and not to share that data with anybody who has no right to it. That can only be guaranteed by laws.

This is about laws, and the need for such laws will not go away by encryption alone.


The larger issue is that you now have to push key management to the user, and the support problems that go with that. Key management is hard and painful. Telling a customer that they can't access their data on your service because they broke their laptop is going to make them very unhappy.

Otherwise, there'd be plenty of services competing with Drive, DropBox, etc, that did just that.


> push key management to the user

There is no other way. Unfortunately, there has been a serious lack of r&d in this area, so we have a lot of catching up to do. I believe it can be made to work, though, because this is not entirely a new idea for most people: they already understand physical keys and the problems associated with losing them.

Moving to digital (public-)keys isn't a perfect match, but it is entirely possible to leverage the skills and experience that people already have. For example, contrary to the advice used a decade ago, Schneier recommends[1] that people write down their passwords (avoiding the memorization problem). This can provide good enough security most of the time, because people already understand how to provide security for their wallet.

Incidentally, I always liked the basic idea of the Java Ring[2]. It was basically a fancy (for the time) smart card, but the idea of storing the chip in a ring is an attempt to make security management personal, which is an idea that had potential.

[1] https://www.schneier.com/blog/archives/2005/06/write_down_yo...

[2] http://www.javaworld.com/article/2076641/learn-java/an-intro...

edit: forgot a link


I generally agree with you, but there's one problem with the physical key analogy. The security of physical keys is weak enough that there's always a fallback if you lose all the copies of your key: you pay a locksmith to come pick the lock and rekey it.

You can't do this with digital keys because a digital key that is weak enough for this strategy to be usable is also too weak to protect you from the main class of attacks it's supposed to be protecting you from.


> there's always a fallback if you lose all the copies of your key: you pay a locksmith to come pick the lock and rekey it.

In the digital world you have Shamir's Secret Sharing. Generate a non-encrypting, non-signing, decode-only key, split it into N shares, and give one to each of N escrow services. If you lose your key you contact M of those entities and recover the data.

Plenty of tricks have been created in the last two decades to address the problems associated with key ownership, the problem is always the lack of UI and of standardization in general.
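The M-of-N scheme described above fits in a few lines. This is a pure-Python illustration (function names and the choice of prime are mine, not from any particular library); a real deployment would use an audited implementation rather than hand-rolled code:

```python
import random

PRIME = 2**127 - 1  # Mersenne prime; the secret must be smaller than this

def split(secret: int, n: int, m: int) -> list:
    """Split `secret` into n shares such that any m of them recover it."""
    rng = random.SystemRandom()
    # Random polynomial of degree m-1 with the secret as constant term.
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(m - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares: list) -> int:
    """Lagrange interpolation at x=0 over the field GF(PRIME)."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Each escrow service holds one (x, y) pair; any M of them, and no fewer, reconstruct the key.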


Who says you need escrow services for that?

Store your N'th fragments with /friends and family/ that you know well. You might also be able to get in on distributed backups together.


Here comes Facebook Key Escrow?


These escrow services would be centralized targets too just like Microsoft, Google, Amazon, etc.


I think the idea is that you store only part of the key with each service, and you're the only one who knows which services have which parts--so, you're the only one who can reconstruct the key. Of course, the UI is still a problem.


Thanks. I did not know that.


Sure, as I mentioned, the physical key analogy isn't a perfect match. I'm just suggesting that there are some clear starting places where it should be possible to leverage existing skills, instead of starting from scratch. The details of an actually-usable product will be more complicated (and probably require some experimentation).


Why not print the key?


This may actually be a very good idea for some cases.

If the stuff being protected by the key is, for example, the type of data that someone would otherwise keep in a personal fire-safe, then simply printing out the keys and keeping them in that fire-safe wouldn't change the type of security being provided.

It might be nice if we had some type of highly-reliable printable encoding (with redundancy, like QR codes[1]), so the process of printing out a key and re-scanning it was easy and trustworthy.

[1] http://datagenetics.com/blog/november12013/index.html
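Even without QR-style error correction, a printed key can at least detect typos when re-entered. A minimal sketch (names and format are my own invention): append a short hash as a checksum, encode in base32 so there are no ambiguous characters, and wrap into short groups for easier typing:

```python
import base64
import hashlib
import textwrap

def to_paper(key: bytes) -> str:
    """Render a key as printable text with a short checksum appended,
    so a typo made while re-entering it is caught immediately."""
    check = hashlib.sha256(key).digest()[:4]
    encoded = base64.b32encode(key + check).decode()  # base32: no look-alike chars
    return "\n".join(textwrap.wrap(encoded, 8))       # short groups, easier to type

def from_paper(text: str) -> bytes:
    """Re-parse a printed key, ignoring whitespace, and verify the checksum."""
    raw = base64.b32decode("".join(text.split()))
    key, check = raw[:-4], raw[-4:]
    if hashlib.sha256(key).digest()[:4] != check:
        raise ValueError("checksum mismatch: re-check what you typed")
    return key
```

This detects transcription errors but cannot correct them; that is where a redundant encoding like QR would be the next step.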


You could just use a QR code. There's no real reason they need to be URLs.


I've tried this. Turns out that very few QR-code readers are able to cope with QR-codes large enough to hold a decently sized RSA key.


People do lose their physical keys though. And in pretty much all cases, there is a recovery mechanism to get at whatever the physical keys are protecting. This may involve drilling a hole or the like, but there is a fallback. With any digital crypto worth using, there is no recovery mechanism.


> There is no other way.

Well, we can always legislate. Even with encryption of data, you want legislation. Without legislation, the Government will just create a separate channel for data collection to bypass your encryption. It starts unencrypted at the remote site, so it's not that hard to do.

Even with legislation there will be problems, and overstepping, but at least then there's some semblance of recourse and ways to fight. You can't fix the problem for everyone and always, but you can fix it for most people most the time. This is the sort of thing that should be in trade agreements and treaties.


I guess that Ring can be replaced by a Smartphone or a Smartwatch.


Telling a customer that they can't access their data on your service because they broke their laptop is going to make them very unhappy.

Depending on your target market, that can be a selling point. Tell them early and tell them often that you don't keep copies of their keys and if they lose them, they won't be able to access the information. Remind them to make backups.


If the data is opaque to the provider, then there are many services they won't be able to provide on it (without some major advances in homomorphic encryption). For example, spam detection or search. Sharing is also made much more difficult.


When I submit a search, I can also submit the key.

Spam filtering can be done as the email streams in. Email databases which power the spam filter can be anonymized and reduced to statistics.

I guess sharing is harder but it can still be done. If I share content with you, I'm giving you permission to have a copy so it can be encrypted with your key as well as mine.
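That sharing scheme is essentially envelope encryption: encrypt the content once under a fresh data key, then wrap that data key separately for each person allowed to read it. A toy sketch of the structure (the SHA-256 counter keystream here is for illustration only; a real system would use an authenticated cipher like AES-GCM and proper public-key wrapping):

```python
import hashlib
import os

def _stream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream built from SHA-256 in counter mode. Illustrative only;
    # do not use hand-rolled constructions like this for real data.
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(plaintext: bytes, recipient_keys: dict) -> dict:
    """Encrypt once under a fresh data key, then wrap that data key
    per recipient; the server only ever stores this opaque blob."""
    data_key, nonce = os.urandom(32), os.urandom(16)
    return {
        "nonce": nonce,
        "ciphertext": _xor(plaintext, _stream(data_key, nonce, len(plaintext))),
        "wrapped": {name: _xor(data_key, _stream(k, nonce, 32))
                    for name, k in recipient_keys.items()},
    }

def open_share(blob: dict, name: str, my_key: bytes) -> bytes:
    """Unwrap my copy of the data key, then decrypt the shared content."""
    data_key = _xor(blob["wrapped"][name], _stream(my_key, blob["nonce"], 32))
    return _xor(blob["ciphertext"],
                _stream(data_key, blob["nonce"], len(blob["ciphertext"])))
```

The content is stored once, and revoking a recipient just means dropping their wrapped key from the blob going forward.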


There is no reason that stuff can't be done on the client side.

Outlook works just peachy searching my GB's of emails and attachments from its local replica, using the internet only to sync.


Not true - there are reasons not to do it on the client instead. Consider a web email client. It's not feasible to log in to the site, download a full archive of your email, decrypt it, and index it before being able to do a search.


IndexedDB[1] sort of makes that a non-issue, doesn't it?

[1]: https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_A...


I don't want GBs of emails on my phone, desktop, laptop, wife's laptop, work PC, and anywhere else that I want to use email.

And I don't want to waste power and time syncing all that data between those devices just so I can do the same things I can now, but less efficiently (and with still more drawbacks).


No, it doesn't. This issue is not technology to store and search data from the client. The point is not wanting to download everything to every client in the first place.


Personally, I am a fan of p2p applications and doing things locally. Unfortunately, there are a lot of unsolved issues, especially when it comes to data storage, like you mentioned.


This is one of the reasons that web email clients (as your primary means of access, rather than a backup) were a bad idea.


Outlook also downloads and stores the emails locally; if the emails aren't in the PST the search takes ages, and if you don't use recent versions of Exchange with all their bells and whistles and full indexing then finding anything is near impossible (e.g. searching within attachments).

No one will use Gmail if they need to download GBs of junk they haven't cleared in 10 years to find an email from 2 months ago, not to mention that any type of sharing will force major duplication, as well as require you to download and generate new copies for each person you want to share with.


I'm not saying you're wrong, but how would you go about implementing client-side search for Amazon?

(inb4 "server-side amazon search doesn't work anyway" ;)


Also displaying advertising based on keywords in your email.


Tarsnap (a service that acts like your description) is quite nice, but the problem of "customer lost the keys" is a very real one. I love the idea, but someone really needs to come up with some decent key management (and offline backup and key migration) to make this consumer friendly.


It's not really something that can be made "consumer friendly" as far as I can tell (But I'd love to be proven wrong!). How do you securely store the key to your securely stored data?

Admittedly, there are some recommendations on the Tarsnap site (http://www.tarsnap.com/gettingstarted.html):

"5. Keep your key file safe

Store your /root/tarsnap.key somewhere safe. If you lose tarsnap.key, you will not be able to access your archived data. The same encryption which ensures your data security also means that there is no way for anybody (including Tarsnap Backup Inc.) to restore your data without this key. There are many ways to keep it safe: Copy it to a different system. Put it on a USB disk. Give it to a friend. Print it out (it is printable text). Store it in a bank vault. Pick at least one and do it!"

The only other way I can think of to keep the key even safer would be to use Shamir's Secret Sharing to divide the key up and share it among trusted friends and family. (https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing).


The above is exactly correct, though: Write it down, put it in a safety deposit box (or two if you're paranoid about losing it?). Note that fireproof safes normally won't protect digital media (temperature) but paper survives well (I hear).


Then, we need a good service to store keys, with a key to get to your key


I wrote a PoC app [0] a while ago to do just this - allow storing and sharing of keys, with the server holding the encrypted keys never seeing the decryption key.

Sharing a key involves downloading it from the server, asymmetrically encrypting it with the public key of the receiver, and uploading it to the server again. It also allowed sharing with groups via intermediate keys decryptable by multiple users.

Obviously it's just a PoC - no one should use it for actually important data, and I certainly don't promise that it's actually secure - but it does (more or less) work. As it requires client-side crypto, it's not an ideal candidate for a web service either (although one could potentially use browser extensions to make that more reasonable).

0: https://github.com/jfindley/skds


And that will be a sweet, sweet target...


Can't this just be my phone?


And when your phone breaks...?


>decent key management

What about Hashicorp's Vault?


Right, and how would that be implemented, in, say, a web browser? Do you really think that the government that's forcing all this secret spying would just give up? Or more likely, would they require modifications to the software to leak the key or plaintext?

How would users verify that the software was working securely? A webmail provider would need to publish a full client-server API and protocol spec, so that you could run some independent software to audit what was being sent to be sure no leakage was happening. This is simply untenable.

Oh and what would really happen is that everyone would lose access to their data since, in general, nobody is able to maintain keys. It's pretty understandable that companies have little desire to redesign their whole systems to provide a crappy experience to a nearly non-existent market segment.


But then you can't leverage customer data to sell to advertisers. The problem is, you are the product for most internet companies.

Think of a Cow in a barn. The food/water/shelter is free, because you are what's to be sold.


If by "you", you mean aggregated data about you in lieu of cash payment.

Google/Facebook/etc are not selling your data, they are not selling you, they are selling access to you via advertisers, more specifically, they are selling access to people who like certain things and who live in certain areas who have certain demographic profiles.

If you're curious about this, go sign up for an Adwords account and see how the targeting works - what you see there is the limit of what any advertiser using Google sees about its users. Facebook is much the same.

Get it right. Any ad-backed company of note does not "sell your data" and characterizing it as such is so divorced from reality as to be a blatant, willful lie.


Not only would "selling your data" as opposed to access to you or a per click basis be a less profitable business model, it'd also be strictly illegal in many jurisdictions without explicit opt-in permission.


In those countries where it is illegal to sell personal data, what kind of abuse are they intending to prevent? Any abuse which is possible by having direct access to private data is also possible by going through an intermediary which does the data mining for you.


If I've opted out of having my details passed on to third parties, I only have to opt out of receiving communications once.

And since they're also not allowed to give the data away and can get into trouble for allowing it to be stolen or accidentally shared, my name and personal details are also somewhat less likely to crop up in an embarrassing public "people who subscribe to this service" list...


Do you mean like Ashley Madison hack?

Nothing in the ad-backed company model prevents someone from sending a fake advertisement to the demographic who are likely involved in adultery. After that you publish a "people who subscribe to this service are ..." list based on who clicked on it.

The only question about whether someone could do this with Facebook/Google is whether they provide targeted advertising to that demographic, or whether you have to provide your own algorithm and combine different targeted advertisements with the available demographics.


There is no way whatsoever that you're getting my name or email address by some third party serving your ad to me, unless I happen to end up voluntarily filling those details in on your web property. Which I probably won't do, unless you have something I actually want, and I trust you. That's the difference. That and the fact that if you publish your "the small fraction of people on $EMBARRASSINGTHIRDPARTYSERVICE's mailing list which clicked through to my landing page and filled out my webform" list without me having opted to allow you to share the information you'll find yourself in a courtroom.


You assume the false advertisement would be done lawfully. People who hack places to publish lists of embarrassing information don't do so legally, so why should we assume that people who abuse data-mining services would only be lawful entities?

The false advertisement might say something like "save 50% on your next Amazon purchase" or "try out our new car service by getting the first trip free" or any other way of getting people to voluntarily fill in information which looks completely innocent. All you need is the additional information of name and address, which for the user feels completely separate from the sensitive information which was surreptitiously inferred through the targeted advertisement.

This is assuming they can't simply push some malware and get the name that way. We constantly hear about Tor hidden service attacks done by de-anonymizing people with Flash/Java/browser exploits.


It seems a little unusual to question the merits of data protection law on the basis that sometimes people can circumvent it by committing crimes!

Do you not realise how low response rates to ads and promotional mailshots are? Try as you may, you cannot possibly hope to obtain anything more than a tiny fraction of the data held by a third party by advertising your phishing page on their network, even if the third party actively supports your goals. And even if you get some useful personal information, if you publish it without consent, you've committed a crime.

Banning the sharing/selling of data without consent imposes significant obstacles to legal and illegal hassling of people whose data has been mined.


[deleted]


Any inferring is being done by the ad company, not the advertiser.

And framing this as discrimination (a loaded term nowadays) is just more hyperbole, IMO. People of different races, orientations, religions etc are very likely to view different things as relevant, and at the end of the day, relevance is what any advertising company's mission in life is.

Best of all, it's all supposed to be data driven, so those nasty human biases don't even creep in. If demographic X clicks on a thing more than demographic Y, at the end of the day, that is what matters, and we can sidestep any discussions of bias, regardless if the demographic under consideration is young/old, black/white, atheist/religious, rich/poor, male/female.


This meme got old faster than I expected.

Am I the only one aware of symbiosis?


>symbiosis

It exists in 3 forms.

1) Both benefit (I.E.: mutualistic)

2) Only 1 benefits while the other has no clue of its usage (I.E.: commensalism). Vultures and lions, more or less, or hermit crabs and shellfish which are dead.

3) One actively benefits while the other is consumed by the active participant (I.E.: Parasitism).

:.:.:

Your argument is that webscale logging is mutualistic. To an extent I'll agree. But this isn't the case. Google is mutualistic when using say GWallet, Docs, Mail, search etc.

But Like/Plus buttons, as well as cookie sniffing, aren't mutualistic. It's commensalism. I have no say if another organism uses what I leave behind; it doesn't directly affect me, but it benefits another organism.


Well it doesn't seem inaccurate to suggest that livestock has a symbiotic relationship with humans. Yes we do slaughter them for food but think of the benefits during the cows life!


I give you points for that.

Still, I'd argue that life in Google's and lately Microsoft's barn is quite comfortable and a whole lot safer than outside, because those companies have a huge interest in securing their users.


Security is one of the hardest things to decentralize. Would you rather trust Google or a rinkydink website with your data? They may not have even installed the latest patches of their open source software!

Bitcoin decentralized money.

Until now, social has not been decentralized effectively.

Security is even harder, perhaps! Because it requires authentication but also a clear definition of guarantees, including availability! Perhaps your data could be stored on your computer and other replicas can be stored encrypted in the network, like freenet, but encrypted only by you. And your key would be a hash of your passphrase like "all horses are fine". If that guy's bitcoin key had been a result of a hashed passphrase, he wouldn't have lost $3 million in the garbage!


What's crazy is that it often just isn't an option.

Precisely for some of the reasons outlined in the above article (third party records are subject to subpoena pretty much everywhere, not just the U.S.) it's untenable in certain fields to keep sensitive information on the cloud.

Those concerns would be dramatically mitigated if at least there was an option to do client-side encryption. That way the government has to get jurisdiction over your person to get you to turn over the encryption keys, which puts you on much stronger legal footing.


But then a lot of things wouldn't be able to exist.

Siri needs collective anonymous data to work and improve itself. Ads, like them or not, cannot function without large collections of data. No attempt to predict customer behaviour is possible, no usage analytics, no cutting edge improvements, a lot of cool things wouldn't be possible.

It's just how it is.


>> If companies stored customer data encrypted by keys that are held by the customer, they wouldn't have this problem.

There is no value to the company in doing that. They can't mine the communications to determine what ads to serve you, or anything else. At that point they'd just be offering a free service. Why?


Sure they can. They want to run a job, they can get the key from me and run it. This can be transparent to me; eg my phone could transmit them the key every day until I decide to stop.

So many people are saying "your proposal breaks cloud services or requires client side compute", but that's not true. Be more imaginative and get outside the current model.


They also wouldn't make any money from advertising and the government would pretty soon outlaw the practice.


Some actors in the US government have been trying to outlaw consumer cryptography for decades, and so far they haven't succeeded. See, e.g., the Clipper Chip battle from 1994.


Exactly, but that's because so few people are actually using end to end cryptography. Now imagine the internet giants had petabytes of encrypted data in their data centers. Apart from the fact that it's completely impractical because that data can't be used for anything other than downloading, it would give a huge boost to those in the government (or governments actually) who want to outlaw it.

From a somewhat elitist perspective I actually prefer the status quo. At least, today, we have backup services like SpiderOak or tarsnap and we can use encrypted email (I don't) or other communications software.


Agreeing "for once"?

Hardly. Apple and Microsoft are already on record as agreeing to fix wages, creating a cartel affecting a million tech workers.

https://pando.com/2014/03/22/revealed-apple-and-googles-wage...


You mean Apple and Google.


I mean Apple, Microsoft, Google, et al.


And because tech workers see unions as evil....


I'm a 10x unicorn making 1x salary. I don't need a union.


Heh.

While OP was talking about salary setting as cartel behavior, what also comes to my mind is the video game industry. Devs/artists are so fucking exploited there.

There's a reason production crew on multi-million-dollar hit-driven products are usually unionized (see also: film).


Microsoft has shown that they are quite willing to access individuals' private data if they have a financial stake in it [0]. Yes, they eventually backtracked under public pressure (after trying very hard to justify how it's totally okay because they were going to pay a lawyer to rubber-stamp things in the future), but it's rather hard to listen to their general counsel talking about how they value privacy on principle given their history. It's quite obvious they only care about privacy insofar as it affects their bottom line.

The article also conflates (intentionally?) this issue with the mass-surveillance issue, bringing Snowden into it and insinuating that this ruling would have an effect on that, which is just silly [1].

The whole "Company F" section is interesting (I hadn't heard before that Microsoft is challenging the claim that they were willingly providing user data to the NSA), but it's a bit hard to square with the leaked documents which list Microsoft as the first participating partner in the PRISM program [2].

[0] http://www.geekwire.com/2014/microsoft-defends-hotmail-snoop...

[1] http://www.cbsnews.com/news/patriot-act-can-obtain-data-in-e...

[2] http://www.motherjones.com/politics/2013/09/nsa-timeline-sur...


Their statement says

In this case, there was a thorough review by a legal team separate from the investigating team and strong evidence of a criminal act that met a standard comparable to that required to obtain a legal order to search other sites.

I don't think that supports your claim that they would only ask a lawyer in the future.


> Their statement says

> In this case, there was a thorough review by a legal team separate from the investigating team and strong evidence of a criminal act that met a standard comparable to that required to obtain a legal order to search other sites.

>I don't think that supports your claim that they would only ask a lawyer in the future.

From the link I included in my comment, Microsoft deputy general counsel John Frank is quoted: "As a new and additional step, we will then submit this evidence to an outside attorney who is a former federal judge. We will conduct such a search only if this former judge similarly concludes that there is evidence sufficient for a court order."

This lawyer (his past employment as a judge has no bearing here) would have been paid by Microsoft.

Could you explain how that does not fit with my earlier comment?

And to preempt the inevitable, yes, as I said in the original comment they eventually backtracked on this and said they'd report such future crimes to the police. You know, what they should have done in the beginning, and would have done if they had the respect for privacy-on-principle that they are now trying to shower themselves in.


"they were going to pay a lawyer to rubber-stamp things in the future" carries the implication that they hadn't asked a lawyer this time. Asking a "legal team" is asking a lawyer.

They decided to ask even more lawyers, at least one who is outside and a former judge, they didn't decide to ask one lawyer as opposed to zero before.

I'm also not so sure you can draw broad conclusions about the company from this particular policy; "we will only search an email for someone else after a court order, and we may search ourselves if we believe there's enough evidence for a court order (we being lawyers)." I don't see much difference between the old and new policies; the main thing is that someone outside is asked.


Congratulations, you win the nit-picking contest. My point was that the advice of a lawyer changes nothing. Violating users' privacy should be reserved for law enforcement with court orders.


How should Microsoft go about asking a court to search their own servers?

Also, you implied you were fine with the new policy and that it was a significant change; claiming your point was the opposite at this point means you were very unclear. How should I have figured out what you meant from your words above?


> How should Microsoft go about asking a court to search their own servers?

The people pressing charges do not apply for court orders. The police do so in cases where they believe it is needed for an active investigation. Microsoft should then have a policy of not disclosing information to law enforcement unless provided with a court order.

The fact that Microsoft ended up stating that they would do exactly this in the future should indicate that this is not the problem you are making it out to be.

> Also, you implied you were fine with the new policy and that it was a significant change; claiming your point was the opposite at this point means you were very unclear. How should I have figured out what you meant from your words above?

By reading the post with a mind to reading the likely meaning, rather than focusing on finding unlikely meanings. As if you were a human communicating with another human, rather than a tokenizer reading source code. If you did that, you would see that my point has been consistent the entire time.

If you read the totality of the post, the point is quite clear, and I've restated it a few times now.

I assume this is clear now? Because if not, I'm starting to suspect a troll and am not interested in furthering this conversation.


>Microsoft should then have a policy of not disclosing information to law enforcement unless provided with a court order.

According to the statement, that was their policy.

>The fact that microsoft ended up stating that they would do exactly this in the future should indicate that this is not the problem you are making it out to be.

The steps they said they were adding in the statement don't include going to the police. I'm not sure what you mean by this statement that could be correct.

I went over your comments again, and they contradict each other, let alone the facts.

as I said in the original comment they eventually backtracked on this and said they'd report such future crimes to the police.

Whereas your original comment doesn't mention the police at all.

I get that you feel you've been consistent, but you haven't communicated your actual thoughts very well. The other possibility here is that you don't have a firm grasp on what exactly you think Microsoft did wrong and should be doing instead. It certainly doesn't come across in your writing.


Is there any evidence that any companies were willingly going along with PRISM? Seems like they were under gag orders and potential fines (Yahoo said $250K a day). Not sure what choice they had.

Though MS accessing user's data via Hotmail for internal review really does scuttle their credibility.


> Is there any evidence that any companies were willingly going along with PRISM? Seems like they were under gag orders and potential fines (Yahoo said $250K a day). Not sure what choice they had.

I'm not aware of any evidence of their intent, no, just that they assisted in collection. I believe Glenn Greenwald has stated that they were a willing and proactive participant, but that that information came from NSA internal forums and not official documentation so it would not be released.

I don't think it really matters though. Whether they were willing or not, the end result is the same. If you don't trust the USG, you cannot trust US corporations (yes, of course this applies not just to the US but to others as well; it's just that we currently have more information about the Five Eyes programs, and the US is a big player geopolitically, so they have interests to protect in a lot of areas).

On a more personal level, I would say that even if they collaborated with this under legal threats, they are still morally responsible for their decisions to support the surveillance state. I cannot accept that we give people or companies a clean slate for evil just because they had financial interests to protect. Everyone has financial interests to protect, and it's usually more convenient to ignore evil than to put oneself at risk.


I would say that even if they collaborated with this under legal threats, they are still morally responsible for their decisions to support the surveillance state

Perhaps, but at some point they have to be excused if they were complying with the law. It's our job as citizens to ensure we have the right laws. In a way, it's kind of like blaming the soldiers for the 2003 Iraq war. I don't blame them. I want them to fight their guts out believing that they are answering their highest calling (otherwise they lessen their odds of surviving it).

But I want the sonofabitch who sent them there and caused the deaths of all those innocent Iraqis to go to jail.


You gain a lot: connections, contracts, insider information and special privilege if you play nice with the government.

If it's a secret program, what do they really have to lose?


I am not a lawyer, but it seems to me that an American court has the power to demand that an American citizen produce an item or information under his control, even if it happens to be in another country (e.g., a man getting divorced can't drive his car and all his gold and jewelry into Canada to shield them from his ex-wife). I imagine that most other countries would behave similarly: being within their borders and subject to their jurisdiction, they can compel someone to do something.

If that's indeed the case, then it seems that an American corporation—a legal person with a presence in the United States—may be compelled by a court to produce items or data it controls outside of our borders.

The thing we need to do is to limit the power of the subpoena generally.


> it seems to me that an American court has the power to demand that an American citizen produce an item or information under his control, even if it happens to be in another country

Does the American government have the power to compel someone to violate foreign law in order to produce an item held in a foreign country? Can the American government force an American citizen to violate Greek law and take a million Euros out of Greece for that hypothetical divorce settlement?

This is the scenario in question here. It is illegal in Ireland for Microsoft to provide the requested information to the US Government. The data resides in Ireland so complying would require performing an illegal act in Ireland. Does US law for some reason override Irish law? Can the US government compel Microsoft to violate Irish law?


> Does US law for some reason override Irish law?

It's pretty simple. Whoever has more guns backing up their laws wins.


You are not a lawyer. If someone does that, they aren't forced to drive the car or jewelry back; they might be sued for hiding assets in a civil court. This isn't the government suing people — divorce isn't a criminal procedure. In any case, corporations aren't people, and MSFT's servers in Ireland probably belong to MSFT IE/EU, which is a separate legal entity (which is why MSFT can say "we don't have it"). There is also a difference between a warrant, an order, and a subpoena in this case, and thousands of other issues that I, also as someone who's not a lawyer, can't even begin to imagine.

But you know who is a lawyer? MSFT's head legal counsel, and I hear he also has a bunch of others with him, so I would think they would figure out how to handle this. But in general, various agreements and treaties govern how law enforcement agencies and legal systems of different countries interact: some countries might accept a signed US court order (Canada, for example), for others you will have to go through the extradition process codified in a treaty, and the list goes on and on and on.

P.S. With several exceptions, a court can't compel you to do anything. If you violate a subpoena you will be found in contempt and maybe charged for that, and while you can call that compelling, it's not, since the court will not actually force you to do anything.


This gets better. See, Microsoft has an entity in Estonia. Now, how about an Estonian court compelling Microsoft Estonia to provide Obama's private email? Would anyone even treat such a thing seriously?


It gets more interesting where the American companies own subsidiaries incorporated in other countries. Can a US court compel Apple's Irish subsidiary (for instance) to release data?


I hear you're not a lawyer.


Of course this appeal will fail. The US believes that it has jurisdiction essentially everywhere. One need only look at the FCPA (Foreign Corrupt Practices Act) to see this firmly held belief in action. The US Government is increasingly using it [1] to prosecute people and companies it simply doesn't like by punishing conduct that occurs outside the US that in many cases has no effect on any US citizen or company.

That said, if you're a high level drug trafficker, and you're not at least using PGP, you deserve whatever you get. This is Darwinism at work. Even normal, non-criminals should be using the strongest encryption they can get their hands on, because no one knows what kind of conduct the government will seek to punish going forward. Prosecutors get more creative every day in applying our extremely broad laws to increasingly wide swaths of behavior.

[1] http://www.fcpaprofessor.com/a-focus-on-doj-fcpa-individual-...


It's important to remember that this is the same company that snooped through the emails and files of one of their users while looking for evidence of piracy. They came clean about their snooping moments before court documents were publicly released that detailed what they did.


The pending TISA trade treaty may limit data sovereignty, http://www.zdnet.com/article/wikileaks-leak-shows-data-sover...

"50 countries including Australia and the US may be signing away rights to ensure sensitive customer data remains in its country of origin ... the draft document reveals that the United States and the European Union are pushing to prevent signatory countries from preventing the transfer of data across nation borders."


It's good to know there are people like Brad Smith standing up to government demands for full access to people's data. It brings up an interesting privacy contradiction. While storing data locally seems best for privacy, if it's on a networked computer, there are still ways for people to get it, and unless you have really good lawyers, nobody is going to challenge governments across the world if they want to access it. By moving data to the cloud, we are creating incentives for companies like Microsoft to fight against government intrusion.


It's great this article about Microsoft fighting for our privacy came out in the midst of the upset over Windows 10 phoning home.


Microsoft aren't fighting for your rights here. They are fighting for market share. Not turning your data over to the FBI is now a product feature.


I can't help but disagree -- they are caught between a rock and a hard place. Following the demands of one Federal government (Brazil) will result in them violating the laws of another (US).

They are fighting for a unified legal regime that spans across all international boundaries.


And god damn would I be willing to pay top dollar for such a feature. Do you hear me $every_tech_company? I'll be more than willing to put my market demand where my mouth is, give me client side encryption and/or self-hosted versions of your products and I'll happily write the check.


Which in turn is protecting our privacy. The same can also be said about all other corporations: Google, Mozilla etc.


It's kind of a red herring to connect these two. The case here involves the government seizing data from a foreign data center. Were Microsoft to lose, this says to foreign customers that any time the US government wants some information stored overseas, they will get it. Obviously not good for business.

Here in the states, Microsoft is going to collect everything it can about its customers and store it (and from there I imagine it eventually goes on in some form to the NSA).


So your argument is that people and organizations being worried about having their data seized in the US is not bad for business?

Most individuals didn't really care too much when the NSA documents were released but businesses sure did. Our security and privacy pitch to the management was bumped from, "would be nice someday" to "make it happen."


> Microsoft has lost twice

No kidding, they are trying to convince the US to recognize that they actually have limits on their power outside of the country.


Not to change the subject but why does this sound like Gibson Guitar all over again? National law vs External laws = sovereignty?


Interesting side note... the author must have originally called this something along the lines of: "As Microsoft takes on the feds, Apple and Amazon watch nervously" or something like that, since the link to the article is "as-microsoft-takes-on-the-feds-apple-and-amazon-watch-nervously".


If they lose, the solution is of course to re-incorporate outside of the US.


Is that a working solution? Does the US government claim jurisdiction over US corporations or over corporations operating in the US?


Microsoft, Apple and Amazon can probably afford to buy themselves their own nice little Caribbean tax paradise and write their own laws...


This is sad. For all the misguided hate against the US, there is a lot of very justified hate that comes from these sorts of attitudes on the part of its government and enforcement agencies. They should have more respect for the laws of other countries, especially somewhere like Ireland, which could, not unreasonably, be called a crime-free paradise compared to the US.

It's terrifying when law enforcement doesn't understand the difference between right and wrong...


First thing that came to my mind when I read the headline: "Google has to go."



