And if you think GDPR is a toothless joke, let's take a look at the defined fine structure.
It is pretty simple, only three levels (strikes, for my fellow Americans):
Strike 1 - Stern warning letter
Strike 2 - 2% of your TOTAL GLOBAL REVENUE
Strike 3 - 4% of your TOTAL GLOBAL REVENUE (or 20mil EUR, whichever is higher)
And now you know why GDPR is a board-level topic. Keep in mind that the EU/US Safe Harbor agreement got axed due to a lawsuit by a single student from Vienna against Facebook. So all it takes is a single pissed-off German customer you ignored when they asked for their data report card, and you're fucked.
For startups - GDPR is like Y2K at the time, a GOLDMINE. So much opportunity to sell solutions, from real to snake oil. GDPR compliance is already and will continue to trigger a massive wave of investment.
I have national sales responsibilities for one of the majors. Think IBM/Microsoft/Oracle/etc leading a sales team of 74 reps.
You'd be surprised at how FEW sales we've generated from GDPR. We've been providing free GDPR assessments for the past 1.5 years to over 200 accounts as a lead-gen opportunity, and very few sales have resulted.
It all boils down to this: companies simply don't believe the fines will be enforced, given just how expensive the fines are.
And since GDPR doesn't go into effect until May 2018, companies are just waiting to see what happens.
It's really hard to sell GDPR because it's essentially an insurance policy. Why spend $5m on software and another $5m in services ($10m combined) if your total fine is only $20m? Do you as a company have a 50% chance of getting fined? If not, roll the dice and don't buy a solution.
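To put rough numbers on that insurance framing, here's the back-of-the-envelope break-even calculation (all figures are the hypothetical ones above):

```python
# Back-of-the-envelope expected-value check for the "insurance policy" framing.
# All figures are the hypothetical ones from the comment above.
compliance_cost = 5_000_000 + 5_000_000  # $5m software + $5m services
max_fine = 20_000_000                    # the 20m fine floor

# Buying compliance only pays off if P(fine) * fine > cost:
break_even_probability = compliance_cost / max_fine
print(f"Worth buying only if chance of a fine exceeds {break_even_probability:.0%}")
# -> Worth buying only if chance of a fine exceeds 50%
```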
Speaking as an eng at MSFT, across multiple orgs, GDPR is certainly taken very seriously here. (Probably safe for me to say, given [1].) I would expect the big players will all follow through as a CYA, because they'd be the first to be made an example of. I think a sister post's comment on lack of sales was probably accurate, since my reasoning above likely doesn't apply to "most small companies", and the effort to comply properly is certainly not negligible, even if you've already assessed the changes that need to be made (frankly, that seems like it might be the easy part if you can leverage someone with background on the nitty-gritty of the legislation). It will be "interesting" to see how this pans out.
Talk of GDPR has really only taken off very recently. A year ago it was just a few people bringing it up; now the reality of the situation is slowly sinking in as European customers start asking for this.
The fine doesn't absolve you of responsibility for complying. If you're fined you have to pay up AND you have to comply. Otherwise they'll just fine you again, as they did to Google.
Nevertheless, a 4% fine is very low given how infrequently fines are levied. Tech firm margins are much larger than this, so while it's clearly unethical to do so, it may be more profitable to simply accept the fines as a kind of tax for as long as possible, and to continue to profit from all that data until things get really dire. In actuality, a firm wouldn't need to choose quite so starkly to flout the law; simply failing to invest and dragging your feet looking for impossible have-it-all solutions might well be enough to get away with a few fines until you really try to get your act together.
If you will, it's the difference between the VW approach and those of (as it appears, anyhow) all the other carmakers. They're all cheating; most were simply wise enough to avoid doing so explicitly.
Data protection is also harder to enforce than emissions; just look at how laughably incompetent emissions enforcement is to get an idea of how likely you are to get caught if you happen to collect too much private information.
I expect the same here as with emissions: no real compliance for years (if not decades), and when enforcement comes, it won't be the regulator that actually catches even egregious wrongdoing. I mean, the high-profile players will pay lip service, of course, but that's it.
I'm not so sure. And it's not just ad-targeting - all kinds of personalized features, and general-purpose data mining, suffer too. And don't forget that they wouldn't get the full 4% immediately, and would likely be fined much less than once per year based on current trends anyhow. So that 4% is going to be further diluted.
That's not what has happened so far. The €2.4bn search-results fine was based on the length and severity of past infringement, and they were threatened with a $10m-per-day fine, equivalent to 5% of global revenue, on an ongoing basis if they didn't comply within 90 days. So they absolutely have been hit with a heavy lump-sum fine from day one.
There's no need to theorise about how the EU might enforce such laws; we've got actual examples of them enforcing laws like this already, and they do not mess around.
Google's revenue is all about data collection. If they can't collect lots of data, the whole business model is a lot more questionable. In the face of that, 2.4bn once is a trivial fine; consider that that's something like what... 3% of their revenue in one year?
Of course they'll try to avoid that in the future, but the fine is mild enough that it's not going to cause firms to err on the side of caution. They're going to look for the absolute edge of the law.
Frankly, if Google had not leveraged their search "monopoly" (not quite a monopoly), I suspect their market cap would have been more than 2.4bn lower; so this was a pure win - especially since conviction and detection aren't a slam dunk.
I read that; the point is that 2.4bn just isn't all that much given what it does to the value of the company. It's probably a risk worth taking as long as you can get away with it. And yes, that means you'll need to eventually adapt - not necessarily because 10m a day is enough to actually enforce that, but also because this kind of stuff is gamable; complying with the ruling without much risk from competition at this point is pretty easy. And you'd need to make the calculation that even if 10m a day were acceptable for the gain, simply ignoring high-profile judgments against you may have worse ramifications down the line.
I'm not saying it's nothing: it's that it's a risk worth taking given the gains. If you're building a trillion dollar company (i.e. google), then eliminating competition or accepting some judicial friction as a way to establish dominance in your (data-mining) field is perhaps acceptable or even wise.
In that, these fines simply aren't punitive enough, especially since they come so late. And again - it's not black and white. The existence of such rules will alter behavior; it's just a question of whether the reaction will be legal mitigation tactics, a company-wide change in approach, or something in between.
Put it this way: if you can corner a market worth trillions, how much loss is it acceptable to risk in order to reduce or eliminate competition? I'd venture that these fines are at least one order of magnitude too small to be really frightening (which isn't to say that the behavior Google was convicted for deserves that amount, simply that anything less means the law can't really be enforced).
Yeah, both my work and my wife's companies are taking it very seriously. The point is at the moment, we all think we can handle it internally: legal is very busy doing prep work that will result in requirements across the business, some of which will result in development work (I'm in the IT department). My wife is busy documenting and preparing requirements too.
What kind of companies do you sell to? Maybe they are actually trying to handle it internally too? What do they say?
I wish more penalties were like this. This sounds great. Now these companies will finally have real incentive to comply. Hell, the percentages should be higher. That's the only way to enforce regulations. Otherwise, they'll just pay a puny fine, American style, and not do shit.
It is. On the other hand, companies like Google/Facebook/MS/Apple have such huge profit margins that a conviction will be but a speed bump, and given that their business model is based on the data regulated by GDPR to such a great extent, one may expect them to challenge the new regulations as much as they can. They also (probably) have the most competent legal divisions, which will help them exploit any gray areas.
For companies with lower profit margins, or companies that are hit by the lower limit of €20M, the threat is much greater. Small companies may go immediately bankrupt from a €20M fine, while companies with slimmer margins may be pushed from black to red by a 4% fine.
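To illustrate with invented numbers: even the minimum third-strike fine dwarfs a year's profit for a thin-margin business.

```python
# Hypothetical company with thin margins: the fine exceeds a year's profit.
revenue = 100_000_000      # €100M annual revenue (assumed)
net_margin = 0.03          # 3% net margin (assumed)

profit = revenue * net_margin              # €3M
fine = max(0.04 * revenue, 20_000_000)     # 4% of revenue or €20M, whichever is higher
print(f"profit €{profit:,.0f} vs fine €{fine:,.0f}")  # €3,000,000 vs €20,000,000
```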
From my personal experience, many such companies do not fully grasp to what extent these regulations will affect their business models if actually enforced.
Yeah, but 4%/20m is only the third strike. If as a small company you're unwilling or unable to fix those issues in a reasonable time frame after the first two strikes, you might not deserve to survive as a business in the first place.
I've never understood what would happen if Google or Facebook just ignored EU regulations, claimed that they were not under the EU's jurisdiction, and ignored any judgement. Would the EU erect a great firewall at that point?
Google has assets within the EU which means ignoring EU laws is not an option, unless Google are prepared to forfeit those assets. It's not really a decision they have the authority to make anyhow. Imagine the shareholder lawsuits!
They are the owners of a company that ignores EU law. Good luck for the board or the major shareholders doing business in the EU again when they allowed stuff like this on their watch.
Plus, they need the cooperation of governments. Their patents are just pieces of paper without governments enforcing them, for example, and their data centers need special power solutions. Governments don't trust companies that ignore government regulation.
And, even if they hope that the US government is more lax with their restrictions and will turn a blind eye, the US is very heavily dependent on the OECD working, and not helping the EU prosecute blatant criminals would be a big problem for American credibility.
This is not even getting into the fact that the EU is a much larger market than the American market is, too.
In practice, just pretending regulations don't exist (depending on the regulations in question, of course) can be stupidly expensive for the company in question. Shareholders don't like things that are risky and expensive, and would replace a CEO that mad.
THAT is the harm. That's my point. You can't sue someone for something that abstractly maybe implies something else. You sue for damages and have to show damages.
If the CEO does something that drastic, the shareholders can absolutely sue him or her, because the CEO doesn't have the authority. You reckon the owner of a bus company won't fire and sue a bus driver who does something stupid that could endanger many lives?
The key point is: it's not the CEO's company. He works for a paycheck; the shareholders invest for long-term profits. Big difference.
The US happily enforces its own legislation outside of its territory (particularly in the financial world). By almost every single metric, the EU is bigger than the US. You could argue the EU is merely following in the US's footsteps.
That's far from accurate. The U.K. currently represents around 15% of EU GDP, which was around 15% higher than that of the US on its own (that includes Norway and Switzerland, which are not members, but allow me to pencil them in).
If by substantially smaller you mean 'pretty much the same size' then yes you're right.
The EU can exert as much force as it likes on any assets within the EU and as exmicrosoldier pointed out (https://news.ycombinator.com/item?id=15137258), Google and Facebook have many of their assets outside the US to avoid the taxes incurred by repatriation.
A very real possibility is that Google/Facebook will have these choices:
- Comply with GDPR.
- Shut down business in Europe entirely and eventually be forced to repatriate offshore assets and incur the relevant taxes.
- Continue business in Europe, ignore the GDPR and pay a fine of 4% worldwide revenue.
- Continue business in Europe, ignore the GDPR, don't pay the fine, have all European assets frozen.
That's ignoring the possibility that the EU may be able to reach their US assets as well.
They could shut down all their European operations, but that would cost them a lot more than 4% of revenue.
There's an interesting assumption in there, and I'm not entirely sure it's a correct one. Facebook and its subsidiaries play a large part in many people's everyday lives now, in particular forming the main way a lot of people stay in touch with their friends and family. It is not at all clear to me what would happen if Facebook decided to call the EU's bluff here and literally switched off its service to everyone in the EU for a day or two, replacing it with a single page explaining that until the law was changed they would not be able to provide their service to EU customers.
Laws don't change in a matter of one or two days. So if they did this shutdown with the argument being that they otherwise can't offer their service in a legal way, it would have to be a permanent shutdown (or at least on a scale of several months, until the law is changed), because everything else would effectively result in openly admitting that they are knowingly running an illegal service.
If that happens, how long do you think it will take a bunch of companies to spin up replacements for Facebook? It's not exactly rocket science to create a social network application; most of the value of Facebook does not lie in the platform's code, but in the network effects that it managed to create. Thus, there would be a timeframe in which a huge number of new social networks would try to win over a critical mass of users, with one of them eventually emerging as the dominant one.

As soon as that happened, Facebook would have quite a big problem if it ever wanted to re-enter the European market, as it would suddenly have to compete with a big network whose own network effects would keep users from re-joining Facebook, even if that were suddenly possible again. They would probably decide to shell out the largest sum of money ever to simply buy up this competitor, because otherwise there would be a certain risk of the new competitor eventually winning the global race for the dominant network (I assume that, due to network effects, there will always be a clear gravitation towards a single global general-purpose social network, as long as access to this network is not purposely blocked), and no matter how low this risk is, Facebook would most likely try to eliminate it (remember the large sums they paid to buy up possible dangers in the past).
Other commenters already compared the situation with China; I think that is a pretty good comparison, except that the whole development/transition phase would happen much faster, now that it is pretty well known what the final product would have to look like to be accepted by users.
True enough, but assuming they decided to pull the stunt before the new laws came into effect, which isn't until the middle of next year, the real question is whether public opinion could be shifted in so little time.
In most cases, I'd say that was extremely optimistic. However, in this case we're talking about a service used by probably a large majority of voters across Europe, often for considerable time every day and for communications and arrangements that matter to them personally. "If they don't change this, you'll lose Facebook, Instagram and WhatsApp" is sure to get some people's attention, and all of those services going dark would probably make the front page of major news outlets. There would certainly be a lot of discussion; in fact, it might be a doubly effective move, because a lot of people would immediately realise that without those services their main ways of telling their friends about something weren't working.
> If that happens, how long do you think it will take a bunch of companies to spin up replacements for Facebook? [...] Thus, there would be a timeframe in which a huge number of new social networks would try to win over a critical mass of users, with one of them eventually emerging as the dominant one.
I'm not sure that's how things would play out, if Facebook really did go offline permanently in Europe because of this. It would take vast resources to operate a social network on the scale of Facebook, but more importantly, before you could even start, the first thing you'd have to do is figure out how to comply with the same EU data protection rules and presumably then you'd also have to convince some serious investors that you could do it. If Facebook had already failed in that -- because obviously Facebook isn't just going to surrender the entire EU market and all its advertising revenue without a very good reason -- then why would we expect any new social network without all of Facebook's advantages to be able to comply if it was otherwise operating on the same basis, i.e., free to use but ad-supported?
> True enough, but assuming they decided to pull the stunt before the new laws came into effect, which isn't until the middle of next year, the real question is whether public opinion could be shifted in so little time.
Interesting idea. I did not think about this, probably because I would never do this if I were Facebook: there is a very real risk of it backfiring. Many people here (I'm living in Germany), especially among the politically active, are in a kind of love-hate relationship with Facebook, and a clear attempt at blackmailing an entire continent's population in order to force political action in favor of an absurdly rich, multinational corporation could very well kill off whatever positive attitude there is towards Facebook in particular.
As for competition having to comply with the privacy law: of course it would have to. But I also assume that this is not impossible at all; it is just inconvenient and costly, especially if you have a huge legacy system built under the assumption that you can do practically anything with your users' data. If you design your system in compliance with data protection law in the first place, this gets considerably easier. Money would not be a problem at all: there are more than enough investors in Europe who would love to throw money at an attempt to create a second multi-billion-dollar money-printing machine in a market that is currently assumed to have a very high barrier to entry (but exactly that would change if Facebook gave up on Europe).
I also assume that it will not be impossible at all for Facebook to comply with these regulations, just pretty inconvenient. Under this assumption, any refusal to comply must automatically be motivated by a desire to maximize profits and minimize political influence on the platform, not by a sheer struggle for survival. Facebook's PR department might try to spin this into a different story, however...
> I also assume that it will not be impossible at all for Facebook to comply with these regulations, just pretty inconvenient.
This is a common assumption, and it might prove to be correct, but I'm unwilling to accept it as axiomatic.
Fundamentally, just looking at the right of a data subject to withdraw consent, it would mean Facebook needed to track every piece of data that could conceivably be tied back to an identifiable person throughout its entire organisation. That's not just their status updates or that time a friend tagged them in a photo. It's every photo in Facebook's entire database that ever included a recognisable image of them, tagged or not. It's every line in a log file that was saved by an engineer investigating a server glitch that relates to any activity that user took. It's everyone who uploads their contacts to find friends and has one of those data subjects in their contact list.
Now, I'm not saying I think Facebook should necessarily be able to do all of the above. In particular, I have often questioned their hoarding of data from things like contact details and photos that will inevitably include other people who may not have chosen to use Facebook or give their consent.
But I am questioning whether it is practically viable, even for an organisation with Facebook's scale and resources, to follow the letter of this law and still operate at all while continuing to provide similar services, if a few people decide to make a point and explicitly deny consent to hold any data about them, even if such data was supplied by other people. There are practical, ethical and legal issues here about third parties and automated systems that we have barely begun to explore, and we're talking about them at a scale where businesses like Facebook and Google have already had to invent new techniques and strategies for organising data just to cope with what they already do.
None of this has even touched yet on whether Facebook would still have a viable commercial model if users have a right to opt out of processing their data for purposes such as advertising but Facebook isn't allowed to deny them service in return, which is another interpretation I've seen talked about a lot (on the basis that an opt-out that stops you using something independent as well isn't a true opt-out and so wouldn't count). So far, I haven't studied the GDPR and informed reviews of it enough to reach any firm conclusions or opinions on that side of things, but again there are surely issues about the obligations of an organisation that offers a useful service but relies on advertising to fund it that go far deeper than just Facebook and the GDPR that haven't really been explored up to this point.
As surprising as it may seem given my comments in this discussion, I'm actually a pretty firm believer in stronger privacy rights and a confirmed sceptic when it comes to the big data hoarders like Facebook and Google. But I'm also someone who runs businesses and has first-hand experience of what happens when the EU's non-technical legislators meddle in technical issues they don't fully understand, often missing even the blindingly obvious consequences, never mind the more subtle and/or long-term implications. So I don't think we should dive into changes like this without considerable thought, and contrary to what various officials from the EU and the national data protection authorities like to say, I don't believe for a moment that this sort of change is a small, incremental development of the existing privacy frameworks we already operate under.
> It's every photo in Facebook's entire database that ever included a recognisable image of them, tagged or not.
From my understanding, that is wrong. In fact, Facebook would not be permitted to establish the link. You merely appearing in a picture that is not your picture does not qualify as personal data.
Any information that relates to an identified or identifiable natural person is personal data for these purposes. This remains essentially the same, except for a slightly broader specification of what constitutes identifiability, under the GDPR as under the current EU framework. In particular, under the GDPR, identifiers explicitly include factors specific to a person's physical or genetic identity.
In short, if you're in a photo and it's recognisably you, it's personal data.
I don't understand what you mean by "fundamentally scoped" in this context, so perhaps we're talking at cross-purposes here.
However, the wording I used before was actually taken directly from the GDPR itself[1].
Moreover, the interpretation that an otherwise unidentified image of someone may become personal data even if collected incidentally, such as in a photo taken for other purposes or on CCTV, is supported by, among others, the ICO (the UK's data protection regulator). There are a few specific examples in some of their published guidance [2,3,4].
Unless your argument is that Facebook isn't processing those photos in any way that would cause that interpretation to apply? In that case, I can maybe see how your argument works, but given what we know of Facebook processing uploaded photos just from the features they offer publicly, it's hard to imagine how their processing could possibly not be sufficient for all photos they hold to be treated as personal data.
> I don't understand what you mean by "fundamentally scoped" in this context, so perhaps we're talking at cross-purposes here.
Suppose you have a social network with a user A and a user B, and each of them uploads, in private, a picture showing both users. If user A deletes his or her account, the picture in user B's account is not to be deleted, even though it contains a picture of user A.
But if user A withdraws their consent for the social network to process personal data about them, user B's photo is still personal data about user A if the social network knows or could know that user A is in it.
As I mentioned before, modern technologies raise complex issues about third parties that we have barely begun to explore. SOP at social networks is very much to get people to provide information about not only themselves but also other people they know, and that's a minefield if those other people aren't happy about it. Obviously you can't just say social networks can do what they want if someone else provided the personal data, because that undermines the entire principle of data protection and privacy. But equally, if you require explicit consent from everyone for everything, you create a huge burden that might make the whole idea unworkable or at least remove a lot of the value these services offer to their users when maybe a lot of people wouldn't have a problem with, say, a friend tagging them in a photo anyway.
As things stand, taking the GDPR at face value, I don't see how it would be legal for a social network to retain any photo in which someone is identifiable if that person doesn't consent, unless that social network also took rather dramatic steps like avoiding any sort of automated processing and analysis of photos that might identify people in them, as well as removing features like letting a user tag someone who isn't a member of the social network.
> But if user A withdraws their consent for the social network to process personal data about them, user B's photo is still personal data about user A if the social network knows or could know that user A is in it.
Except that is not included in this. You are in fact not even supposed to retain data that could identify a user, after their account has been deleted, to match them against other data.
I don't know the law by heart right now, but even a year ago I was having discussions with people consulting on this about this very topic: what to do in such cases.
This law was not drafted in a vacuum where nobody looked at real world situations.
> This law was not drafted in a vacuum where nobody looked at real world situations.
Sadly, given that we're talking about an EU law in a technical field, I suspect that what you just wrote is actually quite close to what did happen. That would be consistent with other recent EU rules affecting creative and technical businesses. In some cases, even senior EU and national government figures have admitted that those involved hadn't seen major unintended consequences coming at all, at least not until it was too late in the process to avoid them.
Essentially, the EU often exhibits good intentions and its laws might be made with laudable overall goals, but it frequently produces poor implementations that haven't been thought through in enough detail before legislating. So far the GDPR is shaping up to be another textbook example, with perhaps a side order of political football so the EU can beat up big US tech businesses because the EU's business environment hasn't resulted in creating equivalent services of its own.
This doesn't seem healthy for either our tech industry or our society as a whole to me. I'm actually a rather strong advocate of privacy online, but rules intended to protect it do need to be reasonably clear and practical or they're not going to be worth very much.
The track record of EU legislation is generally rather good, and based on the feedback I have seen, in particular from American companies on GDPR, it's already succeeding at what it's there to do: raising awareness that data is not an asset but a liability.
We will see soon enough how this plays out. From where I'm standing, I very much welcome this development, because it's the first time I've seen an actual attempt by companies to do something in the interest of the customer when it comes to data.
Counterpoint: When the Internet went "dark" over SOPA, people noticed, and SOPA died.
If everyone in Europe woke up one morning next year to find Facebook saying that as a result of new EU law they had been required to turn off their service and please would everyone write to their representatives (using the form conveniently provided below) to ask why they'd done this, what do you think would happen?
For a lot of people, if not nearly all of them, it's a lot less effort to migrate to a new social network with your friends than it is to lobby to have some laws repealed, especially when it's clear that these laws are in the interest of the public and have been put together with serious thought and consultation.
What if they said: "Sorry, but starting next month we will have to charge you for accessing parts of Facebook. We will have to do this until the EU changes the law."
I bet customers would rally against the law rather than pay microtransactions.
Or use a competitor who complies? Maybe Google+ would finally take off in the end? Don't underestimate how important privacy is to many Europeans. People are seriously worried about how their data is used.
> Don't underestimate how important privacy is to many Europeans. People are seriously worried about how their data is used.
Sadly, I think it's quite clear that most people are not that worried about it, even in Europe. Or at least, if they are, they're willing to put up with it for the convenience of the services they get in return.
The real problems, IMHO, are a lack of awareness particularly among non-technical people of what is really happening and its implications, and a lack of competition so that those who do value their privacy more highly can choose to protect it without giving up normal parts of modern life.
The advantage of having armies and police forces is that you can lock people up who don't adhere to your rules. Good luck having any employees in Europe if you decide to ignore their regulations!
The EU has a nuclear option for Google. Ireland and the Netherlands are both in the EU. If Google loses its incorporation in those countries, it might have to pay US taxes on repatriated holdings.
The Ireland and Netherlands subcompanies are an essential part of US corporate tax evasion (the "Double Irish with a Dutch sandwich").
The EU doesn't have the power to do that. The EU is more like a bunch of international treaties between sovereign states than an apex of power hierarchy.
That used to be true, but it's increasingly less so since the euro crisis, tbh. I've lived in two EU countries (native to one of them) and the perception is definitely that a lot of power has shifted, and not for the worse. Honestly, small EU countries love the EU for the strong currency and for the appearance of making smart legislation like this one, the ban on roaming charges, etc.; even the Windows-without-IE thing was a good idea, if poorly implemented. Without the EU, small countries would have little to no power over global corporations anyway, so this is essentially the EU delivering on its promise.
Couldn't they just take the money owed the usual way, by freezing accounts, confiscating assets, etc.? None of these corporations can feasibly avoid that without exiting EU markets altogether.
Plus, as long as both the EU and the US are part of the WTO, they are required to enforce each other's laws. Otherwise Europe could legitimately raise tariffs on the whole IT sector...
> None of these corporations can feasibly avoid that without exiting EU markets altogether.
Ultimately, if the fines or compliance costs get too high, that's exactly what happens. A company that does no business in the EU is not subject to EU rules. But because it is such a large and wealthy market, the threshold is very high. If Kazakhstan passed a similar law, it'd be a very different story.
Probably the EU states would unceremoniously and without further warning walk into their EU datacenters and seize every piece of hardware they own, which is quite a lot.
The resulting outages would impact their US customers, who would sue under US jurisdiction.
The reality is that the 2% or 4% figure depends on which part of the GDPR you have breached; it isn't a tiered approach. Breaches of core requirements (for example, valid processing grounds) will, technically speaking, attract a 4% fine straight away.
Art 83 covers the different triggers.
Having said that there are various schools of thought around levels of potential fines, including from different data protection authorities in the EU.
The fact that the ICO in the UK has yet to levy a maximum fine, despite egregious violations, is at least one factor suggesting fines will not increase dramatically.
Also, there are not only increased fines to consider but also an increased focus on compensation to data subjects in the event of a violation of their rights (together with evolving case law to support that, in the UK at least).
I'll eat my shoe if a regulator ever gets even 2% total global revenue out of any of the top 100 software companies based on this.
In reality a bunch of small shops are going to go bankrupt because they don't have a "GDPR implementation" position filled and they didn't do some report properly.
Yes, the EU regularly kicks the asses of companies to help consumers in Europe. I'm sure we won't get the same protection on pesticides, pharmaceuticals/medicine, GM, white goods, monopolistic practices like this, roaming charges, etc. etc. once the UK leaves the EU.
> For startups - GDPR is like Y2K at the time, a GOLDMINE.
Unless you're a start-up that handles personal data, in which case it's another bureaucratic overhead that also carries a risk of draconian penalties if you make a mistake, even if you have perfectly sensible reasons for working with that data and you're not doing anything at all surprising or dubious with it.
Of course, the EU has form for this, given its similar approach to both consumer protection and VAT rules in recent years. It does seem to have an unhelpful habit of imposing regulations at big business scale to deal with big business scale problems, but not considering that both of these may be wildly disproportionate for smaller businesses.
> a risk of draconian penalties if you make a mistake, even if you have perfectly sensible reasons for working with that data and you're not doing anything at all surprising or dubious with it.
GDPR explicitly mentions that "warnings" and "periodic data audits" should be considered measures to take before the fine is applied.
It also says[0] that when deciding on the fine, due regard shall be given to nature, gravity, and duration of the infringement; degree of cooperation, intention or negligence, actions previously taken by the authorities, previous infringements, nature of data etc.
It seems unlikely that anyone in good faith would get screwed by this.
> It seems unlikely that anyone in good faith would get screwed by this.
Most small businesses don't have a lot of outside investment, if any. If you're running a bootstrapped business funded with your own savings, the last thing you need is to have to spend significant time and money figuring out where you stand on GDPR compliance and what you have to do with regards to any other organisations that your business in turn depends on. The theoretical fines aren't the most immediate problem for small businesses; the overheads are.
> Keep in mind that the EU/US Safe Harbor agreement got axed due to a lawsuit of a single student from Vienna against Facebook.
Almost every US Supreme Court decision comes from a single person challenging something. Are you suggesting that below a certain size, one shouldn't be allowed to sue in court?
> For startups - GDPR is like Y2K at the time, a GOLDMINE. So much opportunity to sell solutions, from real to snake oil. GDPR compliance is already and will continue to trigger a massive wave of investment.
I'll start by saying that I have found myself leaning in favor of this law -- I've made a much longer comment about it and won't rehash it, but I wanted to make sure my statements that follow aren't taken as blanket anti-GDPR sentiment but rather as a devil's advocate response.
I take issue with the quoted statement because it ignores the downside for startups[0]. Companies like Google, Facebook et al. have the money and time to hire teams of lawyers to find a way to work around these regulations in a manner that maximises their ability to continue tracking while minimising their risk of getting smacked by the hand of the law. Getting hauled off to court won't bankrupt them, and they have the legal teams to probably win regularly enough. Even barring that, they have the finances to adjust their business to be fully compliant (to whatever degree business adjustments are required) without going bankrupt.
Joe's Advertising Supported Free Service does not. Joe's not going to start his own ad network and start mining personal data for it -- it's way too expensive to try to compete with Google/Facebook (and it was already way too expensive to do it, before). If he does, he's the one who's going to get hit with the second and third strike; probably from that "single pissed off German customer".
Investing in firms that touch this space will be met with far more skepticism. I wouldn't be surprised if any company that simply asks for a user ID and password faces a little scrutiny from investors, at least until the regulatory atmosphere is understood (I doubt it'll be that extreme for terribly long, but one bad court ruling/fine laid out where it wasn't expected could change that). The cost of establishing many, many kinds of companies will now increase, because the risks are high enough that you can't go to market without having your legal bases covered on this one. That money has now been shifted to a business which -- potentially -- is selling snake oil (and startups are going to be more likely to do business with that snake-oil salesman, since they'll probably also be the least expensive).
Then there's the "unintended consequences". Here's a crazy hypothetical, but a lesser variation of it is plausible if this were a US law: Some individual exercises his free-speech rights and chucks something up on the Internet that has a bunch of horrible things on it, say, like 'a guide on how to slaughter and prepare kittens for healthy and inexpensive dinners'. Some kid reads it and kills/eats his neighbor's cat. The guy didn't do anything illegal, really, but his web host knocks him off the web and people are calling for blood. He happens to use an ad-network, but doesn't, himself, collect personal information. However, this ad-network does, and at one point was nailed under this law. He uses the ad-network, so some overzealous prosecutor figures out a way to bring it in front of a judge that he's responsible for what this third-party did and should also be prosecuted. At the height of outrage, a jury isn't hard to find to connect the dots[1].
[0] And hey, that's fine, you're an internet commenting individual just like me -- we don't have to present both sides of the story -- that's the replier's job.
[1] Yeah, I took that a little far, but I think back to when the Columbine massacre happened and everyone believed it was FPS video games that warped those evil children's "precious little minds". It took all of two seconds to call for banning violent video games (constitution be damned), many idiotic and ultimately overturned laws were passed, and if there had been some way to haul the developers who wrote the game off to jail (I think they were blaming DOOM at the time), it would have been possible within those first few weeks.
> Joe's Advertising Supported Free Service does not. Joe's not going to start his own ad network and start mining personal data for it -- it's way too expensive to try to compete with Google/Facebook (and it was already way too expensive to do it, before).
Since it was already too expensive to roll his own, Joe's site will simply include content from whatever ad network he chooses. All he has to do is make sure that the network is GDPR compliant. If he didn't think to do that, the first warning should give him the necessary time to find a different ad network or to specially handle EU-based customers.
Your "kitten meal" example also wouldn't work, since the first violation doesn't carry a fine, and if the website has been taken down, there is no possibility of a second violation.
So I think the changes won't affect ad-supported businesses themselves all that much, they will simply lead to a restructuring of the supporting infrastructure to become compliant. In the worst case, every website will have a huge "opt in to tracking" modal you have to click, like the cookie policy.
In that case it's the EU's way of telling you that your startup idea isn't that great. Basing a startup idea on illegal techniques is not a good idea. Snapchat could've succeeded without that data so there's no reason why startups cannot adapt.
> Strike 3 - 4% of your TOTAL GLOBAL REVENUE (or 20mil EUR, whichever is higher)
Why would they impose a 20M limit and not stick to 4% of revenue regardless of it...
edit: I'm not sure what the downvoting is about... it's still a limit, a limit that means a company turning over €0-500M will pay up to a €20M fine. Not so bad the closer you get to €500M, but not so great if you're a small company, especially as the regulation is so open to the "law of unintended consequences" right now and only larger companies will have the funds/manpower to navigate it.
I understand your point, but I still think it's wrong. The minimum on the third strike is so a little-to-no revenue startup or shell company or something doesn't willingly abuse data because the fine will be 4% of nothing.
While I see some of the concerns about the _technicality_ of the law as completely legitimate, it still bothers me that so many people reject the whole spirit of this law, and cannot put the negative of "tax on startups" against the much greater good of personal privacy.
I've just started a business myself, and this regulation affects my company too. It makes development costlier; it'll take from the precious little time we have to spend on compliance paperwork rather than work on our core business. In the short run, it does hurt our chances of success.
Yet, none of the trouble is even comparable to what's to be gained here. And it bothers me (though doesn't surprise me) that some people don't see that.
It also bothers me that such vocal opposition barely comes up when the discussion is just about bigger companies such as Google and Facebook. How can we expect "un-evilness" from bigger companies when we're barely willing to do anything in that regard ourselves?
I wonder how this will interact with accounting standards. Ledgers have historically been immutable. If I buy something from you and then demand deletion of my data, do you need to revise all previous financial statements to make it appear as if the transaction never occurred?
The spirit of the law is nonsensical. It makes all commercial activity illegal, to the extent that all businesses keep records of their sales, inventory, etc. which reflect the activities of their customers, employees, and suppliers.
Possibly because personal privacy as a "much greater good" is open to debate. People place wildly differing values on that property.
Google's current ecosystem of data-sharing means that Assistant can make educated context guesses on what I mean when I talk to it based on my browser history and map navigation history. If the new privacy constraints damage that passive interconnection, that's not a net good for me.
> ... privacy as a "much greater good" is open to debate
I agree, but that's a different topic, really. The comments here aren't about the (un)/importance of privacy. The main debate seems to be either about the technicality of the law and its possible unintended consequences, which are legitimate concerns, or they're about how "this is gonna make my job much harder," which is not really a legitimate concern in this context, and those comments were the ones I was talking about.
> Google's current ecosystem of data-sharing means that Assistant can make educated context guesses on what I mean when I talk to it based on my browser history and map navigation history. If the new privacy constraints damage that passive interconnection, that's not a net good for me.
I think we're overestimating a technical difficulty here, and downplaying a moral principle.
Providing a personalised service without storing large amounts of personal information in a central location is not impossible. It's just technically harder to do.
And even if it was impossible, then still, we need to sort out the moral consequences first. Not by banning technological progress of course, but perhaps by bringing more oversight to corporations. Or by making sure that people of lower socioeconomic background aren't hit harder than the wealthy.
I encourage a little more thought before cheering this on as a win. While GDPR isn't as ridiculous as the Cookie Law, it still shows that the EU/EC don't understand the technology they are trying to regulate, and it comes at a huge cost to tech companies.
Take the right to be forgotten. First of all, it should be common sense that no one has the right to force legitimate news articles to disappear because they don't like the content, but that is what the EU has ruled should happen.
I get the desire to have a company forget about you, and remove all the personal information they have. It makes sense from a personal standpoint. But how do you do it technically?
If you follow GDPR strictly you would need to be able to purge the data from your backups. Now most backups are considered immutable, so you aren't going to do that, meaning you need a way to ensure that "forgotten" users never get restored.
But how do you even delete the live data? Does the tech company you work for have the ability to delete all traces of a user from its systems, cleanly severing all relationships with other objects? Do you have the ability to retrieve everything you know about a specific user and provide it to them? You will need to write the code to do this.
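For a sense of scale, here's a minimal sketch of what that code might look like, assuming a simple relational schema where every table holding personal data carries a user_id column (the table names are invented):

```python
import json
import sqlite3

# Hypothetical tables that hold personal data, each keyed by user_id.
PII_TABLES = ["profiles", "posts", "photos", "messages", "login_events"]

def export_user(conn: sqlite3.Connection, user_id: int) -> str:
    """Right of access: gather everything held on one user as JSON."""
    dump = {}
    for table in PII_TABLES:
        cur = conn.execute(f"SELECT * FROM {table} WHERE user_id = ?", (user_id,))
        cols = [c[0] for c in cur.description]
        dump[table] = [dict(zip(cols, row)) for row in cur.fetchall()]
    return json.dumps(dump, indent=2, default=str)

def delete_user(conn: sqlite3.Connection, user_id: int) -> None:
    """Right to erasure: remove every row that references the user."""
    for table in PII_TABLES:
        conn.execute(f"DELETE FROM {table} WHERE user_id = ?", (user_id,))
    conn.commit()
```

Real systems are messier: denormalized caches, analytics pipelines, and logs all hold copies that a simple per-table sweep won't reach.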
There is a good chance your little startup that isn't cash flow positive will have to spend $1 million of its VC money on becoming GDPR compliant.
Do you sell a SaaS service to businesses, and those businesses send you their customers' data? Then you are the processor and they are the controller. Cool, less for you to do, sort of. Except that the controller must agree to every sub-processor you use. Want to switch from AWS to GCP? You can only do it if all your customers agree. Want to try out a new metrics or logging service? If it will hold any PII, you can't do it without customer (controller) permission.
You will basically need to hire full-time compliance officers to deal with this. The big tech companies already have compliance officers, but GDPR is so massively invasive to businesses that even small companies now need compliance officers.
> First of all, it should be common sense that no one has the right to force legitimate news articles to disappear because they don't like the content, but that is what the EU has ruled should happen.
No, it is about deleting personal data attached to your user account, not "news articles". This thing intends to make the "delete my account" button to actually, you know, "delete my account", instead of fake-deleting it by setting a "deleted" flag and telling me that everything is gone now while still keeping gigabytes of data associated with me in your database.
> [...] meaning you need a way to ensure that "forgotten" users never get restored.
If this is considered a hard problem, then I assume storing a list of deleted users in a separate place and immediately purging those users from the backup after a restore must be some kind of rocket science.
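A sketch of that deleted-users-list idea, assuming a relational store (the table and database names are made up):

```python
import sqlite3

def restore_and_scrub(backup_path: str, tombstone_path: str) -> sqlite3.Connection:
    """Restore a backup, then immediately re-delete any users who were
    erased after the backup was taken (tracked in a small separate DB)."""
    restored = sqlite3.connect(backup_path)
    tombstones = sqlite3.connect(tombstone_path)
    erased = tombstones.execute("SELECT user_id FROM deleted_users").fetchall()
    for (user_id,) in erased:
        restored.execute("DELETE FROM users WHERE id = ?", (user_id,))
    restored.commit()
    return restored
```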
> There is a good chance your little startup that isn't cash flow positive will have to spend $1 million of its VC money on becoming GDPR compliant.
I wouldn't call it "becoming GDPR compliant"; I would call it "building a sound database structure". Because if you are unable to purge all data associated with one of your users' accounts from your system without destroying the integrity of the rest of your data, then you obviously have a half-baked system on your hands that lacks a core feature: actually deleting accounts. And you surely should spend some of your money to refactor this crap into a long-term viable solution while you are still small and agile enough to do that. Because it's only going to be way more expensive later on...
You missed the point: if I take an image backup of a disk and store it, even in encrypted form, and it contains John Doe's account, and he comes along and asks for his account to be deleted, I would have to purge not just my database record but the backup images from the past as well. That invalidates ALL of my backups. It's often not practical to back up INDIVIDUAL users.
This is very easy to work around. Associate each user with an encryption key and store it in a separate database, which is backed up with some retention time. It shouldn't be huge, so that's not a big problem. Encrypt all data related to the user with this key. When you make a backup, that data is stored in the same encrypted form. When you delete a user, just delete their record together with the key. After this, the user's data is virtually irretrievable and for all practical purposes deleted.
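A minimal sketch of that scheme, using Fernet from Python's `cryptography` package as one concrete cipher choice (the schema and names here are assumptions):

```python
import sqlite3
from cryptography.fernet import Fernet

# Small, hot key database; backed up separately with a short retention window.
keydb = sqlite3.connect("user_keys.db")
keydb.execute("CREATE TABLE IF NOT EXISTS keys (user_id INTEGER PRIMARY KEY, key BLOB)")

def encrypt_for_user(user_id: int, plaintext: bytes) -> bytes:
    """Encrypt with the user's own key; only this ciphertext lands in backups."""
    row = keydb.execute("SELECT key FROM keys WHERE user_id = ?", (user_id,)).fetchone()
    if row is None:
        key = Fernet.generate_key()
        keydb.execute("INSERT INTO keys VALUES (?, ?)", (user_id, key))
        keydb.commit()
    else:
        key = row[0]
    return Fernet(key).encrypt(plaintext)

def forget_user(user_id: int) -> None:
    """Crypto-shredding: once the key is gone, every backed-up copy is noise."""
    keydb.execute("DELETE FROM keys WHERE user_id = ?", (user_id,))
    keydb.commit()
```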
Obviously, you can't keep insisting on immutable backups. Instead, you'll have to modify the backup file so that a single key becomes invalid.
This does make your backups slightly less reliable, because it's one more thing that touches them, but if you do a sane implementation and exhaustively test it, the risk is manageable.
I'm also not sure you really need to keep that many backups of this file. Replicate it and make sure you can roll back when your replication is borked, but if you really need to restore your database from months ago, using a newer list of encryption keys shouldn't be a problem.
Does your data not have a lifetime anyways? Do you really need to store everything forever?
If you have a system that just tracks changes and one that occasionally records full state, then after you delete someone from prod you could simply overwrite old full-state backups with your new, post-deletion backup and update your change-only backups to replace data about that user with `deleted`.
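A rough sketch of that change-log rewrite, under the assumption of a simple event-log shape (the field names are made up):

```python
def scrub_change_log(events: list[dict], erased_user_id: int) -> list[dict]:
    """Replace an erased user's change events with contentless tombstones,
    preserving sequence numbers so the log still replays cleanly."""
    return [
        {"seq": event["seq"], "user_id": erased_user_id, "type": "deleted"}
        if event.get("user_id") == erased_user_id
        else event
        for event in events
    ]
```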
There is also a pretty easy cryptographic solution for your offline backups:
In your backup, encrypt each user's data using a per-user key (AES or something). The keys will be tiny, so you can store the keys in a hot database. When a user deletes their account, simply purge the user's key.
Tada - like magic all of that user's data on your tape backups has turned into unreadable noise.
I think you're assuming that most companies roll their own backup systems. You're underestimating the number of companies whose backup system is something like Windows Server Backup or Veeam, which doesn't have that kind of granularity.
This makes me very nervous about data loss. If I accidentally wipe the hot database then I've effectively deleted all my backups. It also makes corporate ransomware attacks much easier.
So keep a backup of your hot database in S3 or something, and make a workflow by which you can periodically update it. Or make a write-only backup of your encryption keys every day and only keep the backups for a week (or a month, or whatever the legal requirement is).
Complying with this requirement will require us as an industry to make some changes to how we store user data. But the amount of work each company needs to do is proportional to the complexity of our existing backup system. If you're a tiny startup and don't keep offline backups at all, you can just delete the user from your database. The more complex & rigorous your backup system is, the more complex your user deletion system will need to be.
It's a hassle, but no more so than any other requirements we deal with on a daily basis.
I don't mean this to come across as disparaging, but for people saying things along the lines of "It's a hassle, but no more so than any other requirements we deal with on a daily basis", I can't believe you have any experience with data recovery at a large company.
A fundamental axiom of data recovery is that you don't really ever "delete" anything, because accidentally losing data is considered such a horrible problem. So virtually all backup systems consider backups fundamentally immutable, because everyone knows how just one small bug in something designed to modify a backup could fuck the whole thing.
I don't really disagree with the spirit of the law, but I honestly think the backups problem is pretty much a technical impossibility for large companies; it will basically just get ignored (the backups part, at least).
> make a write-only backup of your encryption keys every day and only keep the backups for a week (or a month, or whatever the legal requirement is).
A month would be better than a week, but it still allows a situation where the wrong user account is accidentally deleted and the mistake is not noticed for more than a month.
I think the best approach would be for the EU regulation to give the company 90 days to delete the user's data. The company can then just have a cron job that deletes older backups. Simple.
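That cron job could be as little as this sketch (the paths, file naming, and the 90-day window are assumptions):

```python
import time
from pathlib import Path

RETENTION_DAYS = 90                     # the 90-day deletion window suggested above
BACKUP_DIR = Path("/var/backups/app")   # hypothetical backup location

def prune_old_backups() -> None:
    """Run daily from cron: drop backups past retention, so any erased
    user's data ages out of all backups within the 90-day window."""
    cutoff = time.time() - RETENTION_DAYS * 86_400
    for backup in BACKUP_DIR.glob("*.tar.gz"):
        if backup.stat().st_mtime < cutoff:
            backup.unlink()
```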
There's been a wave of targeted ransomware attacks recently that spend time surveilling and infiltrating their targets before holding their data for ransom. If companies take your proposed approach, some of them will find their backups have been compromised and be forced to pay up by ransomware attackers. It's simply inevitable.
As long as you have clearly defined retention policies I don't think saying, "your account has been deleted. It will take 90 days for the deletion to filter though our backups. After that point all of your data will be gone forever" would go against the spirit of the law.
If you mind answering me this, I'm having a hard time figuring it out myself: Will the GDPR mean I'll finally be able to completely delete myself from Facebook?
Doesn't it just affect companies which rely heavily on lack of privacy for monetisation? I think that's sort of the point - that your business should not rely on tracking individuals and selling that information without their consent to gov/private bodies. It's obviously a huge change, since so many big tech players rely on this to make profits. But the internet will be a much nicer place for everyone else if right to privacy is protected.
"the application of the GDPR will prevent them from using these personal data for any further purpose unless the user permits."
This seems incredibly broad from the article and would touch nearly every startup. Maybe there are limits on the businesses affected? Otherwise I'm not sure how one could formally define "rely heavily on lack of privacy for monetisation."
This law will affect everyone but it seems like it will devastate companies that have no monetization strategy other than collecting user data and either selling it or mining it for targeted ad placements.
https://unroll.me/ is a good example. They provide a free service to users but make money by leveraging their total access to your inbox to sell ad analytics and competitive intelligence.
It could also devastate small companies at an early stage which need to dedicate resources to get into compliance rather than building their product.
It might hurt the bad players but it really depends on how readable the text will be to the average user. If it's going to be a checkbox it will likely not do much.
Doesn't it affect everyone with a user account system, storage of user-generated data, error logging, usage and performance metrics, etc?
HN itself is illegal under EU regulations because you can't delete old comments, and we know that the admins know how many RPS they are getting, but I haven't specifically opted in to using records of my HTTP requests for traffic monitoring.
You need to research the right to be forgotten as it applies to news articles, it doesn't work in the way you describe.
And as far as this stuff being difficult to do: sure, but isn't it worth doing? Why shouldn't a customer have a say in which cloud provider hosts their data? Why shouldn't we be able to make sure no data is kept about us after we stop using a service? Like anything novel in software, it only seems hard because we haven't done it before, but in a ground-up design it's not that hard to add GDPR compliance, and a few years down the line this stuff will be business as usual.
You don't need a compliance officer, but you do need a security officer, and their job now also involves data lineage, not just data security. You already should have that person if you're building a SaaS solution.
> You need to research the right to be forgotten as it applies to news articles, it doesn't work in the way you describe.
I was referring to Google vs. Costeja, which I realize isn't GDPR, but an EU court did rule that way.
> You don't need a compliance officer, but you do need a security officer
I strongly believe that compliance and security are two very different things that are only slightly related. They come at it from very different perspectives. A security engineer should be doing threat modeling and protecting against threat vectors. A compliance officer may consult a security engineer, but ultimately their job is to check boxes to make sure regulations are followed. I think compliance staff are more appropriately part of a legal team than an engineering team.
That isn't to say compliance officers aren't useful. Having a strong compliance voice can be great. I've seen companies without a compliance officer reduce security because an auditor told them regulations required something. A good compliance officer would have been able to push back against the auditors, pointing out what regulations actually require, and working with the security engineers to come up with a solution that meets regulations and actually improves security.
To be more precise, I think the problem they are talking about is that they may still need to remove it from search results even if the original source still has the right to continue to process the info. A compromise would be to eventually void this right after a certain time (maybe 2020) via a GDPR exception for example.
> Take the right to be forgotten. First of all, it should be common sense that no one has the right to force legitimate news articles to disappear because they don't like the content, but that is what the EU has ruled should happen.
That is not correct. The right to privacy is not an absolute right. It has to be balanced against other rights, such as the right to free press. In a normal news article case, free press would prevail.
> There is a good chance your little startup that isn't cash flow positive will have to spend $1 million of its VC money on becoming GDPR compliant.
I advise a lot of small customers to implement manual procedures to retrieve or delete data in case such a request comes in, and to set up a basic privacy and security policy, which they should have had already. This doesn't cost much.
> Except that controller must agree to every sub-processor you use.
This can be a generic agreement where the processor notifies the controller.
> Want to switch from AWS to GCP? You can only do it if all your customers agree.
Not true, you do however need to be able to tell customers what companies receive their data. Which can be quite a challenge with sub-sub-subcontractors.
> Want to try out a new metrics or logging service? If it will have any PII you can't do it without customer (controller) permission.
Not true if the processing agreement contains a clause that instructs the processor to perform metrics or logging. Customer consent is often not needed unless it has a big impact on their privacy. Consent is only one of the legal grounds.
> You will basically need to hire full-time compliance officers to deal with this. The big tech companies already have compliance officers, but GDPR is so massively invasive to businesses that even small companies now need compliance officers.
If this were true I'd be a lot busier. It would be wise for companies to assign responsibility for privacy and security, but it doesn't always need to be a full-time job, or one requiring a legal background.
> You will basically need to hire full-time compliance officers to deal with this. The big tech companies already have compliance officers, but GDPR is so massively invasive to businesses that even small companies now need compliance officers.
Or they could, I don't know, just not collect that data in the first place.
It isn't that simple. When people think about these privacy laws that is what they think about, some evil corporation tracking tons of private info.
The company I work for has no ads. It does no analytics on personal info. It does nothing you would care about. What it is is a SaaS product for businesses. We aren't the controller, so we don't need permission from end users; our customers need to get permission from their customers. But end users can ask our customer (the controller) to delete data, and our customer can ask us (the processor) to delete it.
Fine, we now have an engineer building GDPR features instead of features that benefit our customers. Oh well. But it isn't as clear a win for end users as people make it out to be when they only come at it from an anti-Google and anti-Facebook perspective.
It sounds good in theory, but things like Article 28 certainly make it harder to move quickly.
> The processor shall not engage another processor without prior specific or general written authorisation of the controller. In the case of general written authorisation, the processor shall inform the controller of any intended changes concerning the addition or replacement of other processors, thereby giving the controller the opportunity to object to such changes.
> Fine, we now have an engineer building GDPR features instead of features that benefit our customers.
I am not finding any shred of sympathy for your story. To me this sounds approximately as evil as saying you are a pipeline company having to comply with all of those pesky environmental and occupational regulations by spending money on worthless safety features for people working and living on or around the pipe, and that none of this benefits your customers: the oil companies.
Yes: you built a bunch of software around a specific set of assumptions about what you were allowed to do, and in the process you took advantage of cost savings by ignoring externalities such as information privacy, and now that this law exists you will be negatively affected. However, the point of this law is to say what you were doing was NOT OK and that future companies should not do this and existing ones had better figure out a way to stop doing this.
In a perfect world, everyone would have built these features into their systems without this law, but they didn't, so now you all are going to get punished. If your business is still possible (and I particularly don't care if it isn't) and any of your competitors had spent the effort you consider wasted making sure this was possible in the past, then I am not just OK with but extremely delighted that they will now have a competitive advantage over you as you scramble to retool.
You are essentially asking for sympathy here without first taking a step back and showing that any of what you were doing was not just expedient for you, and not just beneficial to you, but that it was also simultaneously what people other than you deserved: the presumption here is that you are the villain, and it is really hard to ask for sympathy from that position, and I can tell you all you are doing from my reading is digging yourself a deeper pit.
You seem to be responding to an entirely different GP post to the one I read, which seemed pretty clear that the GP's company isn't doing the kind of tracking and analytics that a lot of people might say were "NOT OK".
It's easy to post bold privacy advocacy from the cheap seats, but I suspect you wouldn't like a world where these new rules really were enforced to the letter. Many of the organisations whose products and services make your life better in some way would most likely cease to exist, and the economy on which your personal quality of life depends would surely take a huge hit.
> GP's company isn't doing the kind of tracking and analytics that a lot of people might say were "NOT OK".
GP's company isn't doing the tracking and analytics, but it is pulling data from companies that do. Therefore, regulations that affect GP's customers affect GP. This is right and proper, and I don't see what the problem is.
> This is right and proper, and I don't see what the problem is.
The problem is that it will be almost impossible to comply with the letter of the law in this case without either imposing prohibitive levels of overhead or disregarding other good practices like logging diagnostics and keeping robust backups in case things go wrong.
There's a saying about babies and bathwater, but this is more like requiring the entire house to be rebuilt in order to throw out the bathwater. Sure, you can do it, but it's much easier to say that when it's someone else's manual labour being paid for by someone else's money that will make it happen.
> The problem is that it will be almost impossible to comply with the letter of the law in this case without either imposing prohibitive levels of overhead
If the business requires this much overhead in order to internalize the data-externalities that it's generating, the business does not deserve to exist. Privacy violations are an externality, just like pollution, climate change, or deforestation. The way we deal with these externalities is through regulations and taxes that force businesses to internalize the costs they're imposing upon the rest of us. OP's business is like a chemical plant that gets its feedstock from polluting suppliers. If pollution regulations make the feedstock prohibitively expensive, then it's a signal that the existing process for making the product wasn't providing a net economic benefit to society, and that the process needs to be either reengineered or shut down. By the same token, if privacy regulations make your product unprofitable, then your business model either needs to be reengineered, or you need to shut down.
There's no rule saying that cities have to be covered in smog. Likewise there is no rule saying that online media has to be funded through advertising. In the case of pollution, externalities that appeared to be inevitable turned out to be the result of choices resulting from economic incentives. When regulation changed the incentives, the externalities were massively reduced (as evidenced by the fact that Pittsburgh today has some of the best air quality in the US). I'm confident that the same is true of online media. The only reason that it's funded by privacy-violating advertising is because privacy-violating advertising is the cheapest and easiest business model. But if you take that off the table, businesses will be forced to innovate and come up with new payment structures that better align their interests with those of their customers.
If you want to work with other people then at some point you will often have to share some information with them so you can work together. Privacy can't be measured only in terms of absolute control over who has information at all. As a practical matter, it has to be more nuanced, also working at the level of how someone is allowed to use information given to them.
Now, there's plenty of scope for debate about that, for example in what uses should be accepted as reasonable by default, what should require explicit consent, and what should be subject to someone opting out even if it's allowed by default. Much of the data protection framework in Europe, both past and near future, exists in this space.
But there also has to be a balance, because if you start assuming ill intent and trying to prevent anyone from doing anything with personal data just in case it might be leaked or abused in some hypothetical future, you stop being able to work with other people effectively at all. In this context, paranoia is no more helpful than complacency.
> Fine, we now have an engineer building GDPR features instead of features that benefit our customers.
GDPR benefits customers. What you say is similar to justifying not providing good security on the grounds that the resources would be better spent benefiting customers, or justifying not providing safety features in cars. After all, incidents happen not that often.
> The company I work for has no ads. It does no analytics on personal info. It does nothing you would care about. What it is is a SaaS product for businesses. We aren't the controller, so we don't need permission from end users; our customers need to get permission from their customers. But end users can ask our customer (the controller) to delete data, and our customer can ask us (the processor) to delete it.
So you do no analytics on personal info, but if someone wants their personal info deleted, you have to delete some of your data.
How does that make any sense? Doesn't that imply that you are, in fact, using their personal info? Or do you subscribe to a moral theory where e.g. browser history is not "personal info" and this is a gripe about how regulators disagree?
Based on what they said, it sounds like they store personal info on behalf of their business clients, but don't look at it themselves. Their point is that this law still requires them to implement the granularity to delete individual records so their clients can be compliant.
(I don't think it absolves them of any responsibility to implement privacy measures, but it does at least make sense.)
I don't understand how this is a surprising feature. Shouldn't every decent system be able to delete all data of one user without affecting other data?
That companies haven't even thought of being able to delete user data makes the law even more important. It should be common business practice to delete data if a user asks, not something technically impossible.
No, because most systems facilitate interactions. Do you get to erase transactions from other people's accounts? Messages from other people's inboxes? Posts from other people's comment threads? None of these things are obvious.
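For what it's worth, the two positions can coexist: delete what is purely the user's own, and anonymize their side of shared records rather than destroying other people's copies. A toy sketch against a made-up sqlite3 schema (the profiles/sessions/messages tables and their columns are invented for illustration, not from any real product):

    import sqlite3

    def erase_user(db: sqlite3.Connection, user_id: int) -> None:
        # Purely personal records can simply go.
        db.execute("DELETE FROM profiles WHERE user_id = ?", (user_id,))
        db.execute("DELETE FROM sessions WHERE user_id = ?", (user_id,))
        # Shared records (messages in other people's inboxes) keep the
        # interaction but lose the identity.
        db.execute(
            "UPDATE messages SET sender_id = NULL, sender_name = '[deleted]' "
            "WHERE sender_id = ?",
            (user_id,),
        )
        db.commit()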
> First of all, it should be common sense that no one has the right to force legitimate news articles to disappear because they don't like the content
I think a lot of EU people (I'm not one) would disagree with you here. The notions of privacy and of the goals of the criminal-justice system in several parts of Europe are radically different from the notions your "common sense" position is based on, which means that what seems "common sense" to them seems ludicrous to you, and vice-versa.
It may surprise some HN readers that 1984 was in fact not about a lack of privacy, but about the government's ability to edit the past. Privacy had a minor role next to the ability to redefine the truth. This is a deeply scary thing. I hope the gleefully self-righteous dystopia builders in this thread get to suffer the full pain of the "right to edit other people's memories" world they are building.
Thanks for phrasing your concerns about this regulation in such a nice way. I tried to articulate it in an earlier post but it was very controversially received.
Overall, this will be yet another additional challenge for EU-based startups in comparison with their US peers and keep armies of lawyers busy.
If a US company wants to handle the data of an EU citizen then they'll have to comply or be fined as well.
And if you're complying because you want users in the EU, then you might as well design your system to comply for everybody. And that's a Good Thing as more privacy is better.
I just want to point out it keeps armies of US engineers employed too. The amount of work I've had for EU companies implementing their EU regulations as a US tech worker is confusing and funny.
This is just another opportunity for easy money if you're in the states and enjoy/don't mind compliance work.
I believe that in many countries there have been laws for a while mandating the deletion of personal data after a certain number of years (for instance, I believe this is the case in banking). This is not new, and if your backup system does not comply with regulations, it's probably more of a design problem.
> This is not new, and if your backup system does not comply with regulations, it's probably more of a design problem.
Perhaps, but it's a design problem that approximately 100% of otherwise reasonable backup systems will have, and working around it comprehensively will be extraordinarily expensive.
Do we really want to impose rules that incentivize businesses storing personal data on behalf of their customers not to back that data up properly, in order to avoid any potential liability under the GDPR? Because that's exactly what this law does, as it stands.
It feels like the reaction of a VW engineer complaining that CO2 emissions regulations are making his job complicated.
Yeah, if you want the data, you need to be able to handle the data in a compliant way. The other solution is to not collect the data. Keep in mind that this is targeted at user tracking. Don't expect me to be sympathetic to the troubles of backing up all of that tracking data.
> Yeah, if you want the data, you need to be able to handle the data in a compliant way. The other solution is to not collect the data.
But since that data will include things like routine server logs, back-ups of customer records necessary for statutory financial record-keeping purposes, and so on, it's never that easy. With such a broadly written law, you could spend a small fortune on legal advice just to find out what your real, practical obligations are to make a good faith attempt to comply.
> Keep in mind that this is targeted at user tracking.
The intent might have been to go after user tracking, but unfortunately, that's not what the law they made actually says.
> If you follow GDPR strictly you would need to be able to purge the data from your backups. Now most backups are considered immutable, so you aren't going to do that.
Encrypt with a user-specific key, and destroy that key to drop all backups concerning that user.
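That approach is often called crypto-shredding. A minimal sketch, assuming the third-party cryptography package and a key store that is itself kept out of the immutable backup stream:

    from cryptography.fernet import Fernet  # pip install cryptography

    # Assumption: this key store lives in small, mutable storage, NOT in
    # the immutable backups that the encrypted data flows into.
    user_keys = {}  # user_id -> Fernet key bytes

    def encrypt_for_user(user_id, plaintext):
        key = user_keys.setdefault(user_id, Fernet.generate_key())
        return Fernet(key).encrypt(plaintext)

    def forget_user(user_id):
        # Dropping the key makes the user's ciphertext unreadable in
        # every backup at once, without touching the backups themselves.
        user_keys.pop(user_id, None)

The catch, as the sibling comment notes, is that this only works if the application encrypts per user before the data ever hits the backup pipeline.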
This only applies to home grown backup systems. If all you're doing are VM backups of your SQL database there's 0% chance you have the granularity to delete a single user.
"Then don't do that"
Sure, but now you actually have an imposition, because this kind of granular deletion can't be done with any commercially available backup system.
Don't assume the law makes a distinction between data which is destroyed and data which is irretrievable. I can easily see someone arguing that the data is not destroyed -- there's simply a very, very low chance that anyone will be able to retrieve it. You and I both know that argument makes no sense, but hey, it's the law -- it doesn't have to make sense, it just has to convince a few people with power that you're wrong.
"the right to be forgotten"
This is already in EU data protection law, where data must be forgotten once it is no longer relevant and a cool-down period has passed; it considers and covers things like your example.
"purge the data from your backups"
Again, you should already be doing this if you do business with the EU, or you are breaking the law.
"Do you have the ability to retrieve everything you know about a specific user"
Again, this is already in EU data protection law; you should already be able to do this, or you have been breaking the law.
I'm not a lawyer, so take this all with a pinch of salt; this is just stuff I need to know as an EU developer. Sure, it might be slightly more work for US tech companies, but "I can't be arsed" is not a valid reason to break the law. If you think it will cost too much then don't do it; there are plenty of EU companies that do.
>There is a good chance your little startup that isn't cash flow positive will have to spend $1 million of its VC money on becoming GDPR compliant.
This isn't actually that complicated if their software is designed from scratch with GDPR in mind. The current approach is to collect all the data you can with the intention of selling it to data brokers. GDPR discourages this. It shouldn't be that complicated or expensive if you store the minimum amount of information you can, which also cuts costs.
> The big tech companies already have compliance officers, but GDPR is so massively invasive to businesses that even small companies now need compliance officers.
This can be handled the way small businesses hire contract lawyers and accountants at an hourly rate. If you are small, you shouldn't do anything that might run up high fees from them.
This has nothing to do with the size of your company/startup and it has nothing to do with regulatory compliance. It is pretty simple at its core: if your company/startup gets breached and PII is leaked as a result, then you are liable for the penalty according to the general rules. I don't think anybody will argue this is a bad thing. If anything, it will help companies to be a little more careful with what sort of data they collect, because frankly, at the moment almost every company is guilty of collecting far too much personal data under the assumption that one day it may become useful. If you collect PII then you are liable for damages if you happen to mishandle it.
So here is how to avoid the GDPR penalties.
1. Get compliant - it is pretty much ISO27001 and it will cost you money
2. Don't collect excessive PII, and if you do, store it securely - after all, it is a very basic ask
3. Avoid collecting PII at all costs - think of it as another form of PCI
> The critical question for both businesses is whether users will click “yes”, when asked to consent.
Yes, users will click yes on basically anything. Facebook could put up a message that says "In order to proceed, click yes to give us half the money in your checking account" and the majority of Facebook users will still click through. Look at EU cookie warnings. Did any of those warnings noticeably impact anybody's traffic after the first week?
The difference is that cookie warnings only inform you, they don't offer a choice. The excessive use of personal data is something that most businesses don't need for their core purpose and something that users are certainly worried about (at least in Europe).
Eh, couldn't you make the argument that the Cookie warnings showed that a government body could meaningfully change the experience of the internet? As small as the warnings are, had anything like it been done before?
Not so much a loophole but the UK Information Commissioner certainly adopted a more relaxed approach to enforcement of the requirements of the cookie law compared to other countries.
And the law required prior informed consent to cookies with opt-out not generally being considered to be valid consent.
Talking from experience from a "new" EU country with solid IT scene: the "cookie law" is not enforced at all. Google would be probably the only one theoretically enforcing it (most of websites do show a message because they use Google Analytics), but it would go against Google's interest, so they are not really enforcing it either.
No, they won't. When the EU imposed new consumer protection rules not so long ago, it resulted in having to put some scary-looking legalese directly on your sales funnel pages if you were supplying digital content, even if said legalese was of no practical value to anyone including your customer. That alone was enough to hurt conversions, even if you didn't require something like a token checkbox to be ticked before continuing. The GDPR compliance requirements are potentially on an entirely different scale.
Clicking yes might not even be necessary: I recently went to a lawyer-oriented event (IANAL) that discussed the GDPR and it had a cheerful talk about "Alternatives to Consent".
The talk listed all the possible ways the law allows you to store/manipulate user data without requiring explicit consent... There are a shocking number, and IIRC they apply basically whenever you have a direct consumer relationship with some company.
IANAL, just currently wading through GDPR material.
As I see it, the most relevant processing conditions for companies offering a service and storing/processing data without gaining explicit consent are likely to be:
6(1)(b) - Processing is necessary for the performance of a contract with the data subject or to take steps to enter into a contract
6(1)(c) - Processing is necessary for compliance with a legal obligation
My understanding is that these are far from a blank cheque to store / manipulate arbitrary personal information. Specifically, the storage and use of data in question must be provably fundamental to either provision of the relevant service in (b), or meeting legal obligations in (c).
So yes, a company providing you a service will gain the right to store certain customer details demonstrably necessary to provide that service - say hosting your email. It won't however allow arbitrary use of such data to e.g. provide targeted advertising, since such use is not fundamentally required for performance of the service. This would require a specific opt-in (and from what I recall, a failure to opt-in cannot interfere with the provision of said service - not so clear on this however).
This is not even the fault of users. As a user, I want to get to the webpage I'm opening. I'll do whatever it takes to quickly reach there. I think "what's the worst they could have written". After all I haven't paid them anything. Expecting free users of Facebook to analyze a prompt on screen and legally consent to it is utter stupidity. Most people believe that clicking a checkbox isn't even legally binding.
In other words I know that clicking on a Facebook dialog box saying "you agree to give us 50% of your income" is meaningless and so I will click on it and use the website.
There is a big difference between the cookie warning and the GDPR. You could not opt out of cookie storage: either you used the service and accepted that they store cookies, or you closed the tab and left.
GDPR prevents companies from discontinuing service for users who wish not to be tracked.
I see this as yet another tax on (European) startups who have to invest even more resources into regulatory compliance.
This prohibition of freely using all available data will create a great arbitrage opportunity for the shadow economy, and will have a net negative effect on innovation.
I think prohibition has very bad side effects, and that MORE transparency is the way forward in politics, economy, and also society. This includes allowing businesses to use all the data they can get their hands on. People can produce infinitely more data than any google can realistically process.
I cannot understand why people who are otherwise for transparency and against prohibition are celebrating this as a big win against FB/AMZ/GOOG, as those players can easily shell out another $10M here and there to be compliant with this regulatory monster.
As an EU citizen I don't really care about a definition of innovation that involves monetizing my data against my will or knowledge. That's not innovation it's exploitation. So I won't miss your business.
There are all kinds of 'innovations' that don't involve collecting data about me, and the companies creating these kinds of products don't have to care one lick about the GDPR.
I'm not sure "transparency" means what you think it means.
Transparency is when powerful entities (companies, governmental bodies, elected officials…) disclose stuff about themselves. Allowing a business as big as Google or Facebook to use all their user's data as they see fit is not transparency, it's mass surveillance.
To be honest, the Facebook, Amazon, Apple, Microsoft and Google of this world aren't the ones that have been leaking the most personal data. It's the smaller companies and startups that have been leaking personal data at an alarming rate. What we are beginning to see now is the regulatory backlash. Regulations are inconvenient, expensive, and create barriers to entry (ask bankers!). But the current pace of gathering and then leaking personal information is just unacceptable and unsustainable. Between the two evils, I am not sure I prefer the current status quo.
Really? I find this fascinating, because people who have had PII leaked have been doxxed, physically threatened, had money stolen, and had their identity cloned. Lots of real-world harm.
Leaked info to governments, especially in places in the world where it can mean imprisonment or death is a real issue.
Processing of PII may give people the willies, but being annoyed by targeted ads seems like #firstworldproblems compared to people who've experienced real attacks via PII leaks.
It also creates opportunity costs for improving human society. How many human diseases could be cured if "processed" PII health data, anonymized statistics or case studies, were used by researchers freely? How much additional burden does it incur if, each time this data is transferred to a sub-processor, everyone must opt in again?
Would a world of perfect privacy be a utopia, or a nightmare?
Yeah but they have the resources to deal with these regulations. The parent comment complains about smaller companies having to deal with these regulations.
I completely agree. This is only hurting companies whose business is to track their users, which is why I personally have very little sympathy for people complaining that we're moving their cheese.
I don't think it is about ethics; it is about control. We as European businessmen were and still are unable to generate the same kind of innovation as the US, and now they try to solve the problem by pulling the US down to our level (at least the parts concerning the EU market).
I think this is the wrong approach, EU should try to make it easier for European businesses to compete with US-based ones.
But in the end we have another layer of bureaucracy on top of all the things a US startup has to worry about, and those mostly non-technical/non-innovative people want to be a part of the picture.
> We as European businessmen were and still are unable to generate the same kind of innovation as the US
Tracking people around the internet. Following them everywhere they go and saving their personal information, their political ideology, etc. That is not innovation; it's just trampling on personal rights.
> I don't think it is about ethics, it is about control.
Yes. About giving control of citizens' privacy back to the citizens themselves. It is not the government that decides who can own your data or when to delete it. It is European citizens who decide individually who should have their data and who should not.
> EU should try to make it easier for European businesses to compete with US-based ones.
If you give away freedom for economic gain, you don't deserve either one.
You make it sound as though the majority of Facebook users were making a conscious decision to trade personal information, whereas the usual thought process is "it's free! and everyone's doing it!".
Most people simply don't care. It constantly surprises me how paranoid people in the technology community are about their social interaction and browsing data.
Does that mean you avoid services like Facebook or Google? Many of those innovations are based on collecting data and using it to build customer segments for selling targeted ads. Without that model you may be asked to pay a few dollars, or see additional ads that are likely to be less relevant.
> Does that mean you avoid services like Facebook or Google?
No. Can't. I can't do my job without Google and my social life wouldn't survive without Facebook (messenger, groups, events). Using these services is not voluntary at this point.
I'm afraid it's a small "plenty", but as a European data point, I'm using an Android phone right now. I'm not logged in to any Google service with the exception of Google Play. My mail is not on Gmail. I'm not logged in to Facebook either and I don't have their app. However, I do have their Pages and Messenger apps because of work. Maybe I could ditch Messenger because almost nobody I know uses it. Everybody is on WhatsApp (another Facebook property) and a few people are on Telegram (developers). Could I do without WhatsApp? Maybe, but it would be hard. I'm almost not using Facebook anymore because it's become so boring after 10 years of the same stuff.
> I see this as yet another tax on (European) startups who have to invest even more resources into regulatory compliance.
This also affects American companies, and the degree to which it affects you primarily depends on how much of your business model depends on doing nefarious things with customer data.
Could American companies simply state that European citizens are not allowed to use their service due to stringent compliance laws, and add a checkbox stating something like, 'My residence is not subject to EU jurisdiction?'
Yeah, yeah, "giving up lots of customers," but if you're a small startup it might be attractive to target a smaller problem space to start with. Also, everyone does this already with, "I am older than 13" boxes since it's broadly illegal to collect childrens' data; you probably wouldn't even have to do any verification as long as you don't wilfully stick your fingers in your ears.
> Could American companies simply state that European citizens are not allowed to use their service due to stringent compliance laws, and add a checkbox stating something like, 'My residence is not subject to EU jurisdiction?'
That works for B2C situations but it won't work if you have European companies as customers.
I don't have any operations in the EU. I don't collect revenue directly from EU residents using my services. I have no presence in Europe, but some Europeans do use my website (it's a free Internet).
What's the EU going to do? They have no jurisdiction over me.
It does in most situations. For instance, Facebook is free, but Facebook still sells into the European markets, and through that GDPR is triggered. Through VAT registrations, B2B sales or any PE registration, GDPR becomes a requirement.
So, to clarify, if I self-host services (e.g. XMPP or mail or Git repos) and I give user accounts to my friends (whom, like me, live in the EU) for free, I'm not subject to GDPR because I'm not a business?
So.. Facebook is okay then if they move their data centers? For average users, I mean. They don't pay a cent to access facebook.com. I'm pretty sure that's not the intent of this legislation.
Of course this affects american companies, as the whole thing is primarily designed as a weapon for the EU against US dominance in the consumer space.
The big problem for the EU is that consumers actually choose the best product in a free market (the internet of free services), and they overwhelmingly decided to use the US-based options.
All the framing as "nefarious" is propaganda, consumers choose freely the option they value the most. If someone else comes along providing more value than google or facebook everyone would switch in an instant.
I suppose we fundamentally disagree on a few things here, so I won't go into a discussion about the motivation of this law (which I, as an EU citizen, support). I do however want to leave a note on this quote here:
> All the framing as "nefarious" is propaganda, consumers choose freely the option they value the most.
Customers cannot choose freely. I'm a customer and I cannot choose certain products because they do not exist. Companies I never engage with are tracking my activities through tracking pixels and other things, and because I never establish a business relationship with them, I cannot avoid that. This bill now forces a company I might do business with not to do business with companies that do not permit me to get rid of my data.
I think this is a good development because it finally makes certain backroom deals visible.
> Companies I never engage with are tracking my activities through tracking pixels and other things and because I never establish a business relationship with them, I cannot avoid that.
You can easily avoid being tracked by using an adblocker. Other websites only track you because they are business partners of the tracking companies, which provide a lot of value in terms of analytics for the business - free of charge.
> I think this is a good development because it finally makes certain backroom deals visible.
As a German, I'd like to have more transparency into the backroom deals that are done in Berlin and Brussels.
But this won't happen, unfortunately, and they'll try to regulate IT to death to the benefit of local corporations who have failed again and again to provide the consumer with products as valuable as their US counterparts'.
> You can easily avoid being tracked by using an adblocker.
Except not really. Plenty of tracking happens regardless, based on fingerprinting. And even ignoring ads, there are plenty of free services that after a while turn out to be so shoddy that they lose the data I left on their services and provide no way for me to demand deletion.
I get a mail every other month that my email address and password were found in a data leak.
This regulation is a good first step of forcing companies to think about the consequences of having data.
> As a German, I'd like to have more transparency into the backroom deals that are done in Berlin and Brussels
Same. I want a lot of transparency including from my own government. I'm however going to accept any positive development and won't demand them to be in a certain order :P
There are a lot of people who are trying to shape the EU into a better institution. It's not perfect but it's a pretty good start.
> Of course this affects american companies, as the whole thing is primarily designed as a weapon for the EU against US dominance in the consumer space.
It really isn't.
The EU are global leaders in data protection, and it comes from a belief that the right to a private life is a fundamental human right.
Don't collect data you don't need. Don't collect data you don't have explicit consent (or a legitimate need) for. Don't use data collected for one purpose for another purpose.
That'll get you almost all of the way to complying.
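In code terms, most of that boils down to an allowlist at the point of collection. A toy sketch (the field names and purposes are invented for illustration):

    # Keep only fields with a stated purpose; everything else is never stored.
    ALLOWED_FIELDS = {
        "email": "account login and password resets",
        "display_name": "shown next to the user's posts",
    }

    def minimise(signup_payload: dict) -> dict:
        return {k: v for k, v in signup_payload.items() if k in ALLOWED_FIELDS}

    minimise({"email": "a@b.c", "display_name": "Ann", "birthdate": "1980-01-01"})
    # -> {"email": "a@b.c", "display_name": "Ann"}  (birthdate silently dropped)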
If the startup's plan is to monetize thanks to the lack of privacy laws, then this regulation is specifically meant to prevent such companies.
If they are the middle man with no dependence on private information, then yes it will cost them to be compliant but it won't break their business model.
> Nor can they deny access to their services to users who refuse to opt-in to tracking.[1]
Taken literally this means it's illegal to provide a service in exchange for tracking. Can someone elaborate on whether this is true, what else it applies to, and what other business models are made outright illegal?
Please excuse my excitement, but this really has made my day.
The "forced consent" so many apps and services use is scummy at best and I have no qualms about this tactic being denied at regulatory level.
Inb4 someone comes back with an argument about advertising/tracking being the "only" way some things can survive: I won't miss them, and if they want options then they should allow for a reasonably priced usage fee, so that we can escape this "ads/tracking or nothing" business model.
> In order to ensure that consent is freely given, consent should not provide a valid legal ground for the processing of personal data in a specific case where there is a clear imbalance between the data subject and the controller, in particular where the controller is a public authority and it is therefore unlikely that consent was freely given in all the circumstances of that specific situation. Consent is presumed not to be freely given if it does not allow separate consent to be given to different personal data processing operations despite it being appropriate in the individual case, *or if the performance of a contract, including the provision of a service, is dependent on the consent despite such consent not being necessary for such performance*.
(My italics.) The second sentence seems clear: "consent is presumed not to be freely given" if the service could be provided without the consent. Which means consent cannot be traded in exchange for an unrelated service, like e.g. webmail.
I'm not sure what the relation between the two sentences is. Does the second one ("consent is presumed...") apply only to the cases addressed by the first, i.e. "where there is a clear imbalance"? Or are they independent?
In answer to your question, the two items are independent. The first item gets at the idea that it's not possible to provide consent where you really have no choice but to consent (i.e. due to an imbalance of power). One area of particular interest is the field of employer-employee relations. Bundling up consent with a job offer means it's very difficult for the employee to refuse.
The other item gets at general service provision. You shouldn't make consent a condition of providing a service where that consent isn't necessary to provide the service.
So really the feeling, at least consent-wise, is that Google cannot attach consent to gather info for advertising to a service provision like search.
However, they may instead seek to rely on a separate processing ground under Art. 6. The main one would be legitimate interests. There is some debate over whether that ground is applicable here, though.
/edit Plus the new ePrivacy Regulation brings additional considerations - in particular there's not currently any legitimate interests ground for processing
I was asking about the relation to the first sentence, the one starting with "In order to ensure that". That first sentence talks about "a specific case where there is a clear imbalance between the data subject and the controller, in particular where the controller is a public authority". Is the second sentence ("Consent is presumed not to be freely given...") limited to cases with such imbalance?
You can argue against the chosen font type, and it'd still be irrelevant. Most educated adults are able to read sentences longer than a few words. For the rest, trust your representatives.
It's not about people being able to read it. It's about trying to be intellectual. Long sentences are usually prone to interpretation. There is no value in making sentences this long. But that's just my opinion.
The legislation itself refers to this in article 7.4: "When assessing whether consent is freely given, utmost account shall be taken of whether, inter alia, the performance of a contract, including the provision of a service, is conditional on consent to the processing of personal data that is not necessary for the performance of that contract. "
I don't think it is forbidden, but you'd have a hard time explaining why consent is freely given despite the fact that access to a web site is denied if you don't give consent.
Visit turbotax.com to file a tax return and it asks for your bank account before you fill out the tax information. But it's not required, unless you're reporting interest earnings on that account.
On turbotax.com, after you fill out the tax form you have a refund of overpaid taxes. The site asks for your bank account in order to arrange for the refund to be deposited, or instead they can arrange for you to receive a paper check and you don't have to give your bank account.
I misworded these hypothetical scenarios, making them seem like they would actually happen today. I don't recall that it asks you for bank account numbers before you even begin; it probably doesn't. I have used TurboTax for years and have had pretty much the same bank accounts during that time. It does ask for the numbers at reasonable times, and if you don't give them you can still use TurboTax to take care of business.
The author believes that users have little incentive to allow Google to provide personalized Google Search results.
I don't think any technically oriented people in this thread would agree that they have "little incentive" to allow Google Search personalization. When I turn off Google Search personalization, I get inferior search results that are less likely to be what I was searching for.
If you don't want your results personalized, there is an option in the search results to turn personalization off.
The problem I have with this law is that Google will need to default to non-personalized results and then prompt users if they want personalization. Google probably doesn't want to increase UI friction, so they will most likely just disable personalization and not prompt to enable. This will result in less-engaged users and inferior search results for the average EU citizen.
Same here. I do most of my browsing in an incognito mode-like environment (so no persistent cookies or sign in to my Google account), and even when I am signed in I turned off the search history a long time ago.
The personalization is what made me switch to DDG. Google shows me too many results of what I searched for yesterday, or the results of that similar search I just made. The more Google tries to show results for what it thinks I'm searching for instead of what I type into the search box, the worse it gets for me.
DDG might sometimes show me rust-proofing results when I'm searching for programming-related things, but at least it doesn't ignore my refined searches.
I don't even use Google, I use my own searx instance instead. Completely unpersonalized, and completely private. And I feel that it's better this way, cause privacy does matter.
Using DuckDuckGo is fine for, let's say, 98% of my searches (most of them probably IT related). For the rest, I go to Google or Bing or others. Is there a Britannica on flash drives? I should add it to those 2%.
I'm going to read thoroughly through the terms and conditions and get a case going in European Courts when this comes into play, because you know for a FACT Google and Facebook will put in some vague term to let them collect data for "future" improvement of the service. Watch and see.
> “A purpose that is vague or general, such as for instance ‘Improving users’ experience’, ‘marketing purposes’, or ‘future research’ will – without further detail – usually not meet the criteria of being ‘specific’”
How do I do that if I don't have an account? Also, if I do have an account, will that delete all of the crap doubleclick and friends shared with each other while I was logged out?
Do you actually believe it will delete anything? You're wrong. NSA would still get the data, and so would Google for marketing purposes. It's just to calm privacy-aware, but not techsavvy users.
The NSA will collect it on the wire anyway. They have drilled in and tapped underground fibre optic cables on private networks behind HTTPS load balancers before, and simply captured what they want.
We [technology dept at a non-computer business in the UK] got the lecture about this at work. Turns out geeks are fans of this approach!
I've been using "GDPR hazard" as a useful way to kill bad ideas at work. "Sure you can do that! We just need you to confirm that your business unit accepts responsibility for this user-identifying data and ... oh, we can delete it? I'll do that now then."
We have lots of user-identified data, going back years. I can't see it as a bad thing for us to behave properly with regard to it, and to be required to do so.
What problems? So you can't collect all the data on your customers and do with it as you see fit? And when it gets leaked just say "Oops, sorry"? If for some startups taking private data seriously is a "problem", I want to see them burn in fire.
The more I think about it, the more I feel that if you're a startup, thinking about this from the get-go won't hurt you and won't really cost you.
What I mean by that is, it's easier to build your db and backups to comply with these laws before you have anything set in stone, than after you have any meaningful amount of personal data. Like, if you organise your backups and db to happily be able to handle removal of requested data before you accrue too much technical debt/inertia then you're going to be ahead of anyone who has to retrofit, which in many ways actually puts you at an advantage.
Also, I for one won't be mourning the loss of the business model that parasitically lives off exploiting user data.
So, how long until this one also gets neutered when European governments realize they can't even bring their own websites into compliance with the new law? ;)
> So, how long until this one also gets neutered when European governments realize they can't even bring their own websites into compliance with the new law? ;)
I have worked for 2 big tech companies in Europe.
And in both, there is a big effort to make sure that they are compliant with the legislation. I see everyone taking it seriously. Why do you think that it is going to fail?
Even the stupid, really really stupid, cookie warning was implemented everywhere. What makes this different? (Apart from being actually a good law that protects citizens from indiscriminate tracking.)
They'd have a tough time doing so; this is a Regulation (as opposed to a Directive), which means the legislation is written by the EU and applies in all EU territories automatically, without being transposed into national legislation.
That means in order for it to be changed then the European Commission would have to propose a new law, and the European Parliament and Council of the European Union will have to agree to it. The former doesn't really care about whether national governments can get on with their jobs or not.
> "Nor can they deny access to their services to users who refuse to opt-in to tracking.[1]"
> "[1] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L119/1. See Recital 42’s reference to “without detriment”, Recital 43’s discussion of “freely given” consent, and Article 7(2) prohibition of conditionality. See also the UK Information Commissioner’s Office’s draft guidance on consent, 31 March 2017, p. 21, which clearly prohibits so-called “tracking walls”."
What, in this regulation, prevents the company from denying users who opt out access to a service they provide free of charge, or offering them a downgraded experience? And how would a court measure the level of service?
The idea is that a user should not suffer detriment as a result of their decision to withhold consent. If you withhold the service as a result of a failure to provide consent then the user is likely to suffer a detriment.
However the above analysis really ignores free services where you are essentially paying with your data.
It is an ongoing question as to how to deal with these services in relation to GDPR.
To my mind privacy advocates ignore the fact that without giving companies the ability to use data, the services may not be available for free, a potential detriment in itself.
I'm guessing they would force the company to make tracking not on by default but rather opt-in (lol)? But what's confusing is: if you clearly state upfront how the data will be used, do you need to offer an opt-out? What if your service simply doesn't work unless the data is used in a particular way?
Under the GDPR one way to use data for purposes unrelated to the underlying service provision is to look to obtain consent. That is not opt-out consent but clear informed consent through opt-in.
If your service won't work functionally without certain data, then consent is not the right ground of processing to rely on. There is a specific ground relating to processing necessary to provide a service.
If your service isn't financially viable because you can't use data to obtain revenue to support the underlying service provision then consent may not be a viable ground because of the above reasons.
The debate continues however on how to support free service provision outside of the confines of consent.
Interesting but complex. I could imagine there are a lot of issues on how to express your intent with the data.
Some uses of data might be ancillary to the direct goal of the user but still unexpectedly useful.
Interesting from the article: "A purpose that is vague or general, such as for instance ‘Improving users’ experience’, ‘marketing purposes’, or ‘future research’ will – without further detail – usually not meet the criteria of being ‘specific’".[3]
Would one way "around" this be, "Welcome to ABC Service, it costs X€/month to use, or if you allow us to use your data to sell to our advertisers we will waive this fee."?
How might one make sure their account gets classified as one that falls under the scope of the GDPR? Would it be sufficient to set your location to an EU country?
I have been thinking that the ad bubble is a problem for a while now. Ads are basically to encourage consumption, and the US economy has been debt based for decades now.
Throwaway since I don't want to involve my employer.
I actually work for a platform that is squarely in the GDPR crosshairs (digital marketing). There are a lot of things where our lawyers' perspective is different from what most people say here (I didn't talk directly to lawyers, but I presume product managers did).
- You don't have to comply in 2018, you have to show that you started seriously working on a solution, even if you're not fully prepared.
- You don't have to have automated processes for everything (e.g. delete from backups), it's actually perfectly reasonable to say "we'll process your request" and do it manually (ref: startups spending inordinate amounts of effort for GDPR compliance).
- Opt-in is not as much of a "game changer" as suggested here; my understanding is that you can do implicit consent (notify the user about what you do, give them a link to take action; crucially, that link might even be the link to your privacy policy which contains the link to the opt-out interface... if I got this right - and I think that I did - this may not amount to much more than a slightly modified "this site uses cookies" thingy).
- Delete requests may be handled by "de-identification" (don't delete the data, delete the association with you); see the sketch after this list.
- Related to that, while I don't have a definitive answer, I strongly suspect that GDPR only applies to information that can be positively associated with you (e.g. authenticated activity). I'm not obliged to show you anonymous browser activity/information that I've probabilistically associated with you, for the simple reason that I might be wrong and I might disclose sensitive information (think about your girlfriend looking up "what does Amazon know about me" and finding out that "she is interested in an engagement ring" because you anonymously browsed from her computer, thus spoiling your surprise even though you were careful to delete your browser history / browse anonymously. Yes, incognito mode doesn't necessarily help you - we make an effort to identify incognito sessions server-side and de-link them from the probabilistic marketing profiles, because we don't want to negatively surprise customers; but I suspect not all players are that careful).
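On the de-identification point (the sketch promised above): a minimal illustration with invented record fields, where the rows survive but the stable identifier and direct identifiers are gone:

    import secrets

    def deidentify(records, user_id):
        # An opaque token with no stored mapping back to the person.
        opaque = "anon-" + secrets.token_hex(8)
        for rec in records:
            if rec.get("user_id") == user_id:
                rec["user_id"] = opaque
                rec.pop("email", None)  # drop direct identifiers outright
                rec.pop("name", None)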
Overall... despite what many people think, I think big players are actually fairly careful/sensitive about your privacy (well, if we exclude Facebook here :D ). It's the startups that would concern me more... they have very little incentive to guard your data well, because there are so many OTHER reasons why they might fail that "privacy disaster" is very low on their list of concerns.
Are all companies beholden to this, or only those with legal entities in Europe?
For instance, can a Chinese company with ZERO legal presence in the EU completely ignore these requirements? The internet has no real borders, after all.
If you process the personal data of people in the EU then you have to comply:
Article 3
Territorial scope
1. This Regulation applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the Union, regardless of whether the processing takes place in the Union or not.
2. This Regulation applies to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to:
(a) the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or
(b) the monitoring of their behaviour as far as their behaviour takes place within the Union.
3. This Regulation applies to the processing of personal data by a controller not established in the Union, but in a place where Member State law applies by virtue of public international law.
> can a Chinese company with ZERO legal presence in the EU completely ignore these requirements?
Same with an American company with no legal presence in the EU. However, quite a few things require you to establish one. Likewise, if you are engaging in B2B activities, your European customers will ask you for it.
I'd imagine the host website needs to ask for permission to share specific data with a third party named XYZ Startup, which will use that data for language/country segmentation or whatever it is they do; if the user denies that permission, random ads are shown instead.
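As a rough sketch of how that consent gate might work (type and function names are mine, purely illustrative): check the stored consent before sharing anything with the third party, and fall back to untargeted ads otherwise.

    // Illustrative consent gate: personalized ads only if the user has
    // agreed to share data with the named third party.
    interface Consent {
      thirdPartySharing: boolean;
    }

    function selectAd(consent: Consent, segment: string | null): string {
      if (consent.thirdPartySharing && segment !== null) {
        // Consent given: the segment may be shared for targeting.
        return fetchTargetedAd(segment);
      }
      // Consent denied: show a random ad and share nothing.
      return fetchRandomAd();
    }

    // Stubs standing in for whatever the ad platform actually provides.
    function fetchTargetedAd(segment: string): string {
      return `targeted-ad:${segment}`;
    }
    function fetchRandomAd(): string {
      return "random-ad";
    }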
My first instinct was to be pretty happy about these laws, which surprised me quite a bit. I generally identify on the libertarian/pro-capitalist side of things and "as a general rule" subscribe to the belief that government regulations are often more problematic than problem-solving[0]. So I had to take a moment to analyse why I felt this way.
Here's the problem as I see it: I know all of the things that are collected, how they're collected, what shady practices are used[1], and I'm completely aware that there is no anonymity left on the internet. The old "when the product is free, you're the product" isn't lost on me. In reading this, though, I still found myself a little outraged[2]. I look at it this way: if yesterday we had a web with plain old "dumb advertising" techniques, limited in sophistication in the manner of television advertising in the 90s, and today we ended up with this, there would be rioting (in the USA, anyway[3]). This didn't necessarily happen slowly, but it did happen gradually and quietly. I remember when Facebook announced it was adding the ability to track you on other sites you visited while logged out -- that was announced and met with criticism (briefly, though I quit the platform about a month later in a quiet, personal revolt).
Here's the thing - if you ask an average non-technical individual if they understand that they're being tracked on the internet, they'll shrug and say "yes". If you dig a little deeper, you'll discover that they haven't the faintest idea how deeply they're being tracked and that they don't even have an analogy in their own lives to equate that tracking to. I couldn't come up with anything to describe the extent of tracking short of extremely lengthy explanations of what's being done and used[4].
And then there's me - I understand I'm being tracked and have basically chosen the head-in-the-sand approach. I use adblock and a few extensions that supposedly "limit tracking" (doubtful), but I know they're worthless. Here's the thing, though: what choice do we have? And that's how I concluded I could land in favor of some form of regulation on this behavior[5]. It is becoming increasingly impossible to avoid interacting with companies like Google and Facebook[6]. I look at it this way -- a company that becomes a monopoly in such an important industry can exert as much, if not more, control over the citizenry than their own government[7], but without the limitations imposed by democracy.
What should be done? I'm not sure. Self-regulation isn't working. I have zero faith in government crafting any kind of technology-related law that isn't some combination of horribly ineffective, worse than what we have today, liable to utterly break something really important, or used as a means to insert something horrible (watch them try to pop in a line-item around key escrow). I'm kind of surprised to find myself thinking that the approach that looks best, out of the options, is probably forced competition through breaking up the companies involved -- and I hate that idea in principle and in practice; it's worked just about as well in the past.
[0] I don't want this to devolve into a flame-war over whether regulation is "good or not" (though I fear I may have just stoked that flame); I'm simply providing background for context.
[1] I half-admire the creative uses of WebRTC with STUN on what are otherwise regarded as highly reputable major news sites (see the sketch after these footnotes). It's difficult for me not to see that practice as poking a hole in my firewall, and I feel no less outrage when I see it happening than I do when a piece of malware does the same thing.
[2] Part of me had forgotten that when GMail was "scanning e-mails for advertising purposes", it was scanning e-mails coming inbound from non-GMail users who couldn't possibly have consented to that. I'm sure there's a really good counter-argument, but I'd have a hard time not feeling a little violated by that practice if I weren't already a GMail user.
[3] Probably elsewhere too, but my experience is that some European countries' citizens (particularly the UK, where I have the most experience outside of the US) are more tolerant of this sort of thing, whereas when I was a child, you'd have seen people gathering in militias the moment the government tried to propose something like Real ID.
[4] I can only speak anecdotally, since I had this conversation with family members who are non-technical and, after about two hours, had them quite disgusted -- asking how that is even legal... and these are some of the most government-skeptical conservative people you'd ever meet.
[5] And I have zero faith in the US government being able to craft a law that works. At minimum, the "they must still offer the service if the user opts out" requirement will be removed entirely, turning the "agree to be tracked" button into the moral equivalent of the "Cookie Warning" -- something you click because you have to. And philosophically, if we weren't talking about monopolies or near-monopolies here, I'd agree with that approach.
[6] Yes, DuckDuckGo is my default search engine, everywhere. And I've now trained myself to use the shortcut to get to Google for the 60-70% of searches where DDG returns unworkable results. I think it's my search patterns, which tend to be very narrow in results, causing Bing/DDG to "broaden" and ignore terms (or, when used with parameters, simply yield nothing). My parents (both retired) use DDG and rarely anything else since I switched all of their browsers around (they didn't even realize I had changed it -- they don't think of Google as a company, they think of search as something "the internet just has..."). They are perfectly happy with it.
[7] Or can work in concert with it. Requirements to hand over Facebook credentials at the border are becoming common. I'm waiting for the day when I say "yeah, I don't use that" and end up back in a little room with an angry-looking man asking me a bunch of (the same, slightly rephrased) questions and responding to them with the assumption that I'm lying (personal experience on that one; not fun). I mean, after all, I'm a programmer/live on the internet/etc. -- surely I must use Facebook and I'm trying to hide something! /s
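Since footnote [1] mentions the WebRTC/STUN trick, here's roughly what it looks like in browser TypeScript -- the well-known generic snippet, not anything site-specific. Creating a throwaway RTCPeerConnection pointed at a STUN server makes the browser gather ICE candidates, which historically exposed local network addresses to the page (modern browsers now mask these behind mDNS hostnames):

    // Classic WebRTC address-harvesting trick: no call is ever placed;
    // merely gathering ICE candidates reveals candidate IP addresses.
    const pc = new RTCPeerConnection({
      iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
    });
    pc.createDataChannel(""); // needed to trigger candidate gathering
    pc.onicecandidate = (event) => {
      if (event.candidate) {
        // Candidate strings contain host and server-reflexive addresses.
        console.log(event.candidate.candidate);
      }
    };
    pc.createOffer().then((offer) => pc.setLocalDescription(offer));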
Just want to say that any emails I receive are my property. And I can grant consent to Google allowing them to mine these emails. Once you send an email, you are ceding control to the recipient.
While it's nice to think that's actually the case, the legal situation is almost certainly more nuanced. The thing that comes to mind for me is implicit copyright protection: copyright vests automatically in the author of a text, which for an e-mail is the sender.
I'm not sure if that would apply to a message I sent to you, but it may, and if that's the case, it's "my property", not yours. I'd be interested to know if any case law exists on this. Considering how often the DMCA is abused, I wouldn't be surprised if someone tried a DMCA takedown claiming copyright ownership of an embarrassing e-mail they sent to someone that was subsequently posted online.
The site will not render properly with Google Analytics blocked: text on top of text, grey on black. Is this some fake site by a Google front, intended to give this idea a bad reputation?
I use uMatrix on Chrome and have every non-pagefair.com script blocked, including google analytics. Not seeing any site rendering issues here. Perhaps it's your browser?
>unless the service they're looking for was illegal to start with.
Luckily, we will always know whether everything that was/is legal now will be illegal in the future, and people/companies throughout history will always submit to the costs of regulatory compliance for every government in the world, no matter how burdensome those costs may be in specific instances.