> True enough, but assuming they decided to pull the stunt before the new laws came into effect, which isn't until the middle of next year, the real question is whether public opinion could be shifted in so little time.
Interesting idea. I hadn't thought about this, most probably because I would never do it if I were Facebook: there is a very real risk of it backfiring. Many people here (I live in Germany), especially among the politically active, have a kind of love-hate relationship with Facebook, and a blatant attempt to blackmail an entire continent's population into political action in favor of an absurdly rich multinational corporation could very well kill off whatever positive attitude towards Facebook remains.
As for competition having to comply with the privacy law: of course it would. But I also assume that compliance is not impossible at all, just inconvenient and costly, especially if you have a huge legacy system built under the assumption that you can do practically anything with your users' data. If you design your system to comply with data protection law in the first place, this gets considerably easier. Money would not be a problem at all: there are more than enough investors in Europe who would love to throw money at an attempt to create a second multi-billion-dollar money-printing machine in a market that is currently assumed to have a very high barrier to entry (though exactly that would change if Facebook gave up on Europe).
I also assume that it will not be impossible at all for Facebook to comply with these regulations, just pretty inconvenient. Under this assumption, any refusal to comply must automatically be motivated by a desire to maximize profits and minimize political influence on the platform, not by a sheer struggle for survival. Facebook's PR department might try to spin this into a different story, however...
> I also assume that it will not be impossible at all for Facebook to comply with these regulations, just pretty inconvenient.
This is a common assumption, and it might prove to be correct, but I'm unwilling to accept it as axiomatic.
Fundamentally, just looking at a data subject's right to withdraw consent, Facebook would need to track every piece of data that could conceivably be tied back to an identifiable person throughout its entire organisation. That's not just their status updates or that time a friend tagged them in a photo. It's every photo in Facebook's entire database that ever included a recognisable image of them, tagged or not. It's every line in a log file that was saved by an engineer investigating a server glitch that relates to any activity that user took. It's everyone who uploads their contacts to find friends and has one of those data subjects in their contact list.
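The bookkeeping burden described above can be made concrete with a toy sketch (hypothetical names and stores, nothing like Facebook's actual architecture): honouring a withdrawal of consent presupposes an index that can locate every record relating to a person, whichever store it lives in.

```python
from collections import defaultdict

# Toy model of an organisation-wide personal-data index. The store and
# record names below are invented for illustration only.
class PersonalDataIndex:
    def __init__(self):
        # subject_id -> set of (store_name, record_id) pairs
        self._refs = defaultdict(set)

    def register(self, subject_id, store, record_id):
        """Record that `record_id` in `store` relates to `subject_id`."""
        self._refs[subject_id].add((store, record_id))

    def records_for(self, subject_id):
        """Everything that must be found when consent is withdrawn."""
        return sorted(self._refs[subject_id])

index = PersonalDataIndex()
# A status update clearly belongs to its author...
index.register("alice", "posts", "post-1")
# ...but a friend's photo with Alice in it relates to her too,
index.register("alice", "photos", "photo-42")
# ...as does a server log line from a debugging session,
index.register("alice", "logs", "web-7/line-9001")
# ...and another user's uploaded contact list containing her.
index.register("alice", "contact_uploads", "bob-contacts")

print(index.records_for("alice"))
```

The sketch is trivial; the point is that every write path in the organisation, including ad-hoc ones like debug logs, would have to feed such an index for a withdrawal request to be honoured completely.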
Now, I'm not saying I think Facebook should necessarily be able to do all of the above. In particular, I have often questioned their hoarding of data from things like contact details and photos that will inevitably include other people who may not have chosen to use Facebook or give their consent.
But I am questioning whether it is practically viable, even for an organisation with Facebook's scale and resources, to follow the letter of this law and still operate at all while continuing to provide similar services, if a few people decide to make a point and explicitly deny consent to hold any data about them, even if such data was supplied by other people. There are practical, ethical and legal issues here about third parties and automated systems that we have barely begun to explore, and we're talking about them at a scale where businesses like Facebook and Google have already had to invent new techniques and strategies for organising data just to cope with what they already do.
None of this has even touched yet on whether Facebook would still have a viable commercial model if users have a right to opt out of processing their data for purposes such as advertising but Facebook isn't allowed to deny them service in return, which is another interpretation I've seen talked about a lot (on the basis that an opt-out that stops you using something independent as well isn't a true opt-out and so wouldn't count). So far, I haven't studied the GDPR and informed reviews of it enough to reach any firm conclusions or opinions on that side of things, but again there are surely issues about the obligations of an organisation that offers a useful service but relies on advertising to fund it that go far deeper than just Facebook and the GDPR that haven't really been explored up to this point.
As surprising as it may seem given my comments in this discussion, I'm actually a pretty firm believer in stronger privacy rights and a confirmed sceptic when it comes to the big data hoarders like Facebook and Google. But I'm also someone who runs businesses and has first-hand experience of what happens when the EU's non-technical legislators meddle in technical issues they don't fully understand, often missing even the blindingly obvious consequences, never mind the more subtle and/or long-term implications. So I don't think we should dive into changes like this without considerable thought, and contrary to what various officials from the EU and the national data protection authorities like to say, I don't believe for a moment that this sort of change is a small, incremental development of the existing privacy frameworks we already operate under.
> It's every photo in Facebook's entire database that ever included a recognisable image of them, tagged or not.
From my understanding, that is wrong. In fact, Facebook would not be permitted to establish the link. Merely appearing in a picture that is not your own does not qualify as personal data.
Any information that relates to an identified or identifiable natural person is personal data for these purposes. This remains essentially the same, except for a slightly broader specification of what constitutes identifiability, under the GDPR as under the current EU framework. In particular, under the GDPR, identifiers explicitly include factors specific to a person's physical or genetic identity.
In short, if you're in a photo and it's recognisably you, it's personal data.
I don't understand what you mean by "fundamentally scoped" in this context, so perhaps we're talking at cross-purposes here.
However, the wording I used before was actually taken directly from the GDPR itself[1].
Moreover, the interpretation that an otherwise unidentified image of someone may become personal data even if collected incidentally, such as in a photo taken for other purposes or on CCTV, is supported by, among others, the ICO (the UK's data protection regulator). There are a few specific examples in some of their published guidance [2,3,4].
Unless your argument is that Facebook isn't processing those photos in any way that would cause that interpretation to apply? In that case, I can maybe see how your argument works, but given what we know of Facebook processing uploaded photos just from the features they offer publicly, it's hard to imagine how their processing could possibly not be sufficient for all photos they hold to be treated as personal data.
> I don't understand what you mean by "fundamentally scoped" in this context, so perhaps we're talking at cross-purposes here.
Suppose you have a social network with a user A and a user B, and each of them privately uploads a picture showing both users. If user A deletes his or her account, the picture in user B's account is not to be deleted, even though it contains an image of user A.
But if user A withdraws their consent for the social network to process personal data about them, user B's photo is still personal data about user A if the social network knows or could know that user A is in it.
As I mentioned before, modern technologies raise complex issues about third parties that we have barely begun to explore. SOP at social networks is very much to get people to provide information about not only themselves but also other people they know, and that's a minefield if those other people aren't happy about it. Obviously you can't just say social networks can do what they want if someone else provided the personal data, because that undermines the entire principle of data protection and privacy. But equally, if you require explicit consent from everyone for everything, you create a huge burden that might make the whole idea unworkable or at least remove a lot of the value these services offer to their users when maybe a lot of people wouldn't have a problem with, say, a friend tagging them in a photo anyway.
As things stand, taking the GDPR at face value, I don't see how it would be legal for a social network to retain any photo in which someone is identifiable if that person doesn't consent, unless that social network also took rather dramatic steps like avoiding any sort of automated processing and analysis of photos that might identify people in them, as well as removing features like letting a user tag someone who isn't a member of the social network.
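To make the user A / user B scenario concrete, here is a minimal sketch (a hypothetical data model, not the GDPR's actual legal test) of what scrubbing the explicit links to user A from user B's photo might look like. The harder problem raised above is that the pixels themselves may remain identifiable personal data, which no amount of metadata scrubbing addresses.

```python
# Invented example data: B's photo carries explicit links to A via a tag
# and an automated face-recognition match.
photos = {
    "photo-42": {
        "owner": "user_b",
        "tags": ["user_a", "user_b"],
        "face_matches": ["user_a"],  # e.g. automated recognition results
    }
}

def withdraw_consent(subject, photos):
    """Strip all identifying links to `subject` without deleting other
    users' content (toy model only)."""
    for photo in photos.values():
        if photo["owner"] == subject:
            continue  # the subject's own uploads are handled separately
        photo["tags"] = [t for t in photo["tags"] if t != subject]
        photo["face_matches"] = [m for m in photo["face_matches"] if m != subject]

withdraw_consent("user_a", photos)
print(photos["photo-42"]["tags"])          # ["user_b"]
print(photos["photo-42"]["face_matches"])  # []
```

After the scrub, B's photo survives with no explicit reference to A, yet A is still recognisably in the image, which is exactly the tension between the two positions in this exchange.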
> But if user A withdraws their consent for the social network to process personal data about them, user B's photo is still personal data about user A if the social network knows or could know that user A is in it.
Except that case is not covered by this. In fact, you are not even supposed to retain data that could identify a user, after their account is gone, in order to match them against other data.
I don't know the law by heart right now, but even a year ago I was discussing this very topic, and what to do in such cases, with people consulting on it.
This law was not drafted in a vacuum where nobody looked at real world situations.
> This law was not drafted in a vacuum where nobody looked at real world situations.
Sadly, given that we're talking about an EU law in a technical field, I suspect that drafting in a vacuum is actually quite close to what did happen. That would be consistent with other recent EU rules affecting creative and technical businesses. In some cases, even senior EU and national government figures have admitted that those involved hadn't seen major unintended consequences coming at all, at least not until it was too late in the process to avoid them.
Essentially, the EU often exhibits good intentions and its laws may pursue laudable overall goals, but it frequently produces poor implementations that haven't been thought through in enough detail before legislating. So far the GDPR is shaping up to be another textbook example, with perhaps a side order of political football so the EU can beat up big US tech businesses because the EU's business environment hasn't produced equivalent services of its own.
This doesn't seem healthy for either our tech industry or our society as a whole to me. I'm actually a rather strong advocate of privacy online, but rules intended to protect it do need to be reasonably clear and practical or they're not going to be worth very much.
The track record of EU legislation is generally rather good, and based on the feedback I have seen from American companies in particular on the GDPR, it's already succeeding at what it's there to do: raising awareness that data is not an asset but a liability.
We will see soon enough how this plays out. From where I'm standing, I very much welcome this development, because it's the first time I've seen an actual attempt by companies to do something in the interest of the customer when it comes to data.