
It's frustrating to see how tech is consistently unable to implement systematic yet fair solutions that are possible in law. Not that people aren't genuinely trying - but the nature of automation and the design of platforms just seem at odds with working privacy.

German law, for example, generally protects people's privacy (e.g. you are not allowed to take a photo of people without permission, even in public). But it implements fine-grained exceptions for people of national importance - either permanently (e.g. politicians who remain in the public eye) or temporarily (e.g. you may take photographs of such people for a time, but no longer once they have ceased to be famous).




> you are not allowed to take a photo of people without permission, even in public

This is clearly not true though. People end up in other peoples’ photos all of the time without permission. So it sounds like a bullshit law that can be used to arbitrarily string people up rather than anything actually enforceable or reasonable.


I'm pretty sure the law is about publishing photos, not taking photos.

And in general the focus of the photo matters. You don't need permission from random folks who are not recognizable in the background.

Intent also matters. If I post a picture of my kid eating ice cream and there's some random dude in the background, no court will prosecute that.

If I add a caption identifying the person in the background things might look different.


what good is taking a photo if it can't be published?


And this is why I no longer ever want to be in photos. Not everything needs to be published to the world.


The privacy implications, AND the fact that everyone feels everything must be uploaded to social media for narcissism points, make this one of my top worries.

My problem is that attempts to legally curb this have gross implications for people's right to stay informed about goings-on in the world. We have a right to show corrupt cops as much as we do criminals and rioters. We have a right to showcase protests, whether they go well or turn destructive. We have a right to film situations like Rittenhouse's and use that information to convict or clear him.

And any corporation will exploit this citizen desire for 'privacy' and use it as a wedge to strip us of our power as citizens, never mind allowing some information that is in line with establishment values while disallowing information that runs contrary to its power.

You can't solve all these problems at once.

Censorship, both from the state and from monopoly actors in the private sector, is incredibly worrisome.

The worst-case scenario is that our privacy is utterly gone AND censorship, citing privacy concerns, never really addresses privacy in a practical way, but does help corporations and states manipulate public discourse and undermine our very democracy.

As far as I'm concerned - even as a privacy advocate - our privacy is gone. We lost this battle. There are things we can do to maximize our privacy and make it better, but the toothpaste is never going back into the tube.

But this is a moment where censorship is about to be normalized in an extreme way and this is a fight that shouldn't be lost.


I think you are wrong. I think there is a lot we can do with regard to privacy. We should not give up privacy protections because they could hypothetically be abused by people in power. Especially when the privacy protections have explicit exceptions for the cases you worry about.


there's no "hypothetical" to it.

if someone in power can bend the law to protect their power, they will.


People have been taking photos for over a century, and only a small percentage of them are ever published. Most are taken for what they've always been primarily taken for - memories.


Most of the time, the prohibition is not on taking the photo but on distributing it, which is much easier to enforce. At least that's how it is in Spain, where we have a similar law.


The law in question only concerns itself with publishing. It has accounted for the case you mention: it is explicitly allowed to publish photos that include people who have wandered into frame by chance, when the main focus is some locality and not the person in question.


As an American, the EU seems like it's rife with bullshit laws that protect the gentry and aristocracy.


what's worse is that it always seems, online, that the EU citizenry think it's for their own good. Almost every time.

You see this out of Aussies, often, too.


You cannot take a picture of a person where the person is the subject of the photo. This is not the case when you take a picture of a statue and there's people in the background, for example.


Cropping the photo changes the subject. What a stupid criterion.


You can, as long as you're not using it for publication. Usage on your own blog or Facebook account is allowed, but the line is drawn at billboards, TV, or similarly large channels. For those you must have an agreement.


This is not true and hasn't been for a while. It used to basically be a grey area but courts have repeatedly stated that just taking the picture was not allowed even if there wasn't ever any intention of publishing it anywhere.

You would have to justify it anyway, and personal rights in Germany always trump the rights of others to create art, for example. You'll find many cases about surveillance cameras as well, and even there it's often decided in favor of the person not wanting to be on camera.

https://dejure.org/dienste/vernetzung/rechtsprechung?Text=VI...

http://lrbw.juris.de/cgi-bin/laender_rechtsprechung/document...


I think Facebook isn't covered because the uploader needs to give Facebook all rights over the image, which they can't do.


[flagged]


That’s not how it works in the UK.


From [1]:

> Not only is the writer guilty until proven innocent under English libel law

From [2]:

> English defamation law puts the burden of proof on the defendant, and does not require the plaintiff to prove falsehood

[1] https://www.theguardian.com/science/2009/nov/10/english-libe...

[2] https://en.m.wikipedia.org/wiki/English_defamation_law


The burden of proof being on the defendant is very different from a “presumption of guilt”.


Semantically, it may be so. In practice, I fail to see how there would be a difference. If you are not presumed innocent and the burden of proof lies with you, is that not the same as a presumption of guilt?


What do you think about the French law that didn't pass, which would have made it illegal to upload photos of the police? Arguably, the actions of police officers are more of a public concern than those of national politicians, but they also fall into the category of non-famous people.


> you are not allowed to take a photo of people without permission, even in public

I'm curious how this is expected to work in practice. Is there a clear definition of "a photo of people"? Taking a close-up portrait of a random stranger in public would be one extreme (and I guess most people would agree it should require permission). Presumably a photo of my kids and their friends having fun at the park is still clearly "a photo of people", and therefore also requires permission.

But if I take a photo of a street scene in Berlin, do I have to get permission from every passer-by who happens to appear? How about a landscape photo where I only realise later, on reviewing the picture, that there were a couple of hikers on a distant hill? To my mind, that's not "a photo of people", yet there are people in the photo.

Somewhere between the extremes, it seems to me there's an awfully wide grey area.


Edited to preface: I'm not a lawyer, I'm mostly paraphrasing and translating the source below.

There are three cases where you can publish[1] a photo without consent:

- Person of special public interest (literally: person of contemporary history; this is an idiomatic term and sounds less strange in German), which means you can take a picture of a politician giving a speech, an artist doing a public performance, or a CEO addressing employees. This is an important exception for professionals, but there are various rules one needs to be aware of; for instance, you probably cannot take a photo of the same CEO buying new shoes or eating dinner at a restaurant.

- The photograph does not primarily depict the person. Your landscape photo would be fine, for example, and your street scene may or may not be. Apparently the test criterion is: is the picture materially changed if the person is removed? So a photo of a beach landscape is permissible, but not so much if there are people bathing in the foreground.

- The photo is of a public gathering: a public concert, a political rally, though this only applies if it's a photo of the gathering as a whole, and not specifically an individual in it.

So, yes, in the end it is a bit of a case of "I can't define it, but I know it when I see it". In practice, you just err on the side of caution, and it works out fine. E.g. I don't really worry if, when I take a photo of a friend outside, there are a few people in the background (due to rule #2).

Source (German): https://www.medienrecht-urheberrecht.de/fotorecht-bildrecht/...

[1] I'm not sure how the law handles the case of merely taking a photo and never publishing it, nor what exactly constitutes publishing, e.g. presumably showing holiday snaps at a family gathering does not constitute publishing


publishing usually means "making public", so showing photos in a private context is obviously not public.


That obviously makes sense to me, but IANAL, so I'll refrain from making any definitive judgement. A huge percentage of pictures are immediately shared with a third party (by being uploaded to the cloud).

Beyond that, while the rules outlined above deal with publishing, there are also rules for merely taking pictures. The exact rules for that aren't 100% clear to me[1], but if it's permissible to publish them, it must also be permissible to take the photo in the first place.

[1] https://de.wikipedia.org/wiki/Recht_am_eigenen_Bild_(Deutsch... it seems complicated


Yeah, who knows. It's arbitrary and probably depends mostly on how long it's been since the judge has had a meal* when they are making their decision. For some bizarre reason things like this are thought of as very enlightened when done in Europe, but would be seen as authoritarian and dystopian if done anywhere else.

* https://www.theguardian.com/law/2011/apr/11/judges-lenient-b...


Just because there might be a grey area doesn't mean it's arbitrary.

99% of cases are going to be pretty clear.


> I'm curious how this is expected to work in practice.

Most of the time, the prohibition is on distribution. And it's only applicable when the people in the picture can be reasonably identified. A street scene in Berlin where you barely see people's faces or two hikers on a distant hill wouldn't be problematic.


> it's only applicable when the people in the picture can be reasonably identified.

I think the law needs to be updated, since face recognition software is now available.


As many people have pointed out, the primary protection in this law is on publishing.

However, there are also penal codes preventing the mere taking of pictures where intimate privacy is affected, i.e. in intimate situations, in your own home, or when you are helpless (i.e. when injured in public).


Exactly. If I take a picture of you standing naked in front of your first-floor window, that's totally acceptable; you should have expected to be noticed. However, if I use a telephoto lens to photograph you stepping naked out of the shower on the twelfth floor, that's an invasion of your privacy - because your expectations were different.


Unable or unwilling? If their network effects endow them with an impenetrable moat, why would they voluntarily spend lots of money to address a problem that harms only a minority of customers and isn't substantial enough to drive people off the platform?


it's impossible to make a fair censorship rule.

>German law, for example, generally protects people's privacy (i.e. you are not allowed to take a photo of people without permission, even in public).

So everything from the videos showing Kyle Rittenhouse doing nothing illegal, to Chauvin videos providing evidence that he killed George Floyd wouldn't be allowed in Germany. Got it.


Yeah, Germany has the best laws regarding speech: https://www.courthousenews.com/german-nationalist-wins-injun...


On the other hand, there are numerous cases where Facebook has been compelled by German courts to restore posts they have deleted under their content policy to protect the user's free speech rights. Something like that is unthinkable in the US.


Calling her a Nazi or fascist would have been fine, as that's understood as an expression of opinion; calling her a swine is clearly an insult, and insulting someone is an offence under German law (not sure how this translates to the US, but it's not something the police will arrest you for; rather, something the insulted party can take you to court over).

There's a common misconception in Germany that there is a law about "Beamtenbeleidigung" (insult of a public official), but the truth is that public officials have no special protection in that regard per se; it's just usually easier for them to sue people (esp. when you insult a police officer, as they're literally the police). There are some caveats when insulting government officials, especially foreign government officials, but insulting a Nazi politician on Facebook is not any different from insulting a celebrity on Facebook.

The problem with social media is that it can be difficult to find out who to sue and compelling a foreign company to release the likely incomplete information they have on a user in an attempt to identify them isn't great. I'm not saying the law in question (NetzDG, requiring social media companies to block such content in Germany) is a good solution to this problem but it's certainly not the worst.

If anything, the problem with NetzDG is that it lets users who manage to avoid unambiguously revealing their identity engage in Holocaust denial or Volksverhetzung[0] and still have their posts visible via a proxy if blocked (allowing Nazi groups to organize and operate hidden in plain sight). And when content is deleted for those crimes, it just gets swept under the rug, making it harder to report to the actual authorities rather than to the social media company. Social media companies like Twitter have also made it nearly impossible to report ToS violations in Germany, as the report button immediately funnels you into NetzDG technicalities users aren't meant to understand, like which specific law you believe the offending post violates.

[0]: https://en.wikipedia.org/wiki/Volksverhetzung


[flagged]


I say this as an Austrian... the fact that Nazi ideology and symbols are forbidden in our country is a godsend. We have a small but persistent problem with militant far-right Nazi sympathisers, and the "Verbotsgesetz" is an invaluable tool in dealing with them.

The law is extremely clear, nobody breaks it accidentally, and it makes sure that dangerous extremists are taken seriously by the police.

A couple of times a year the police discover illegal weapon and ammo stashes when investigating neo-Nazis. These guys are dangerous, and pretending it's just about "free speech" is stupid.


So impeding the speech of two people is a better outcome than impeding the speech of none? I don't get it.

>the situation in America where everyone is a fucking edgelord in their spare time

So don't read them, as easy as that.


> So don't read them, as easy as that.

Hate speech and radicalizing speech aren't meant for those who aren't reading or listening to them, but rather to motivate those who do listen to act out the things the speakers are saying.

The speakers hide behind "I didn't do anything, I just said something" and count on those who take their words to heart and convert them into action. This is the danger of hate speech. It's not enough for good people to just ignore it. It takes more effort to prevent the talking from becoming doing. If the term "hate speech" doesn't sit well, I prefer to use the term "rhetorical violence". Basically, rhetorical violence is speech using the imagery and terminology of violence, intended to inspire violent thoughts in others.

The video posted below by another commenter shows how radicalizing speech is used to motivate others to commit acts that the speaker themselves would not commit or would claim not to support. In essence, the speakers are claiming the rights to rhetorical violence while being disconnected from actual violence that the speech might incite, inspire, or support.


We already have laws against violence.


We also have laws against threatening violence.


Reality doesn't fit so neatly into these categories that you're trying to construct, where speech is perfectly harmless unless it's direct incitement to violence and then suddenly it's harmful. That might be how the legal system works but it's not how reality works.

Motivating radicals and spewing racism might not be direct incitement to violence, but history shows that it can have significant negative consequences. The causal pathway is usually non-linear and hard to attribute. But, behind many genocides is racial hate speech that's been allowed to fester for years. Behind many lone wolf terrorist attacks is propaganda, even if nobody directly incited it.

I'm not arguing for or against any specific hate speech law here. Just trying to point out there's a grey area that your categorical thinking isn't good at addressing.


Wanna say, vadfa's two answers:

>We already have laws against violence.

>So don't read them, as easy as that.

are both "demand-side" solutions, which conservatives are well aware don't work when there's people dealing poison in the street.

Still, rexreed, I'll always fight for free speech, even when the people exercising it are abhorrent. And even knowing they'll take advantage of that to the fullest effect they can. Because if we really restrict it, the worst possible people will take control of who gets to say what. And it won't be the people we'd like to be making that decision. Every encroachment on free speech is like feeding steroids to the nazis.


Free speech and fighting rhetorical violence are not mutually exclusive. There are ways to reduce the visibility and spread of rhetorical violence without imposing on the rights of everyone to speak.

Let's use another mental construct if this is helpful. Imagine at your place of work, one person every day comes into the office, points at you and says "I hate this guy. Someone should beat the crap out of them". This person then posts messages on the company chat about how much of a terrible person you are, spreading all sorts of half-lies and untruths. This person goes as far as to put a message on the bulletin board in the cafeteria saying that you are a rotten person and someone should slash your tires or make your life a living hell.

One day you come to work. Your tires are slashed. Someone has trashed your desk. When you leave work at night someone assaults you, punches you and throws you on the ground. You can't see who it is.

You can point your finger and say "this person has been verbally harassing me". Would it be right for the company to say "any speech is allowed, therefore, this person has the right to continue that speech. Any actions are the fault of the perpetrator and not the speaker."

How long would you be willing to put up with that and defend that right even though it is causing you direct harm? There are indeed laws against violent and harassing speech, even though the words themselves aren't the harm, because of the direct harm that can be linked to them. I agree that the line between annoying, controversial speech and overtly violent speech is not well defined, but the lack of a well-defined boundary does not mean that there is no boundary at all. Clearly some things are beyond the pale.

Now the company can't tell the verbal harasser that they are not allowed to think or express their abhorrent views. That harasser, as abhorrent as their views are, is using protected free speech. But the company can tell the harasser they are not allowed to communicate those views on company grounds, in company chat rooms, in the company cafeteria, or in any capacity as a company employee. Basically, the company can impose limits on the spread of those views. And in the vast majority of cases, it's imposing limits on the spread of those views that acts to dampen actual violence.


Definitely. I think the main problem modern society (post-internet) is having, is that people have conflated the right to speak with the right (or the recent privilege they've been granted) to be heard, and assumed that if you have one you should automatically have the other. It's never been so.

[edit] since you updated... so, it's often been said that "speech" for nazis is a boot to the face, and that's all the words they need. And the truth is that if violence takes over it eradicates speech. A societal commitment to free speech is what allows the victims of threats and harassment and violence to speak out where they would otherwise be afraid to - especially if the intimidating environment is not just one company, but society as a whole. And this is why it's very dangerous, and can possibly breed more violence, to ever say that speech==violence [edit2: people reading "revolutionary books" in prison can be equated with violence by the prison guards]. Yes, incitement is beyond the pale, but in the example you just delineated it's very possible to separate incitement from opinion. Remove "someone should..." &c.

Now imagine you're born and everyone you're related to is accused of horrible crimes against humanity, controlling the media, stealing from honest people and drinking babies' blood, and your grandparents' families were murdered by people who said the same thing, and you hear people saying stuff like that every day which is clearly intended to incite people to, you know, kill you. And then imagine coming to the point where you know that preserving their right to say whatever they want about you, however disgusting and evil, is the only chance you have to preserve your own rights as an individual. If you can put yourself there, mazel tov, you're Jewish.

And it's natural to wonder whether all that free speech is a terrible idea, so, like all important things it's open to debate. But it's why my grandparents came to America, and they wouldn't like the idea of a law against nazi speech any more than I do.

Twitter, of course, is a whole other story. It's a private enterprise and should be held accountable for every word on their platform. They should banhammer anyone they feel like.


100% this is the case. People are conflating the rights of those who have rhetorically violent speech to express those views with the supposed "right" of those violent speakers to use a given platform to spread that rhetorical violence. From the perspective of the social media outlets: I can't stop you from expressing your abhorrent views, if it's protected speech, but you do not have the right to use my platform or my loudspeaker or my venue or my publication or my social network to spread that rhetorical violence. The rhetoric might or might not be protected, but the platforms have no obligation to spread that rhetoric.

Long story short, your speech might or might not be a protected right, but your use of a given platform to spread that speech, and any obligations to spread that speech or provide visibility or virality to that speech is not a protected right. One cannot be arrested or detained or sued for simply expressing their opinions, and I agree that even that abhorrent speech is protected. However, a platform can opt to not publish hateful speech, pull the plug on the loudspeakers, prevent the use of their venues, and refuse to promote abhorrent speech. The most effective means for combating hate speech and rhetorical violence is not to suppress the speech, but rather to prevent its spread. In this way the rights are protected without increasing the harm.

You're right that not too long ago, those with rhetorically violent speech would have little access to mass media. They would have to literally stand on street corners with megaphones to shout their messages or print their own publications and then find ways to distribute those publications. Nowadays, everyone has instant and immediate access to mass media whose viewership, ease of spread, and total audience size rivals even the very largest of mass media publications 100 years ago. In the current age where a single viral Tiktok or Tweet can get millions of impressions, the power (and responsibility) of media companies is far greater than ever.


> They would have to literally stand on street corners with megaphones to shout their messages

This is the primary problem. "Speaker's corner" has always been the place for insane people to shout. Social media has elevated it to the mainstream. (And made a handsome profit).

Insanity is contagious. What I mean by that is: Mental instability, FUD, conspiracy theories, propaganda, and simple sociopathic narcissism are viruses. No one who has witnessed 2016-present could doubt that. But anyone who knows about 1932-1945 already understood it.

Individuals with violent and malevolent personality disorders are very capable of spreading their mentality to others. All they need is a channel. Radio and television, in the wrong hands, were used to mobilize millions of people to their deaths. And suddenly we open a channel for the craziest of crazies, and think their mental afflictions won't affect billions of people around the world?

There is no right to be heard. Over all of human history, being heard by the masses has been an extremely rare privilege. Creating a technology that allows crazy people to be heard is frankly the definition of insanity breeding more insanity. Speech is not the problem. Proliferation is.


>Insanity is contagious. What I mean by that is: Mental instability, FUD, conspiracy theories, propaganda, and simple sociopathic narcissism are viruses. No one who has witnessed 2016-present could doubt that. But anyone who knows about 1932-1945 already understood it.

What an implicitly condescending, shitty thing you state so casually: that obviously the only reason Trump won in 2016 is because he "spread" his sociopathic narcissism to others, who also likely happen to be mentally unstable and possibly conspiracy nuts. No chance that maybe, just maybe, millions of people voted for him of their own volition, no less rationally than those who voted for a frankly terrible democrat candidate like Clinton. No, the Trump voters were just mentally infected, weak-minded idiots I suppose?


I'm not talking about everyone who voted for Trump. His is not the only or even the most important species of insanity that's been allowed to spread like a virus. Yes, people have all sorts of reasons for voting in populist demagogues without needing to specifically buy their insanity wholesale. Trump's madness is a symptom and a vector, a stop on the road between Alex Jones shouting on a corner and Adolf Hitler in a bunker. The door just keeps opening wider, though.


Enough with the absurd hyperbole already. Trump's presidency was neither an Alex Jones conspiracy nutfest nor an Adolf Hitler madhouse of dictatorship. It was mostly mediocre, but hardly worse than many previous presidencies. Possibly better than some, even. I'm no fan of the guy in so many ways, but he lived up to very few of the insane worst expectations that were created when he entered office. The world certainly didn't go to hell because of it. If anyone promoted idiotic unfounded conspiracies during his presidency, it was the media endlessly harping about Russian collusion in his victory while never quite being able to provide solid evidence of a single aspect of that particular conspiracy theory. Or the obsessive fixation on the new boogeyman of "misinformation", which has suddenly become a global problem according to many media sources and politicians because, oh god forbid, a candidate they didn't give their formal benediction to happened to win a major election.


> So impeding the speech of two people is a better outcome than impeding the speech of none? I don't get it.

I mean, it's not a better strategy and it's not right - what I'm trying to say is that impeding one person's speech leads to impeding another person's speech, and that's how you end up with totalitarianism, regardless of who's in control.

The trouble is that whoever speaks loudest never respects the mechanism that allowed them to speak in the first place, or extends that right to anyone else.

So as to what leads to a better outcome, I'd say the results aren't in yet.


> So don't read them, as easy as that.

If only I knew the content of something before I read it. I would have to limit my internet use to Signal conversations with my dog to avoid most of tech’s poison machine.


That is a good idea, it is what I did. I don't visit any social networks, I don't read the news, and I stop talking to those who send me information that I'm not interested in.


Yet here you are, commenting along with everybody else on HN. Unfortunately, real-world situations are not that black and white, so they cannot be solved with such black-and-white solutions...


I'm just going to leave this here: https://www.youtube.com/watch?v=P55t6eryY3g



