Banning facial recognition is missing the point (schneier.com)
361 points by acmegeek on Jan 23, 2020 | 162 comments



While he is right, I think Bruce is also missing the point here himself: he states that this law is the wrong way to fight surveillance – but that is not the stated goal of the law.

The goal of the law is to prevent the development of a technical reality in Europe to which a jurisdiction can only passively react. The technical space moves so fast at times that reality has been made before the law can even start to think about what is okay and what isn't.

This time they wanted to say: "Yeah, it is a shiny new thing that would definitely come if the law stayed as it was, but it is such a hairy ball of mud that we ban it until we have figured out what is allowed." Or phrased differently: it is so obvious to them that this is prone to abuse that they ban it first and then figure out how to deal with it in an adequate fashion.

So Bruce's idea that the goal was to fight surveillance is a tad too optimistic in my eyes.


>The technical space moves so fast at times that reality has been made before the law can even start to think about what is okay and what isn't.

In general I think this is exactly how it should work. You cannot stop all technological developments early on and make rules for imagined problems that may or may not arise.

Imagine it's 1995. The Web gets banned for five years so that a government committee can meet to figure out what could possibly go wrong with that technology.

There are always groups that stand to lose when new technologies are introduced. And these groups are inevitably very powerful at the beginning of any new development. You can be pretty sure that nothing would go ahead without great resistance.

That said, we have to find a balance. Of course there are technologies that are so dangerous that they must be regulated fairly early on. Making some rules for face recognition and similar technologies seems entirely sensible to me.

But as a general principle I think problems should be solved when they arise. Solving imagined problems before they arise is impossible and prone to misuse of power and influence.


> Imagine it's 1995. The Web gets banned for five years so that a government committee can meet to figure out what could possibly go wrong with that technology.

Fortunately we already know what can go wrong with facial recognition technology. We have China, and we have American and Israeli and various Asian marketing startups inventing abuse after abuse after abuse. So there is already a test bed in place; it doesn't have to be the entire world.

As a general principle, it may not be a bad idea in itself. Let part of the world play with the new toys while the other part observes; after that, either the tech will be exported to the holdouts (the control group, if you like), or the regulations will be exported to the experimenters.


But the context in those countries is too different from ours to come to any meaningful conclusion about what should be allowed "here" (I'm assuming UK + US mainly).


The trajectory of the Anglo world isn't too promising right now either.


I don't remember stating otherwise.


> Imagine it's 1995. The Web gets banned for five years so that a government committee can meet to figure out what could possibly go wrong with that technology.

This analogy doesn't work because the web is something I can choose to use or not, whereas facial recognition isn't.

> But as a general principle I think problems should be solved when they arise. Solving imagined problems before they arise is impossible and prone to misuse of power and influence.

The problems aren't imagined; they're very real and already present in other tech.

Also, "figure out problems when they arise" is rather naïve; history has shown that corporations will do anything to make a buck and will lie and bribe to get away with it. So this strategy hasn't always worked so well in the past.


>This analogy doesn't work because the web is something I can choose to use or not

You cannot choose whether or not others use Facebook and get manipulated by micro targeted ads into making stupid political decisions. And yet you are affected by those decisions.

You cannot choose whether or not all the services you depend on are only available online while the physical branches in your neighborhood are closing.

Most technologies have systemic effects that individuals cannot choose freely. Isn't that exactly why some people argue in favour of "let's stop and think before we jump"?

>Also, "figure out problems when they arise" is rather naïve

On the contrary, I think it's very naive to think we could just stop the clock and think very hard until we have figured out all the consequences of the technologies we are introducing. We have to live with the possibility of bad surprises and try to deal with them as well as we can.

>history has shown that corporations will do anything to make a buck and will lie and bribe to get away with it.

But that is not just true for change. It's also true for blocking change. Would oil companies not go to great lengths to stop us transitioning to renewable energy for instance?

Just look at what's happening in the US right now. You have an administration that thinks climate change is a hoax and that we should dig up more coal and pump more oil and gas. And yet the transition to renewables is in full swing.

Would you really want this administration to be able to press the stop button on all new renewables development based on some fanciful claims about potential future risks?

>The problems aren't imagined;

Which problems? I was talking about a general principle not about face recognition. I did say myself that I think face recognition and similar tech needs to be regulated in some shape or form.


The difference here is that the EU bans something that some of its governments actually want (e.g. the German minister of internal affairs said he'd like to roll out surveillance that can do that).

I understand that things that are new should not be banned by default (as they aren't) — however, if it is quite obvious beforehand that they might hurt civil rights, I think we can make an exception.


As opposed to the Web, facial recognition barely has any positive use for an individual.


> The technical space moves so fast at times that reality has been made before the law can even start to think about what is okay and what isn't.

I think lawmakers should watch more dystopian science fiction movies.


I think it's hard for most people to tell cheap nonsense sci-fi apart from sci-fi in which the vision is plausible. E.g. there are many reasons to be concerned about human-level AI, but Terminator robots are not one of them.

Personally, the best approach I've found so far is to assume that everything that is technologically and economically feasible will be attempted by someone, no matter how perverted or evil it is. If you can imagine something as a plausible consequence of the science/technology we have, and if you can imagine someone plausibly making lots of money from it, then it's the right time to talk regulations.


I don't really agree, and in fact I think the EU already has a pretty good proactive law, in the form of the GDPR. You can't use indiscriminate facial recognition, gait recognition, or any other such mechanism if you need individual consent to process any data related to a person.

That said, specific bans for cases that the GDPR doesn't cover might still make sense. But we should be careful about de-fanging it by making it seem that any technology not specifically banned is fair game.


Consent isn't the only grounds for processing, though.


I think this article was written a few days before the EU memo was leaked. He doesn't mention the EU in it.


I don't really see what you're getting at. As is, you seem to be employing some convoluted reverse-psychology scheme in an attempt to square this law with "HN reality", where all politicians are corrupt and stupid, and all laws are either evil or pointless?


How did you get that impression? Making laws that prevent abuse while permitting the benefits of facial recognition is difficult and takes time. During that time, companies using it could cause a lot of damage. So temporarily banning it altogether as a stopgap makes a lot of sense.


> The goal of the law is to prevent the development of a technical reality

Facial recognition in the brain is just specialized object recognition, therefore any research on powerful general-purpose object recognition via deep-neural-networks is improving facial recognition technology whether it means to or not. Only a law banning all artificial neural network research seems to have a chance at stopping the technical reality here.
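The commenter's claim can be illustrated with a toy sketch (my own, hypothetical, not from the article): nothing about matching faces is face-specific once you have a good generic feature extractor, so improving general object recognition improves face identification for free. Here the `backbone` function, the image shapes, and the gallery labels are all invented stand-ins; a real system would use a deep network in place of the block-average features.

```python
import numpy as np

# Stand-in for a general-purpose object-recognition backbone: any function
# that maps an image to a feature vector will do. In a real system this
# would be a deep network trained on generic objects, not on faces.
def backbone(image: np.ndarray) -> np.ndarray:
    # Toy "features": a 4x4 grid of block averages, mean-centered.
    feats = image.reshape(4, 8, 4, 8).mean(axis=(1, 3)).ravel()
    return feats - feats.mean()

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: dict) -> str:
    """Return the gallery label whose embedding best matches the probe."""
    emb = backbone(probe)
    return max(gallery, key=lambda name: cosine(emb, backbone(gallery[name])))

rng = np.random.default_rng(0)
gallery = {"alice": rng.random((32, 32)), "bob": rng.random((32, 32))}

# A slightly perturbed "photo" of Alice is still matched to "alice".
noisy = np.clip(gallery["alice"] + 0.05 * rng.random((32, 32)), 0, 1)
print(identify(noisy, gallery))
```

The point of the sketch is that `identify` contains no face-specific logic at all: swap in a stronger generic backbone and the face matcher gets stronger, which is why a ban scoped to "facial recognition" research alone wouldn't stop the underlying capability.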


> These efforts are well intentioned, but facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we're in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it's being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

He’s right, but we should take every winning battle we can, no? Even a few notable victories could help to change public opinion about whether things are inevitable or not


“In countries like the United States, it's being built by corporations in order to influence our buying behavior, and is incidentally used by the government.”

No, he’s not right about it being “incidental”. The use of facial recognition by police is widespread and widely reported (0). Unlike other forms of digital surveillance, this one has serious and potentially deadly consequences when misused. Banning it at least temporarily is a major win for civil liberties and I would wager that anyone thinking this is a step backward is privileged enough to never have experienced police brutality.

0. https://www.nytimes.com/2020/01/18/technology/clearview-priv...


Facial recognition for Black people has been in play for decades. Generally if you’re Black you fit the description. Cameras actually are a huge net positive for this group.


I think nobody says that banning facial recognition is a step backwards. It just doesn’t solve the underlying issue and might even distract from the real problems.

Now politicians can say "but we banned facial recognition, isn’t that good enough?".


But this would be a "perfect is the enemy of the good" situation. Banning things piece by piece is easier than trying to ban everything at once.

I mean, we could curtail most of the surveillance tech development overnight if we banned most forms of advertising and burned the adtech industry to the ground. But good luck pushing that through politics.


> Now politicians can say "but we banned facial recognition, isn’t that good enough?".

And we answer "No!" and continue to push them further.


> but we should take every winning battle we can, no?

It's treating the symptoms not the cause though, and this little victory will likely serve to dampen resolve to continue fighting against the root cause: increased surveillance, invasion of privacy, and the overall devaluing of privacy.

Facial recognition may be a battle that gets won, but it could be a key moment that results in the war on ubiquitous surveillance being lost.

Additionally, this is only a 5-year victory. Must we repeat it again every five years? At least the victory in the war for encryption lasted 20-odd years.


I have noticed victories tend to embolden political groups to seek more of the same. It's why chipping away is so often used in politics. Each individual bill does not make a big enough change to upset the silent majority, but in aggregate over time you see things like a 50% reduction in top tax rates. Passing that on day one makes headlines, but a 2% drop flies under the radar, as does the next, and the next, etc.


How do you devour a whale? One bite at a time. I don't think there is any purpose to getting caught up in theoretical discussions about privacy. Privacy is won by extending the real rights of people piece by piece and sustaining that effort for a long time. There is no one-time, root-cause fix that holds forever in a democracy.

>Must we repeat it again every five years?

Yes, we must. We live in a society where people over and over negotiate how they want to live together, which means that if we advocate for something we have to make our case again and again. That is a good thing. In Mie Prefecture, Japan, the citizens tear down a shrine every 20 years because it is the only way to learn exactly how it was built.


I agree - enthusiastically - with your points, but I do have one comment:

> I don't think there is any purpose to being caught up in theoretical discussions about privacy.

There is a point! These discussions provide the context, terminology, and language for the people who understand these issues best to provide people like me - people whom our peers consider “conspiracy wackos” and “tinfoil” because of our deep-seated suspicion and mistrust of surveillance tech - with the tools to explain our rationale to the people we have influence over, via interpersonal relationships and whatnot.

This article, for me, crystallizes and defines many of my own personal anxieties about surveillance culture, and gives me the tools to better explain my concerns and apprehensions to my friends and family.

Just a thought.


> how do you devour a whale? One bite at a time.

Missed a step. You gotta kill it first.


Not really, it'll die eventually if you've taken enough bites. Might be a bit more difficult to take the bites if you didn't kill it beforehand, but still ...


The entire reason why we're facing down these privacy and surveillance issues is because they came one small step at a time. First it's rolled out for some incredibly small part of the population, gradually expanded to everyone else as people get used to the new norm. Rinse and repeat.

The way you win battles like this is by winning many smaller ones and pushing back what is acceptable or not one step at a time.


100%. It’s all case-by-case; moment-by-moment.

“And then they came for me”, in full effect.

Also, unrelated: but fuck yeah F-Zero. <3


I'm on both sides of the fence a bit. I agree entirely that victories are piecemeal, but the facial recognition issue just feels like the piece of the iceberg that's above the water. The rest of the iceberg, supporting things such as facial recognition, feels as if it will never be addressed, and that's the war that needs to be won.

My original reply written prior to the above: And yet nothing has been done about the information dragnet that is the Internet, even after the Cambridge Analytica situation got worldwide press. Well, the GDPR happened, actually, but that's only a reaction from a single country. And this has been going on, and escalating, for years with almost zero political will to halt or even slow it down.

Winning on facial recognition will be license for most governments to happily ignore the other, more insidious creep of privacy invasion.


The EU is not a country, it is a collection of 27 countries.


My apologies, I had incorrectly associated it with just Germany rather than the EU as a whole.


The Cambridge Analytica situation wasn't a founding cause for the GDPR, it came to light after the legislation was already finalized. It just happened to coincide (roughly) with the GDPR enforcement date, and helped illustrate why such regulation was necessary.


> Additionally, this is only a 5-year victory. Must we repeat it again every five years? At least the victory in the war for encryption lasted 20-odd years.

I think the idea is to ban it in Europe for 5 years while the rest of the world carries on with it, and then reassess the impact by looking at how the rest of the world has changed. If the tech hasn't moved on and there's been very little impact, great, use it. If other countries that use it are police states where citizens can't leave their houses for fear of automated oppression then ban it permanently.


Politics isn't medicine.

Banning one technology might well make banning the broader practice easier (or harder). At the very least it creates precedent: This is a reason we ban technologies for.

The notion that we must spurn incomplete victories in order to hold our resolve for complete victory has achieved approximately nothing ever in politics. One of the most successful movements of the 20th century was the environmental movement (even if it isn't perceived that way). We now have comprehensive systems for regulating and approving chemicals that didn't exist at all until the last couple of decades of the 20th century. This wasn't achieved by saying we shouldn't stop destroying nature until we reform all of society.

Tech reformers and activists would do well to learn from previous successful movements.


We’ve had to handle Holocaust Denial since WW2, we’ve had to handle anti-vaxxers since Andrew Wakefield, and conspiracy theorists since the first apes developed complex language.

This is not a "fight every five years" episodic battle; this is a daily struggle, with the vested interests on one side looking to squeeze in more of the wedge until they can walk through the door unobstructed.


The cause is the desire for profit and control.

Unless you're going to build better people, you're never going to deal with the cause. All we can do is deal with the symptoms.


> He’s right, but we should take every winning battle we can, no? Even a few notable victories could help to change public opinion about whether things are inevitable or not

"Banning" facial recognition is the worst possible outcome for people who are privacy focused.

At the moment it's useful to point people at and make them feel "creeped out". If it's "banned" the concern goes away.

I'm using scare quotes around "banned" because I cannot imagine any scenario where there won't be exemptions so big that anyone will be able to use it anyway. Facial recognition is already widely used within company security systems and within law enforcement, and the idea that these systems will be banned just seems completely non-credible to me.


Well, in my EU country the police were recently stopped by our National Data Protection Commission from implementing such a system on a public street. It's not impossible.


If they ban facial recognition, they'll just work around it by recognizing every other part of you but your face.


> He’s right, but we should take every winning battle we can, no?

Absolutely. Things like this serve to raise awareness that tech companies such as Google and Facebook are sleazy and exploitative. When people who used to brag at parties that they were “a googler” feel ashamed to mention it then some progress might get made.


At the same time, banning the one mechanism that inherently creeps people out might make them not worry about the problem as much, and we won't fight the more nefarious and less obvious mechanisms that come next.


I wonder if the EU has done any other notable things to fight mass surveillance, because if so it would mean they weren't focusing on one particular identification method. I mean maybe they are working on multiple fronts.

Also, perfect is being the enemy of the well-intentioned here.


General data retention was also banned by EU courts, I think.

While Germany is over and over trying to introduce something like that with dubious reasoning ("it will only be used against terrorists and other really bad crimes" - in reality everybody wants their piece of it, even to use against parking violations), the EU regularly puts a stop to this.

So, it's great to have this higher-level legislature to stop single countries' efforts, but even more could be done...


Yes, I suppose my post was too dryly sarcastic to be obvious, but really the EU has been doing quite a bit more than other nations or international organizations to stop the surveillance state, and it seems a little unfair to pick on one attack on a symptom and say that stopping this symptom won't do enough.

Not sure if this kind of problem isn't best solved by attacking the symptoms anyway, until you build up a sort of selection of anti-surveillance principles that can be used to decide new laws and regulations.

But I guess I'm pretty pro-EU so this kind of thing makes me grumpy.


Missed half your point. ;-)

Nevertheless, I think temporary bans are not a bad idea until we can come up with a better solution. Let's see what others make of this over the next five years, and come up with better rules. I'm totally with you on that.


Well, the GDPR helped quite a bit already. It creates a proper incentive not to aggregate PI "just for good measure".


Especially if one considers how the NSA got most of their data through PRISM from corporate partners like Google. If those never collect it, the NSA can't gather it.


I see his point, but practical face recognition is a lot easier to implement and a lot harder to defend against than all the other types of recognition. You could, for example, not carry a phone or turn it off. But your face is expected to be on show (obvious exceptions aside).

So no, it’s not missing the point. It’s a different point. He’s right, but action against face recognition is also worthwhile.


Schneier's point doesn't have to do with the specifics of, as he says, "identification, correlation and discrimination" methods, but rather he's arguing that focusing just on facial recognition instead of focusing on the larger more important question of surveillance in general is not useful.

If you just ban facial recognition, then you are missing the point Schneier's making.

Also, I think it might be easier to wear a mask than to not carry a phone, walk funny, wear bulky clothes over all skin, and never use a credit card. I don't know how hard it is to constantly wear a mask in public though. I'm sure there would be push back.


Varies based on country. Face masks are pretty normalised in some east asian countries after SARS. Coronavirus has resulted in affected areas being sold out of face masks entirely.

That said, Hong Kong tried to make face masks illegal during the extradition protests, which only resulted in even more protests.


I wear a mask outside in the winter, usually, but this makes people uncomfortable inside at best and obviously prevents you from eating or drinking. And you would not be comfortable yourself either. So I'm going to say wearing a mask at all times is not practical.


One winter, I canvassed when it was -40 C. So I wore a neoprene ski mask. Which are, by the way, quite comfortable.

I did get a few jokes when people came to the door, but everyone knew why I was wearing it. Some did call me a dumbass for doing it, but whatever.

But if it'd been during the summer?


Yeah I think the discomfort would be getting too warm and also having your spit pool up too much.


Of course it's uncomfortable


Why wonder about something you already know the answer to?


"Facial recognition doesn't discriminate people. People discriminate people."

This just reminded me of a typical gun control argument (which I don't disagree with). Banning one particular technology might be still worthwhile, but the argument is valid only after the actual damage is assessed. I think we're still missing the number. (Arguably it's very hard to measure though!)


In Canada, it’s now illegal to hide your face during a protest (a gift from the Harper years)


Unfortunately this has already been illegal in some other western countries for quite some time. In Germany, for example, since 1985 (one year late, though). It's often used as legitimation when the police use violence against otherwise peaceful protests.

In the aftermath of the G20 summit in Hamburg there were moves to tighten the law even further, while at the same time the police were using illegal face recognition software.


What about fake beards and moustaches? I've heard they're the norm in Korea. Imagine thousands of Santa Claus or Paul von Hindenburg protesting.


Illegal surveillance technology deployed warrants illegal masks, in my opinion. But you should also hide your eyes, ears, and nose to the highest degree possible. Even then, advanced image processing will probably be able to identify you if you are already on record in some form.


Alas, maybe if everyone at the Women's March on Washington wore fake beards and moustaches instead of pink pussy hats, they would have been taken more seriously.


Yet to be tried in court, I think. I suspect that unless someone is using a facial covering to hide their face while breaking the law, it would probably be found to be against the Charter. One does not need to identify law-abiding citizens, and protesting is very protected expression.

Not a lawyer though, so I speak with little authority on the topic.


In the solidarity protests for HK in Vancouver almost everybody wears them, for obvious reasons. It would be interesting to see whether Canadian police would require protesters to remove them...


In Austria it's illegal to wear a face mask if it's not cold enough, or outside special cultural activities that are "tradition" - thanks to the last right-wing/far-right government.


I.e. Krampus?


Bill C-309 is the law in question [1]. It should only affect "unlawful" assemblies, but I'd be interested to see what the functional criteria are, how much warning is appropriate, and whether a protest's status can be changed retroactively.

>Harper years

The Liberals were also supporters of this bill, including the current Prime Minister.


If the images exist, you can't actually stop someone from running facial recognition algorithms on them in private, legal or not. To really stop facial recognition, we need to stop the collection of those images in the first place.


Although I would like it, unfortunately I think the CCTV horse has bolted.


> But your face is expected to be on show (obv exceptions aside)

Only in cities where pollution isn't a major concern. Any time I'm in Bangkok I'm wearing a mask most of the time that covers everything from my eyes down, and nobody bats an eyelid.

I feel like masks also got a lot more common during the last SARS outbreak, and in other countries it's normal to wear a mask when you're sick.

I think what I'm saying here is: be the change you want to see, and start wearing one[0] as much as possible in your day-to-day life outside / during your commute / whatever, to normalize it?

[0] https://duckduckgo.com/?q=N95+mask+fashion&iax=images&ia=ima...


That may be needed as a form of protest, but I would hate to see it become the norm in the EU.

It may be polite and even prudent to wear one if you have a cold or the flu. But many people wearing masks makes society feel even more impersonal and individualistic.

May be a cultural thing to get used to, I dunno. You see them mostly in Asia and with Asian people when they visit Europe as tourists. I sometimes wonder: are all these people sick, or don't they trust the air in the EU, or has hiding their face just become a comfortable habit?


Suddenly the muslim veil makes sense. It's definitely a great way to combat face recognition. This is part of why I think a ban on wearing it in public is a bit overreaching - in public is exactly where it's needed.


> action against face recognition is also worthwhile.

There are limited resources for political action. If you focus on facial recognition, maybe it succeeds. But the data brokering industry will continue on unabated, because there are plenty of sources of data other than the face. And now there's less political capital left to fight the fight again.

Best to deploy the limited resources to fight the root cause - that private data and correlation is unregulated, and put limits and transparency laws around it so that no matter what source or what type of data is collected/correlated, the outcomes for citizens are still good.


Why is it one or the other? IMO, laws against facial recognition would start a national dialogue on data collection.

You have to start somewhere, so it’s better to start with this and then expand, versus trying to pass sweeping legislation, etc.


"we are already safe from privacy concerns because we've already regulated facial recognition! Why spend any more money on privacy, isn't it enough? We have more pressing issues to deal with!"


Statements like that make great TV news commentary, but they have little basis in reality. Governance rarely deals in sweeping structural changes. Even things generally thought of as such, like Civil Rights, came from a long series of small reforms leading to bigger reforms.


I'm far more fearful of gait recognition than facial recognition. Gait recognition is more accurate and it's much harder to fight against.


Really? Gait recognition seems a far easier thing to spoof via a limp, a change in shoes...


I had that thought when I heard about gait recognition too, and I'm still curious if the general counterpoint is the same as the next thought I had, or unrelated:

It seems surprisingly hard to spoof a limp (or any other biomechanic change, like randomly swapping shoe inserts or some prosthesis to change arm movement patterns or something), consistently, for an extended period. Also, painful. Also, noticeable to humans, like most MV-dodging techniques are.


An artificially induced limp is classic fieldwork from the "good ol days" of the Cold War.

What I've read of gait recognition though is that it's not so much about identifying you, as identifying someone walking "suspiciously" in a given area.

Or tracking a person where you can't see their face, based on their gait once recorded.

However, I haven't yet been able to find any papers that describe gait recognition with a subject consciously modifying their gait.

For example, when I'm stressed I tend to walk fast, but when I catch myself doing so, I will actively slow my pace down, and change the distance of my steps - like the old adage of "smiling makes you feel happy when you're not", I find deliberately slowing down my pace, and not taking such long strides, helps me calm myself - especially when combined with a focus on breathing.

What I can't find is any discussion of how existing gait analysis algorithms handle an observed subject deliberately varying their gait.

And what papers I can find on this are largely written by Chinese academics who don't find a 94% identification rate (leaving 6% false positive or negative) to be an issue, which I guess works in a totalitarian collectivist regime where an arrest on a false positive isn't considered a violation of human rights because, lol, human rights.

Which brings me, verbosely, to the academics researching this. How do they incorporate improving the surveillance capabilities of an oppressive government into their personal mores?

Do they take the stance that those doing nothing wrong have nothing to be afraid of?


A stone or insert in your shoe would probably work (albeit painfully). I don't think people would think you were actively dodging ML if you had a limp, unlike strange face painting or haircuts.


I've been looking into anti facial-recognition make up, and I wonder how much you'd have to apply to defeat the algorithm - like, would a line of make-up breaking up the contrast pit of an eye be sufficient?

And then there's reflectacles - glasses designed to reflect IR and visible light. I presume they'd mess with an algorithm, but could you code that algorithm to recognise them and raise a flag requiring human input?

Ah man, I love how the cyberpunk dystopian future of my youth has slowly arrived, sadly with less magic and orks, and last I checked, no shady street samurai in bars looking to hire me for my leet decker skills.

But you know, I live in CV beDazzled[1] hope.

[1]https://www.popsci.com/technology/article/2010-03/designer-a...


Do you want to consistently change it for an extended period? You'd be better off changing it for small periods of time when you don't want to be recognized, and you can stick a stone in your shoe for that.


Action against facial recognition is not worthwhile, specifically because of the public attention drain its focus will cause. The focus needs to be on the larger issue of person-profile compiling and sharing by 3rd parties.


Everyone on HN is no doubt sitting there nodding along sagely but there’s also a pretty good bet that 90% of us (nerdy, apparently in-the-know tech types) have done very little to think about cutting down on smartphone use. It’s the single biggest surveillance vector (a device that you carry everywhere that knows where you are and reports your habits into a data store we know pretty much nothing about!), but we all now assume - hilariously, in my opinion - that we “couldn’t live without our phones”.

Personally I think the battle is totally lost until people start actually thinking about the negative impact of these devices, both from a surveillance POV and a wellbeing one.


You assume people haven’t already done the cost-benefit analysis. Just like they have for driving cars, salt, and windows.


Once Librem 5 is ready, this problem will be solved for many nerdy people here. It's just the lack of choice now.


What about the Librem 5 would make you less trackable?


At least the lack of built-in tracking. In addition, the (theoretical) possibility to install any anti-tracking apps available for GNU/Linux.


Looks pretty sweet until you get to the $750. I went Nokia 3310, a snip at $65 :-)


Freedom is priceless.


My fear of facial recognition is not just governments, it's criminals. All they have to do is crawl my LinkedIn or Facebook and bam, they have my name next to my face.

Worse they know where I work or where I live.

This opens up a world of problems.

Say someone sees a stranger in a situation that the stranger may find embarrassing. Embarrassing enough that they could be extorted. They open their phone and find out who the person is. They can now make contact and threaten to contact the person's employer or spouse with images of them in this embarrassing situation.

You now have a real life avenue for this sort of thing https://en.wikipedia.org/wiki/Sextortion#Webcam_blackmail.

I know the greater problem is that we now share our details online; maybe having no LinkedIn and no Facebook is smarter.


We shouldn't ban any technology. However, we should ban a technology when it is used for _____. The ban should cover the use, not the technology itself. It's fine to make building a nuclear bomb illegal, but studying radioactive elements should not be.

Similarly, the technologies this article describes "identification, correlation and discrimination" should not be outright banned, but maybe they should be when their use conflicts with other important values (e.g., due process in criminal prosecution, privacy rights).

Recent targeting of facial recognition does not "miss the point", the reason it has become an issue is because it has started being used in the criminal justice system. No one really gets upset when it's just used to tag your photos.

I know developers love abstractions, but we should not try to build abstractions of technology and regulate those. To borrow from the article, identifying a person based on their "gait" is not nearly as serious an issue as facial recognition, even if abstractly they may seem similar to a developer. Instead, focus on the direct problem at hand, regulate it, and only after the passage of time when commonalities become clear, build an abstraction.


Why not? Do people trust the potential uses of that technology? If they/we don't and we all live in working democracies we have the right to ban it if we want.

I don't trust anybody having this technology, and the same goes for having samples of my DNA tagged with my name. BTW, I have no problem with my country's compulsory ID; I'm not showing it all the time and it's kind of reasonable when I have to, but having agents around me tracking what I do or don't do 24/7 isn't any kind of future I want to live in. And that's what this technology has been developed for, it's not a science project. You're going to be classified and put in behavioural bins, your scores will be sold and you'll end up living in a corporate version of China. The fact that it won't be your state's agencies the ones doing it doesn't make it right.


Also see the discussion 4 days ago from the same essay published in the nytimes: https://news.ycombinator.com/item?id=22098021


I think the scariest thing is that in a world of unlimited data retention - a future powerful bad actor will have access to all our past behaviour - whether that's liking the wrong political party, or visiting a gay bar.


Schneier is missing the point.

> The whole purpose of this process is for companies — and governments — to treat individuals differently.

This isn't true. It's about population control, not treating individuals differently. He has an extremely close-minded view of the future here. The point of this isn't to treat people differently; it's to contain the Overton window and basically thought-police entire populations. It has nothing to do with the individual being treated differently; that is merely the lipstick-on-the-pig moment where it happens, but it isn't really the core problem.

> The point is that it doesn't matter which technology is used to identify people.

It sure does matter! Bugs in the code and hacks of the system itself are all 'implementation details' of the 'technology' used to do the tracking. Techniques that are especially susceptible to both of these make a surveillance technology dramatically more dangerous.

The details do matter, because they add up to a larger, unknown future. Schneier has a bizarrely close-minded view of the dangers of facial recognition.

I haven't heard any of the proponents of the ban say that it is the only surveillance that is happening or that should be banned, but that seems to be a central tenet of his point.

Some progress is better than no progress, especially in this case where things move quickly. I do not think we are at risk of a population thinking that its problems would be solved if we could just ban this one thing. It's a straw-man argument that I haven't even heard elsewhere.


how is "treating individuals differently" not an aid to population control?


> This isn't true. It's about population control

Uhh, why do you think that? So far, I only see evidence that mass surveillance is to catch criminals and for marketing.

"1984" was a work of fiction, and it seems more a warning about perpetual war, totalitarianism, and the communist concept of the "cult of personality" than just surveillance.


I don't see how 1984 is relevant here. The surveillance we have today far surpasses the simple ideas in that book.

> Uhh, why do you think that?

Because the people building the facial recognition pretty much say so? "Organize the world's information and make it universally accessible" does not sound like marketing opportunities or criminal-catching to me! It sounds like a desire for total corporate control over the physical and digital world.

Not to mention, it is the logical end of using this technology; "power corrupts, absolute power corrupts absolutely".

Having such a powerful and omnipresent surveillance system in place enables totalitarian control over populations. This happens through chilling effects, through terror, and through the mental boxing-in of populations ceding their free will to the government. Knowing that your own face is tracked everywhere you go, sold off to anyone, and owned by your own government or other governments is dehumanizing. Do this for decades or centuries and you will have a very dehumanized and defeated population.

> So far, I only see evidence that mass surveillance is to catch criminals and for marketing.

I don't see any evidence like you are describing. I feel like we're living in different worlds.


Marketing is population control. Getting someone to buy something is the same as getting them to believe something.


He's right to an extent, but face recognition is instant, silent, works from a distance, allows near-perfect identification of a specific individual in many cases, and is particularly onerous for the individual to evade compared to other forms of tracking.

That makes it entirely reasonable to single out and ban it, whilst also thinking about and pushing for further curbs on surveillance.


Is it though? I haven't read about any deployed solution that would fit "near perfect" identification. I'd be thankful if you could point me to some "working" solutions.


Strange that Schneier omits the new CA privacy law, which looks to have teeth. That said, no law will be sufficient unless it's binding on the state as well as private actors.


If you accept that ubiquitous surveillance, for which this is one of several technical enablers, is technically feasible, is being implemented by multiple parties, and is basically happening, then the only logical conclusion is that there is going to be way more of this stuff in the coming years and decades. This will become normal, whether we like it or not.

IMHO trying to stop this or slow it down is an exercise in futility. It may postpone the inevitable outcome for some short time but the outcome is inevitably going to be multiple parties tracking our every move either openly or covertly.

I would like to emphasize the notion that it's not just going to be a handful of parties doing this. This stuff is rapidly becoming a commodity. Just because you are in the US or Germany (in my case) does not mean Russia, North Korea, Iran, China, etc. are not tracking you (in addition to your 'friendly' local secret service). Assume the worst; you probably are already on file in multiple countries in some form. Also, who says it's just going to be nation states? Several big corporations exist now that basically have a bigger valuation than the GDP of most countries.

The fact that the parties that are going to do the tracking are mutually hostile (or at least not very friendly), also represents an opportunity: they'll be watching each other. Effectively nobody is excluded from being under surveillance; including those doing the surveillance. That means anyone breaking laws has to worry about being observed doing so and has to assume that he/she is going to be found out in case something inappropriate happens. IMHO this is a good thing and effectively the only defense we'll have against this being abused and enforcing any privacy legislation.


Mutually assured destruction?


Meanwhile in China, local governments even install surveillance cameras on top of a 3,900-metre mountain: https://twitter.com/kidrulit/status/1216197264202293253?s=21

Other than the ethical problems, cameras cause urban-planning and aesthetic problems.


A few years from now, cameras and vision software will be advanced enough to recognize a person just by analyzing the small differences in the colors of their clothes, how s/he walks, or by mapping micro-scratches on his/her car, without any need to take pictures of either the plate or the face. I'm also pretty sure we could already build a model of a walking person or animal just by having it walk or run on a mat equipped with weight sensors, so that after due training, recording when someone walks in or out of a place would be doable without cameras.

The ban should be on the final purpose, that is, pervasive generic surveillance. Otherwise it keeps being a moving target in which the most powerful party is constantly one or more steps ahead.


I see that these bans regulate use by police.

But how could any of them affect use by federal agencies? Which would then share information with local police.

I mean, police lie to courts about StingRay use, or data from the NSA via SOD,[0,1] and do parallel construction.

I really do think that privacy in meatspace is hopeless.

0) https://www.reuters.com/article/us-dea-sod-idUSBRE97409R2013...

1) https://www.deamuseum.org/wp-content/uploads/2015/08/042215-...


I agree with you, and I'll add that they will simply (continue to) use companies like Clearview AI:

https://www.nytimes.com/2020/01/18/technology/clearview-priv...


While most of these systems work well enough to identify a person, there are a number of well-known ways to defeat them. One is simply to apply newer technology to cracking the algorithms used inside these devices. Improvements in processing power from one generation to the next, and a proliferation of information about where the vulnerabilities are, apply to biometrics as well as other technologies.

"How Secure Is Your Face?" https://semiengineering.com/how-secure-is-your-face/


Sometimes a technology is so cheap and easy to use that banning it becomes absurd, and you just have to accept the new reality.

Face recognition isn't quite there yet, but in 5 or 10 years every kid with a phone can do this.


Technology is never inevitable and I don't put much faith in people that cannot shape their future.

If there is political will, it is pretty easy to ban public cameras for example.

The hardware is already mass produced and as cheap as it gets, but not everyone became as paranoid as Londoners.


Would you ban cell phone cameras, cameras in cars (mine came with 3 cameras) and Ring cameras?

They're all used in public.


No, I would ban cameras for surveillance purposes. This is pretty easy to legislate and the basic rule. There would be some exceptions to this naturally.


>2031: Google defends the swiveling roof-mounted scanning electron microscopes on its Street View cars, saying they 'don't reveal anything that couldn't be seen by any pedestrian scanning your house with an electron microscope.'

https://xkcd.com/1204/


Schneier's points are dead on. One of the few talking sense here. Biometric and related identification technologies are here and multiplying. We need to regulate data sharing and 3rd party data compiling, regardless of the data. Dirty data and incorrect data will haunt people, existing in who knows what databases. This is what needs to be regulated. Not the individual identification technologies. Stop the abuse of new tracking technologies before they are invented. As well as end the advertiser tracking that is out of control. (FYI: I write FR software.)


Data compiling is just too easy at this point. Anyone can spin up some instances and run web scrapers to compile profile pictures, writing samples, etc. Many data leaks are available as torrents or hosted on pay-for-access sites, likely from a variety of jurisdictions.

Even with some kind of global cooperation, can regulations really solve the problem? They might prevent corporations from using the data for advertising, but I don't see them as actually addressing the privacy issue.


The regulations would prevent random scraped data from being compiled into official profiles used for official purposes, such as credit checks, security scans, and anything that can legally and officially be used as verified information about you. Regulations need to be in place to prevent junk profiles being compiled from dirty sources and then used as official information.


What is your view on the decentralization of facial recognition?

I believe that surveillance cameras can help to decrease physical violence, but my main concern is that the algorithms and embedding databases are out of my control, even for validation.

Hence, are there any authorities that publicly disclose their FAR metrics? So far I have only come across the UK case: https://bigbrotherwatch.org.uk/all-campaigns/face-off-campai...


> Finally, we need better rules about when and how it is permissible for companies to discriminate. Discrimination based on protected characteristics like race and gender is already illegal...

This is the key point of the article, but unfortunately he doesn't give any solutions or even hint at them. Does anyone know if he does elsewhere?

It also feels a bit disingenuous for him to use the word "discriminate" here.

"Discrimination" is generally understood to mean giving someone an unfairly negative experience due to inherently irrelevant factors, such as denying someone a job due to the color of their skin.

I think most people would agree that "discrimination" is the wrong word to use when showing different people different ads due to their browsing history, or giving different people different credit card offers based on their credit scores, or charging young people more for car insurance based on their age. All of these are based on what people generally consider to be non-discriminatory, evidence-based distinctions -- and so words like "targeting" or "market segmentation" are more appropriate.

If he wants to argue for a better framework for what is considered legitimate targeting/segmentation or not, I'd love to hear it. Otherwise there's not really much to say?


Schneier's title made it seem like we should not be banning facial recognition, but in the blog post itself he spends lengthy verbiage arguing that there are three components of a surveillance state we need to do more to fight against. In other words, we need to do more to ban/regulate those, rather than undo the facial recognition bans.


Interesting that he mentions festivals banning it. There's a startup in LA that's trying to add facial recognition to the ticket gates of large venues like stadiums, for both security and marketing. I wonder how 50,000 people feel about being facially recognized / tracked just to go see the ball game or a concert?


Proves we are in hell. Some geek ape figured out how to use a rock as a tool; the rest of the apes started throwing rocks at each other and just took what the other apes made. Some geeks figure out nuclear power and a way to provide endless free energy; the rest of us turn it into a weapon. Some other geeks make a foolproof way to communicate for free; the rest of us weaponize that too. Can't wait for Skynet to clean out this crap. Our brains just cannot handle complexity.


It took me a while to grok, but I think that people like Schneier want to take the ideology of online anonymity and apply it IRL.


No, I don't think this is it. IRL actions are forgotten over time and they're not indexed against each other. If all your actions can be tied to your face with automated tools, everything you do can be recorded, indexed and cross-referenced.

Now maybe that protest you went to as a college student shows up every time you attend a job interview. That time you drank too much and were seen peeing against a building by the building's security camera is available to someone who was recording the unsecured stream.

People from old YouTube videos are now locatable because their faces also show up on Facebook.

That cute girl on the bus now gets stalked home because the creepy guy that likes her can find her address.

The guy you cut off in traffic has a dash cam, he uses it to find out who you are and is waiting for you at work the next day.

The picture of you at a party on someone else's Facebook now shows up in every employer's search, as they can now search by photo.

How do witnesses in criminal trials disappear now? They might show up in someone's publicly posted holiday photos or on a webcam feed that hasn't been locked down.


Indeed, the surveillance state that exists online vastly exceeds the surveillance state that exists offline (in some countries) right now.

Schneier's point is that we're failing to regulate against surveillance in general, and by simply focusing on banning a single form of surveillance, we're ignoring the dozens of other mechanisms that can be used both online and offline to track every single thing we do.


I don't think you're taking this to its logical conclusion. Crime is increasingly difficult to pull off with additional layers of tracking. If someone uses systems like this to harass people, that harassment can also be tracked.


Correct. The problem with lack of privacy is because it is often unilateral. If there was actually more transparency across the board we might actually opt for less privacy.


We live in a different time "when people can be identified and their data correlated at a speed and scale previously unseen, we need new rules."

I think he's just trying to think and get others thinking about "identification, correlation and discrimination" technologies. Obviously you can't have perfect anonymity in real life, but is there something we can do to ensure that our lack of anonymity is not used improperly?


What is the ideology of online anonymity?


> What is the ideology of online anonymity?

I imagine it's something like - Biological and Social constraints have restricted human endeavors. "Online" in terms of the "internet" was an experiment to test this theory.

It both succeeded to a lesser degree than expected and failed, primarily due to social conditions of the times. Wealth disparity (motivating more bad actors than good), and outlet for those in psychological crisis (who tend to have a lot of time on their hands), juvenile behavior (gaming as a gateway) and the weakness of the legislation/prosecution for when cursory privacy invasion was enabled enough to track down bad actors (due to legislators being relative luddites).


I think there is some expectation of privacy online. No such expectation exists in a public space.


>No such expectation exists in a public space.

It sure as hell does in any practical sense. I expect that I can walk around town and that people I don't know don't know my name, phone number, purchase history, etc.

Let's say I want to buy a butt plug or something. I could go to the sex shop in town, make sure nobody I knew was around before I went in, buy it and leave. If I wanted to buy a studded electro-shock butt plug, and wanted to take extra precautions against being discovered, I could drive to another town and buy it there.

Now suppose I'm a government whistle blower trying to expose some really nasty government coverup or conspiracy committed by the FBI. I'd definitely take the same precaution. I could leave my phone, go several cities over, walk up to a public mail drop and drop off my envelope of documentation in the USPS box and historically, could fully expect to remain anonymous in doing so. That's still arguably safer than sending an email and hoping I've perfectly configured all my network VPN or tor settings to try to ensure there aren't any leaks.

That's NOT the case with facial recognition. Particularly when that info can be processed and categorized into a form where someone could just query an identity and get a complete historical tracking of a bunch of places you've been prior to any sort of investigative interest.


You walk in and your neighbor is working the cash register. Now he tells everyone you know.

All you’ve done is try to obtain privacy, but you can’t walk into the shop and demand that your neighbor shouldn’t be there.

The "expectation of privacy" is really a hope that you've masked your identity well enough, which is really the same thing you do online.


>You walk in and your neighbor is working the cash register.

This is a total fantasy.

1. If I was concerned about it, I probably would have noticed before entering that he was there.

2. If I knew him, I'd probably know where he worked and could avoid him.

3. If I didn't catch myself in time, I could walk in and buy something mundane, or nothing at all, or a gift card.

4. If I was really concerned, I'd go the next town over.

5. There's a massive difference between taking a very low-probability risk of being identified, multiplied by further low probabilities of it being noted, remembered, and spread around, and a 100% chance of being categorically identified and logged by an automated system.


I disagree. If someone followed you around filming you while you were in a public place, you'd probably object (or most people would).


There is. For example, if you were to put your phone in someone's face as if you were recording, most folks would have a problem. Now, where I might agree is if you meant there is no expectation of anonymity.


Don't I have even more privacy in a public space? Sure, people know what I look like, but not my name, my email, the contents of my backpack, since when I have been a member of this forum, etc. If they see me again the next day they might not even be sure that I am the same person (unlike unique usernames with history).


> No such expectation exists in a public space.

I have that expectation for most of that space.


The internet was originally (for good or bad) the wild west. You could be anyone you wanted on the internet and for a lot of people it was an escape from ordinary life that may have been suffocating.

On my Discord, I have a homosexual Nazi (I don't think he is a real Nazi; I think he is larping, because I believe he works in a very PC environment), a socialist transsexual woman, a half-Mexican/Chinese American immigrant who just got his first job in the USA after he got his green card, and many other personalities. These people can be who they want, and if they want to stop being that, they can just leave and it is gone forever, and it is all harmless.

Flamewars, trolling and some drama (stopping short of actually harmful stuff like doxxing, swatting, etc.) were sometimes kinda fun.


Bruce is missing the point too. He does see that the problem with facial recognition is the backend database with data about individuals. The way to index this database is not the problem, and that's where facial recognition comes in.

Facial recognition used exclusively to access my hotel room? Fine! Even a fingerprint. No problem. As long as that data is not linked to other databases, and is erased, at minimum, when I request it. And, most importantly, it needs to be protected against cross-referencing with government databases.

Because that's where the real problem lies. Cross-referencing. Is a store allowed to remember data about me? Sure. That's what store clerks do. The employees at the post office don't ask for my name anymore, they use facial recognition (the wetware kind) and then go look for the packages with my name on it. Great!

I go to a psychiatrist and he proposes that if he diagnoses me with something I can get all the visits paid back. Ok, whatcha got? Well, autism seems somewhat justifiable and is very popular at the moment. Okay. Now this data gets passed to the government in my medical file, cross-referenced to my insurance, passed to them, and now I can't renew my car insurance. There's special cover for that, more expensive, of course. Even worse: it got cross-referenced in the government itself too, and I now have to get approval from a psychiatrist to get my driver's licence renewed, every time.

Okay, so I contact the psychiatrist, and this cannot be removed from my medical file ("because then I could sue medical professionals and they wouldn't be able to defend themselves using the data they have"). Okay, fine, YOU can keep your notes on me if you must, but I want it out of my government medical file. Nope, that system just doesn't support that. We can add some additional explanation if you want, but that's all.

So I feel like the needed laws are:

1) Any medical data is off-limits for cross-referencing of any kind, with no exceptions. It is also off-limits for the government and cannot be used for traffic, tax, ... purposes. Even law enforcement should not be able to see this data under any circumstances. If such data is needed or important in a case, a judge can call my doctor to testify, to answer specific questions, and that's the absolute limit of government access.

2) For any data you record on me, you need to specify what it will be cross-referenced with. This goes for companies, but ESPECIALLY if you are the government. There must not be any consequences for saying "no". And when asking permission, list only explicitly enumerated, named companies/departments and databases, with a clear description of what that data is used for, and nothing outside of that.

3) I want the ability to withdraw that permission at any time, which means ANY system that it was cross-referenced into must delete that reference.

4) I want the ability to delete any data about me that was passed on AFTER THE FACT, ESPECIALLY in government databases, even if I initially didn't tell them not to pass it on.

5) I want something like Google's privacy dashboard, but for the entire government (ideally also including companies' data), with buttons to delete this data that actually work.

If you follow these rules, feel free to use facial recognition, fingerprints, heartbeats, ... to index the data you do have on me. Not a problem. I can always demand you delete your data and start from scratch though.


Because the conversation is so lopsided, I feel like somebody has to ask: what opportunities are we giving away when banning face recognition tech?

An obvious one would be no more purse snatchers, ever. Almost no rapes, far fewer violent crimes. No lost kids. Lives and wallets saved.

What else? This feels like the kind of tech that creates its own broad fields of use where there were none before. It might take time or imagination to see where it goes.

And if - just saying - if the benefits are good enough to stop and think for a second, aren't there any better ways of skinning this cat than banning tech? Might be worth a brainstorm.

I'm not a big fan of the EU's GDPR - I literally just fired a client last week on an email consulting gig because the threat of unpredictable, huge fines is too big for me. Plus I freaking hate cookie popups. But I have to admit it's also doing a lot of good - if companies can't store my IP address without my consent, I'm pretty sure they can't store my facial profile and daily movements. Which leaves the choice in the hands of the consumer, where it should be. I have Google Location enabled on my phone - it's creepily accurate, but I like it. Maybe you don't. Let's not choose once for everybody.


No street art. No protesting. Social credit. No way to break away from the stigmas assigned to your face. No speeding. No jaywalking. No pissing outside. No way to rebel.

I mean, it wouldn’t really stop crime. London has something close to total surveillance, and they have a lot of crime.

But if you really want that sort of thing, you can actually have a taste of it in parts of China.


This comment reinforces the OP's point: opponents of facial recognition seem to approach the world with a particular value system that includes a disagreement with several existing laws. Instead of working to get these laws changed, facial recognition opponents advocate bans on technology that would make these laws easier to enforce. Why should they succeed?

> No street art

Good. Graffiti is a blight.

> No protesting

There's protesting and there's "protesting". Lawful protest will be unaffected. Unlawful protest involving criminal behavior will become much more difficult. Good outcome.

> No way to break away from the stigmas assigned to your face

In an age of background checks, your face is the least of your worries. Are you opposed to background checks?

> No speeding

Speed cameras already exist and don't rely on facial recognition to issue tickets

> No jaywalking
> No pissing outside
> No way to rebel

Why should people get to break the law?


> Lawful protest will be unaffected. Unlawful protest involving criminal behavior will become much more difficult. Good outcome.

What is perfectly legal today might be criminal tomorrow. Criminalising lawful protest is one of the first things a despot would do.


> Why should people get to break the law?

Because laws are made by people and as such can be flawed.


Laws can also be unmade by people. What gives random individuals the right to nullify laws made by everyone? Don't like a law? Get it repealed. In the meantime, you have to follow it.


> In the meantime, you have to follow it.

You would also like to expect that an unjust law can't do too much harm while it's in the process of being repealed.

And repeals can take a long time. If people could easily organize to repeal a law, or a government that suddenly became despotic, according to majority opinion, we'd live in a different world. What we're trying to do here by banning face recognition is to make mistakes in laws less costly - sometimes far less costly.


The Holocaust was legal. No one ever entered a Gulag without a sentence. The Uyghurs of China are being persecuted within the law.

Laws aren’t a model for society, or at least they shouldn’t be. Ideals and morals should, and throughout human history we’ve needed a metric fuckton of law breaking to get to the democracy we have now.


What's street art got to do with it? In the city I live in, there are areas specifically reserved by the government for pro-grade graffiti. It looks great. These areas are where people really are too, it's not like they're shoved miles out of the city centre.

No protesting? Why not? Democratic countries all have laws on the book that allow and regulate protests.

Social credit? Why does that depend on face recognition?

All of these are government related. I think there's a huge problem with such anti-face-recognition arguments, namely, all of them can be implemented by just interrogating the phone in your pocket. Phones are linked to contracts, which are linked to verified government-issued identities. Phone companies can track their location very precisely, especially now with 5G. So governments have had the ability to track people en masse for decades, and have done it. Snowden showed us the PowerPoints and ... nobody cared. Some people in the tech industry did, but wider society? No. In practice people prefer safety and low crime.

That's why I've come to see all the EU's privacy actions as just posing for political reasons. Do any of these laws make NSA-like organisations illegal? No? Do they stop police using ANPR or phone triangulation? No? Then they are just for show. GDPR is an especially pernicious offender as it's so badly drafted, but the cookie law is a clearer example: it makes no difference to anyone's privacy but does make life in the EU worse on a daily basis. Banning face recognition is the kind of micro-managing that governments throughout history have proven very bad at and which are riven with unanticipated side effects (well, unanticipated by the regulators).


> Obvious one would be no more purse snatchers, ever. Almost no rapes, much fewer violent crimes. No lost kids. Lives and wallets saved.

That is unrealistically optimistic. At best, purse snatchers will move to areas out of view of the cameras or cover their faces. Like all these proposals, it is only effective against non-criminals. Also, "car pulls up, kidnaps a kid" or "kid runs away from home and hides" seems entirely unrelated.


Looking into it more, it seems reality is even worse than I thought. Surveillance not only fails to decrease crime, but can actually lead to an increase (article in German):

https://www.bz-berlin.de/berlin/umland/straftaten-in-branden...

Interesting. "Crime in Brandenburg increased ONLY in video-surveilled regions".


That screams "better reporting".


Sounds plausible, but the police disagree.


Looks like police are saying there's an unrelated reason. From google translate:

"The police explain this with fights among young immigrants who met in 2018 in the camera area. Now the situation has calmed down. There are no numbers."


There is pretty heavy surveillance in prisons. Does that mean there are no rapes?

Surveillance is a control mechanism: every time you are caught on camera, you may in the worst case have to justify your behavior. To suggest that surveillance eliminates crime is intuitive but wrong.


How would facial recognition stop those crimes, exactly? They would still happen.


Reasonable post, downvoted by the "quick thinkers in the horde". The topic has become too emotional for clear discussions about FR anymore.



