So much attention is always paid to accuracy when facial recognition comes up. Even if it were 100% accurate, it's a technology that makes mass surveillance too efficient. Prohibitions need to also extend to the private sector, with the exception of, say, facial recognition for personal use (e.g., Face ID).
>So much attention is always paid to accuracy when facial recognition comes up.
Because many people are OK with mass surveillance as long as it's used fairly. The problem they see with this tech isn't that it will create the kind of world we don't want to live in, but that it unfairly targets minorities and/or the poor more than other people. They don't understand that it can never be fair: the institutions that run it can never be fair, because there will always be out-groups (that's how human nature works), and those out-groups will get disproportionately screwed unless we limit the ability of the majority to screw them.
Historically speaking, when the world was small towns and rural, “surveillance” was a fact of life. Everyone knew someone who knew someone else.
The big difference is that surveillance was localized and didn’t follow the person.
If someone committed a crime, or did something against local mores and got ostracized, they could skip town and settle somewhere else to begin anew. Now, of course, being the new person in a new place, you were under scrutiny, but as long as you followed local practices your old peccadilloes or even crimes didn't follow you.
> The big difference is that surveillance was localized and didn’t follow the person
That is a big difference. But the bigger difference is how one-sided the new surveillance is. In a small town, they knew everything about you, but you also knew everything about them.
Today, parking regulations in my town are enforced by cars that circle around scanning every license plate parked on every street. I could probably find out who has access to that data if it were a priority for me. But what's stopping a private entity from doing the same thing? I'd never know it happened. Cell carriers selling my location data? I could easily have never known about this. Ad tracking companies having a record of most sites I visit? I don't even know who they are other than a few major ones.
David Brin went down the rabbit hole of asymmetric surveillance in his novel Earth. He also explored the idea more in the non-fiction book The Transparent Society. In both he imagines a world where privacy is essentially illegal. This levels the field between mega-corps/governments and individuals: instead of the false sense of privacy we have now, everyone has equal access to everything.
Even if "all the data" were accessible by all citizens, there would still be a massive asymmetry present in the government's ability to process that data into useful information.
Realistically, people would build tooling — either free, ad supported, or SaaS-based — that lets the average person synthesize publicly available data. Just making the data available allows motivated individuals / groups to do a lot of the work upfront, and then share the fruits of that work with everyone else, either for free or for a fee.
You see this already today with data from the Census Bureau, FRED databases, COVID tracking tools, think tanks, etc.
Governments and corporations. The FAANG companies clearly have enough processing power to churn through an entire nation's big data streams.
Although, if one's goal were a town or even an individual city, I bet enough infrastructure to sift the data is buildable or rentable by individuals or small groups of individuals.
It becomes an interesting world when non-profits can afford big data resources.
I think that's our best option for avoiding oppression. Case in point: the video of George Floyd. If we had widespread public (by the public) surveillance, I think people would feel safer. If someone is "disappeared," hopefully there'd be evidence that the ACLU could pursue.
The "extraordinary rendition" program the CIA was using to disappear terror suspects into places they could be tortured was discovered in part by airplane-spotting hobbyists. Because planes are extremely hard to hide, and there are people who watch airports to see what takes off and lands for fun.
When they pooled their data, they were the first group to notice the military had started running flights to and from locations they didn't normally fly, and it didn't take much investigative journalism after that to discover those planes were carrying people.
Not only did this locality allow people to recover from past mistakes, it also limited the power of any one actor. As soon as a town loses faith in a ruler (or other large actor) that ruler loses the ability to locate more radical dissidents in that town. That's how revolutions are possible.
With total surveillance, any revolutionary movement can be efficiently destroyed before it gains any momentum. That removes an important check on any government's (or corporation's) power.
I think it's a big problem because many people commit crimes because they are poor or don't have a solid family unit. Once they get into the system we don't provide resources to rehabilitate them; we basically train them to become criminals. And when they get out, even if they have satisfied what society asked of them, we make it really hard to recover with a felony on their record. https://www.nytimes.com/2018/10/16/magazine/felon-attorney-c...
Being able to skip town and move somewhere else is good if you've refused to fight a duel of honor or sent love letters to someone of the same sex; not so good if you've killed a few people.
True. It had defects; however, I think for serious crimes they'd attempt to catch the person (wanted posters), but for less serious things they would ostracize them and have them pack their bags.
I have yet to encounter a single person IRL that wants to live in a society where the authorities know their exact location at all times (yeah yeah cell phones are gps trackers, whatever, you can leave them at home but you can't leave your face).
Or other private citizens. What's stopping someone from creating a mesh of Ring cameras and using photos scraped from social profiles to let individuals track one another's whereabouts in real time?
Technically? Nothing. Legally, this would be a huge privacy incursion. In the EU, we have, among other things, a (usually constitutional) right to privacy, anti-harassment and anti-stalking laws, etc.
Also, logistically it is nearly impossible. The idea falls apart when attempting comprehensive surveillance: there is simply too much noise and too much uncertainty in low-quality, long-distance, unevenly illuminated video. I write FR software, and large installations typically fall apart because the humans are simply too lazy; they delegate too much that should not be delegated, they are lax, and frankly incompetent. Outside of the orgs developing the tech, FR is too complicated for under-educated users like the police. At minimum, you need to understand probabilities better than a typical consumer.
How does cross-camera object tracking fare? Don't need to recognize someone if you can follow them until you see their face or their car or destination clearly.
I wouldn't have a huge problem with it, particularly not if it includes the authorities themselves.
I used to see privacy as empowering but I've changed my opinion more and more to seeing transparency as more important, across multiple dimensions, not just policing.
Whether it's tax avoidance and financial havens, human trafficking scandals, sexual abuse victims being empowered by social media to communicate and hold people accountable, watching the police (very relevant), it seems more and more to me that privacy is mostly a tool for the individually powerful whereas transparency is a tool for collective action and impartiality.
For all I know I'm being tracked that way by satellite already. It only matters when things become operative, if someone harasses you or something... but then they'd be easily traced and prosecuted.
Especially since, even if one were to assume the current government were a perfectly democratic one that would use technical abilities only for good, who can guarantee this for a government 8 years down the line?
To be honest I had expected the GOP to come up with ... let's say substandard candidates, but Trump is on a level no one would ever have thought possible. And there are people way, way more authoritarian than him in this party.
Yeah. No more hoping for a cyberpunk future where people survive in the cracks. If surveillance tech grows, we'll get a society with nowhere to hide at all.
And shims in their shoes to hide their gait, and enclosed in plastic to hide their scent from artificial noses and to prevent shedding DNA, and they'll sneak around the sewers to hide from satellites, and carry lead weights to hide their mass from sensors, and foil suits to hide from mm-wave devices in doorways, and fake irises to hide the iris pattern viewed in non-visible frequencies...?
Hoods and glasses only work if everyone is doing it. In certain parts of my town you can tell the drug runners by their "uniform", primarily pulled down hoodies and a hand down their trousers.
>Hoods and glasses only work if everyone is doing it. In certain parts of my town you can tell the drug runners by their "uniform", primarily pulled down hoodies and a hand down their trousers.
I'm not sure I follow your logic. Your experience seems to prove the point that pulling a hoodie down helps conceal your identity, I assume that is why the drug dealers do it.
The focus of the article is facial recognition, which is commercially available today. Your examples all require some type of initial identification to know where to check for someone's mass, scent, DNA, etc.
I agree that the government shouldn't be able to deploy this tech. We should have some expectation of freedom and pseudonymity in public. However, everyone always forgets that it is trivial to track your (almost) exact physical location at all times with our constant mobile phone use. We don't seem to mind this much at all.
I would really like to see one of those movies now where they identify the unknown guy by photo as a big time terrorist only this time it turns out he's a janitor in the local high school, then it can be one of those feel-good cop - unlikely partner action-comedy flicks, but only halfway through, the first part will be trying to catch the terrorist who keeps getting away from them by doing janitorial work at unexpected junctures.
the janitor could be played by Ryan Reynolds, when they find out he is a janitor they don't believe it "you're too attractive" then he gets upset because being too attractive for being a janitor has kept him from getting promoted all these years.
Cop played by Samuel Jackson
the terrorist played by Ryan Gosling,
"these guys don't even look alike"
response Samuel Jackson: they're both too good looking to be janitors.
Ryan Reynolds: why does everyone keep saying that!
later in the movie - Samuel Jackson - I have had it up to here with all these good looking white guys fing up the mf*ing facial recognition!
Idris Elba should be in this too. somehow.
on edit: maybe at the end Idris Elba is brought in as another person the facial recognition identified as the terrorist.
everyone is "how is this possible!"
Ryan Gosling: I don't know, he's pretty good looking, I'm kinda flattered.
Ryan Reynolds: yeah, the computer thinks I look like this guy, wow maybe I am too attractive to be a janitor!
Sounds a little Amish. I ended up in Amish country when traveling last century, and it was a little weird. But I went to a house museum, and it was interesting.
One of the things that stuck with me as different: they take a little time to look at a new technology and decide if they want to use it. I wonder if our embrace of tech as being "neutral" is correct sometimes.
"They're more cautious — more suspicious — wondering is this going to be helpful or is it going to be detrimental? Is it going to bolster our life together, as a community, or is it going to somehow tear it down?""
Tangent here, but after visiting the Pennsylvania Dutch area a few years ago, I had this mini-epiphany of a vision of an alternate universe where most of society lived in small communities like the Amish do, that were mostly self-reliant and farm-based with cottage industries, BUT in which the communities embraced technology.
Basically, with 3D printers and solar power and local grids and machine shops to fabricate for local needs, and a loose mesh network to connect these decentralized nodes/communities together (like Mastodon, instead of the paper-and-print Amish newspapers that exist). A kind of techno-libertarian-Amish blend, to form a society that was more resilient and modular, rather than hyper-centralized, dense, and easily swayed by viral influence/behavior/trends or literal biological viruses.
EDIT: Of course it would never "scale" since such a system probably could not support higher population levels at all, but maybe that's okay in the ultra-long-run or some post-apocalyptic rebuilding future (also dangerously veering into population control and genocide-ish topics).
> Basically, with 3D printers and solar power and local grids and machine shops to fabricate for local needs
None of those could exist without the significant science and R&D that is only possible when people leave their local areas to congregate in cities and colleges where information is shared and built upon, along with a lot of money coming from the government and big companies. And the issue is that when people leave for that purpose, many of them leave forever and stay there.
There's an activation threshold, absolutely, which requires large-scale central investment to cross for the very first time. But like, if you had one Star Trek-style replicator, and it was able to make more replicators, then once you've crossed that threshold you don't necessarily need all that massive R&D infrastructure as much.
It's also kind of a difference in strategic approach: pool resources and create cutting-edge institutions, or adopt a more decentralized approach where, yes, you might not get as quick a pace of innovation, but each community is more self-sufficient. So you might not ever get a fancy cure for cancer or CRISPR technology in this alternate universe, but most communities might have their own local clinics, more local nurses, and maybe better overall health outcomes by focusing on common treatable problems rather than pushing for cutting-edge innovation. Similarly, you might have fewer PhDs, but more high school graduates or bachelor's-level education.
Anyways, nothing about this is all that realistic or anything, just some idle world-building in my head.
"Nothing in (b)(1) shall prohibit Boston or any Boston official from:
a. using evidence relating to the investigation of a specific crime that may have been generated from a face surveillance system, so long as such evidence was not generated by or at the request of Boston or any Boston official;"
So if a third party, say the FBI or DEA, provides info from face surveillance systems to Boston without Boston specifically requesting it, they could use that.
Or a private firm that sets up to perform FR as a "public service" while collecting valuable person-traffic marketing data.
Realize, people: FR enables the physical world to be overlaid with website-like tracking ability. This is hugely valuable to business, and once they figure that out, they will push FR just like they pushed invasive tracking advertising all over the web.
Wouldn't that private firm count as a third party?
From the article: The city council unanimously voted on Wednesday to ban the use of the technology and prohibit any city official from obtaining facial surveillance by asking for it through third parties. The measure will now go to Mayor Marty Walsh with a veto-proof majority. Walsh's office said he would review the ban.
It’s not just about the percentage of false positives, it’s about how much easier it is to generate false positives.
Human face-matching accuracy is worse than software's in many scenarios ("is this the man that robbed you?"), but it requires so much effort that the absolute number of false positives is low.
On the other hand, facial recognition hooked up to cctv can passively generate mountains of matches all day long for pennies.
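Back-of-envelope, where every number below is an assumption rather than a measurement:

    # Passive CCTV matching: even a low per-comparison false positive
    # rate produces a flood of bad matches at scale.
    faces_scanned_per_day = 50_000    # one mid-sized camera network (assumed)
    watchlist_size = 1_000            # faces being searched for (assumed)
    false_positive_rate = 1e-4        # 1 in 10,000 comparisons (assumed)

    # Every scanned face is compared against every watchlist entry.
    comparisons = faces_scanned_per_day * watchlist_size
    print(comparisons * false_positive_rate)  # 5000.0 false alerts per day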
This must just be a ban of the technology for government use, right? Can retailers still use facial recognition as part of their in-store analytics? Can Apple still sell FaceID products in Boston?
I think I fall with the HN majority in my privacy views. Yesterday however I talked to someone who said that they prefer someone is always watching, so that they can feel safer.
Interestingly they also said they don't want to know the specifics of anyone watching.
I wonder if laws like this, that in actuality seem fairly toothless, will result in more of that. "Safety, and ignorance of where/who the watchers are."
Despite good intentions and my own discomfort, I can't help but feel like the anti-surveillance movement is mostly an extension of privilege politics and virtue signaling. Police brutality is a real problem, but a wildly common observation of life in the hood/ghetto/LI-housing is the prevalence of crime. You could make neighborhoods a lot safer with a lot fewer police by using modern methods like facial recognition cameras and unmanned aerial surveillance. Break-ins and robberies suddenly become wildly easy to punish after the fact, and a lot of investigative work, like tracking gang members to get a sense of their operations, morphs into a trivial affair. We're rapidly approaching the point where basic physical crime is optional, and while there should obviously be oversight and moral considerations at every step, I can't help but feel it's a bit entitled of me to live in an okay neighborhood (some crime, but it's mostly kids drinking in parks and hobo drama) while telling people that the risks are too great to use this kind of tech in any circumstance.
Places like Baltimore's East/West sides and South Chicago already have aerial surveillance, ShotSpotter, street-corner cameras, nearly limitless police power, and CompStat operations. These things have been in place for at least a decade and don't seem to be moving the needle. Sure, they could go full PRC, but I doubt anyone has the appetite for that level of draconian surveillance.
In my experience living in one of these places, police mostly don't investigate crime even when there's clear video.
My point is that the tech is of questionable quality, the application is horrible, and people hate it all the same. Anything that actually "works" would be so oppressive as to be untenable in the American context. Also, no one who is already policed in the US trusts the institution of policing in America not to abuse their most basic rights.
If the basis of banning facial recognition technology is its poor accuracy, will facial recognition technology be unbanned if it is 99% accurate for everyone?
Even 99.999% is not good enough, because you will have at least one person who is guaranteed a false arrest and prosecution (and 90%+ of those prosecuted take a plea deal). There is inaccuracy with other methods as well, but there you have humans being held accountable. When it comes to justice, mistakes are tolerable so long as adequate compensation exists, but when the mistake is systemic it becomes intolerable, because of the preexisting guarantee of a mistake, as opposed to a specific human making an error as a matter of chance. This is all exacerbated by other systemic cruelties of the US justice system, where even an arrest and release without cause means days if not weeks of imprisonment, and where, if charged, most people accept a plea bargain regardless of actual guilt. It's better to let actually guilty criminals get away than to explicitly and systemically accept even one innocent person being punished incorrectly because, among other reasons, the justice system has legitimacy because its goal is to administer justice; accepting any amount of injustice invalidates that legitimacy and authority.
A good analogy would be a chef tolerating fecal matter in their food. Yes, there is always some small chance of that happening, but no one accepts food from a chef who explicitly serves treated toilet water and claims 99% of the fecal matter is gone and only one in 100 people will get sick from it.
I'm willing to come out against anyone developing a fingerprint gun that allows them to take fingerprints from everyone attending a mass gathering, too.
"There is inaccuracy with other methods as well but you have humans being held accountable."
How would the use of a technology make the person acting on the results of the information provided not accountable? There is no agency in those systems, so accountability can not be deferred to them.
"A good analogy would be a chef tolerating fecal matter in their food. Yes, there is always some small chance of that happening, but no one accepts food from s chef that explicitly treats toilet water and claims 99% of the fecal matter is gone and only one in 100 people will get sick from it. "
Actually that is exactly how the food industry operates.
We're already giving people harsher treatment because The Machine says they deserve it, with no ability for defendants to question The Machine and no one accountable for its recommendations.
"The Machine" excuse has been used for every technology in existence, but that does not mean attributing agency to machines is acceptable. Those deploying and operating the 'machines' are accountable as no other option makes sense.
Note that I am not arguing for nor against the specific use of the facial recognition systems by city officials.
Are you talking about in theory or in reality? Who or what do you think the people deploying these technologies are accountable to, and what prevents the accountability system from accepting "I did it because the algorithm said I should" as a valid justification?
I'm pretty sure most people don't accept food from a chef that explicitly treats toilet water, although your main point about how the food industry operates is correct.
> Also there is the idea that such an infrastructure would scare criminals into not doing incriminating things.
Even if the criminals were afraid of the repercussions of their actions, were aware of the system to catch them, and were convinced it would work and lead to their capture (that's already a LOT of ifs), this would only work on premeditated crimes, which are a minority of the ones committed in the street.
A lot of crimes are not thought through in advance, but done in the moment. People fighting because they are angry. Somebody stealing because they saw an opportunity.
We hear a lot about recurring pickpockets, or bank heists. But those are not the majority.
Most bad things in the street happen without being planned, and without consideration for the consequences.
Planned crimes are white-collar crimes, and they don't get caught by cameras.
I still think it makes a big difference. I spent time in China and honestly I felt safer in areas where terrorists were a concern than I do in downtown SF.
That's what I said: better than humans is not good enough. Humans can make mistakes, but they can be held accountable, and their mistake is not an accepted outcome of the system. To put it differently, facial recognition would be comparable to explicitly accepting only low-IQ people for cop jobs (which some PDs do!), as opposed to hiring the best people and expecting them to have 100% accuracy. Their mistake is a matter of judgement and negligence; facial recognition's mistakes are a matter of the system accepting the mistake. Technical people often focus on the pure metrics, but as in engineering, why faults happen is important. The metric should never be the goal.
To put it differently, explicitly accepting injustice is incompatible with my, and I believe society's, understanding of justice. This is the same reason ML shouldn't be used to replace human judges regardless of improvements in outcome. The authority of judges, cops, and legislators is legitimate because the people have a contract with them: the people's view of justice is implemented as law, and in return the people subject themselves to those laws. Facial recognition and ML judges violate that contract, and as a result the outcome is illegitimate; 100% of convicts become wrongly convicted, because the views on justice of their society were not enforced by a member of their society elected or appointed by the people, but by a black-box logic that only looks at the outcome as a metric. A defense attorney can claim eyewitness testimony is inaccurate, for example, and cross-examine witnesses, but that attorney cannot interrogate an ML system presenting as evidence a blurry picture with a 99% accuracy match.
It seems a lot of arguments like this are based on immediate conviction. I'm totally not for that. I think your identity should be verified using other means before being arrested when possible.
You don't need any conviction. Even setting aside the horrors of the US justice system, this is simply not a sacrifice I and many others are willing to make just to secure more convictions. There are many long-standing issues with the system that must be fixed before something like this is even considered. To start, law enforcement officers acting with malicious and criminal intent repeatedly get away with framing suspects, falsifying evidence, and brutalizing suspects. I don't need them tracking everyone's movements too. I think I'll pass on a genocide this morning (some of them actually say they want a race war). And that's just the very tip of the iceberg. Rule number 1 of getting out of a hole: stop digging.
99.999% accuracy is indeed extremely bad once you multiply it with the expected number of searches.
But what you are describing is really a problem with your "justice" system. A system where most people are coerced not to use their right to trial ("accept the plea deal or your sentence will be much worse", with many innocent people taking the plea deal) is simply incapable of delivering justice.
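A quick sketch of that multiplication, with an assumed query volume:

    # 99.999% per-search accuracy still guarantees errors in bulk.
    searches = 1_000_000                 # assumed yearly query volume
    per_search_error = 1 - 0.99999       # 1e-5

    expected_errors = searches * per_search_error
    p_at_least_one = 1 - (1 - per_search_error) ** searches
    print(expected_errors)               # ~10 expected false results
    print(p_at_least_one)                # ~0.99995: an error is near-certain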
> A good analogy would be a chef tolerating fecal matter in their food. Yes, there is always some small chance of that happening, but no one accepts food from a chef who explicitly serves treated toilet water and claims 99% of the fecal matter is gone and only one in 100 people will get sick from it.
How would this work for the acceptance of self driving cars?
In that analogy, if you ban self-driving cars you also have to ban humans driving cars. After all, we have plenty of proof of the bad reliability of humans driving vehicles at any speed.
No one is saying this system should flag people in cameras and send the kill drones to eliminate targets or send robocops to make arrests.
But if we have a reliable match rate that won't create too many false positives, and it can alert police when a suspect is seen on a camera so that they can start an investigation, what is wrong with that?
In terms of technology, yes. In terms of civil liberties, no.
I predict a lot of cities and countries will follow suit and start banning facial recognition technology altogether. If not then we're screwed. I already avoid traveling to the UK for the insane privacy invasion of their CCTV system. I get that there are benefits for crime investigation but it's far from worth it, the whole concept feels incredibly surreal (and wrong) to me.
Edit: To further your point, even if fingerprints were 100% accurate, there's still the off chance that someone planted your fingerprints. And someone could be wearing a you-mask, so you cannot rely on fingerprints or facial recognition for watertight evidence. Which is why proper trials require multiple sources of evidence (I hope - IANAL).
I think the invasion of privacy is a trade-off. One day we will have the technology and capability to do large-scale, accurate facial recognition, and I would gladly take that if it means greatly reducing crime.
The question is whether we can do this without the people in charge abusing it. If we can guarantee that it will only be used for catching people with arrest warrants, then I would have no problem with increasing the efficiency of cops.
We'll have to agree to disagree about it being an acceptable trade-off.
> The question is whether we can do this without the people in charge abusing it.
I think this is the main question and goes hand in hand with whether governments should be able to decrypt the internet.
Please let me move around freely, meet the people I want to meet, without having me added to some database of people with suspect contacts. I am fine with granting "criminals" the same privileges (as I posted in a comment yesterday[0] I am definitely a criminal, given the definition of the word).
You start with cameras everywhere for facial recognition, then you add microphones... it's hard to encrypt real-life discussion without inventing a new language, which could be easily decrypted anyways.
Systematically exposing the information that you were at place X at time Y is already a huge privacy violation. You can bet it will not only be used for that (immediately dispatching police forces) but in due time it will become the norm for countless other “less harmful” uses in the interest of whoever controls the apparatus (government, lobbies, industry): collection agencies, private investigators, myriad profiling ventures, and yes, advertising. We’ve been warned.
The driving force behind this change and the protests at large is that American police are currently indistinguishable from kill drones if your skin isn’t white.
> ...you will have at least one person that is guaranteed a false arrest and prosecution...
There have already been some other comments pointing out that this is an unreasonable standard, but they aren't really thinking big enough: the court-based system of justice isn't even that accurate. "Beyond a reasonable doubt" isn't a standard of evidence that even reaches 99% certainty. Out of every 100 people in jail for murder, some of them just didn't do it. 100 people is a lot.
"Evem 99.999% is not good enough be enough because you will have at least one person that is guaranteed a false arrest and prosecution "
This isn't the right characterisation.
Facial recognition at 95% can be very useful in helping to find people, initiate investigations. It doesn't have to provide grounds for arrest, let alone evidence for a trial.
Of course, there are problems even there, depending on a variety of issues.
Your license plate is being recorded in a variety of places, and for serious crimes they will definitely use that information to 'look you up'. FYI, face-matching isn't that much different.
As for arrest and prosecution, obviously that's another can of worms, but even then, there is a threshold at which we will effectively call it 'evidence'. 99.999% ID + was in the vicinity, plus no alibi, plus motive and prior record? That's a case.
If you think about it - we consider 'video evidence' to be fairly conclusive... well, I doubt humans are actually 99.999% accurate in their ability to match faces to video! So it adds up.
" When it comes to justice, mistakes are tolerable so long as adequate compensation exists"
There doesn't need to be any compensation if there wasn't any malfeasance or incompetence in the system. To the extent our public servants are all acting above board, using efficient, fair and lawful methods, 'compensation' is not really the issue.
"It's better to let actual guilty criminals get away than explicitly and systemically accept even one innocent person being punished incorrectly,"
"accepting any amount of injustice invalidates that legitimacy and authority."
This is not true - by your very own logic. First - a justice system that 'lets people off who are 99.999% chance of being guilty' - implies quite a lot of injustice. Moreover, that in some instances, innocents will go to jail is tragic but absolutely inevitable. Not in 100 years will we have fixed this problem.
Along with some of the racial injustice you mention, you know what is 'unjust'? The amount of crime, particularly murder, that goes unpunished in the US due to low case clearance rates, particularly in violent communities. That means aggressive killers on the block go free, gloat over their crimes, and do it again. Can you imagine having your sibling or child killed, essentially knowing 'who did it', but having them go free? This is common.
Facial recognition is not quite ready for prime-time, and it's invasive in ways that license plates are not, so it makes sense to ban it for the time being - however, there are thresholds at which it would make sense. Facial recognition in airports, near high value targets and such systems for alerting police to 'high value, dangerous targets' might make sense. But for everything else it's just not needed.
I will try to read and reply to your whole comment later, but keep in mind that we live in a democracy, not a technocracy. As a people, we did not give consent to this. In my city, for example, we got rid of all traffic cams via referendum. No one cared whether they were more accurate than actual cops. We don't want it; we don't want to be recorded and tracked. We are not willing to give up liberties regardless of the benefits of the technology. It can find a missing person, but a person hiding from an abusive spouse, or people trying to coordinate a movement against the government (see Hong Kong and China's abuse of facial recognition), can also be tracked and subverted. This is a weapon, pure and simple, and it is up to the wielder to use it for good or evil. As a people, we do not trust the government or the ruling class to stop at the good uses of this technology. This is why technocracy is evil: it only looks at outcomes, disregarding the will and consent of the governed.
>We are not willing to give up liberties regardless of the benefits of the technology.
Finally, a town with smart people. I wish I could live there.
> As a people, we do not trust the government or the ruling class to stop at the good uses of this technology. This is why technocracy is evil, it only looks at outcomes, disregarding the will and consent of the governed.
That's the way to go.
Face recognition is even worse than traffic cams. I don't even think it's a matter for referendum; it should be considered a human right. I do not wish to see some majority strip anyone of basic human rights.
First - you've misused the term 'technocracy'. A 'technocracy' is not government based on technology - it's government run by 'elite experts'.
Second - "We are not willing to give up liberties regardless of the benefits of the technology."
This makes no sense. It's like a line from a Rambo film. Communities trade liberties for benefits all the time.
Suppose police could clear 100% of murders if we all consented to having our fingerprints taken. Would we then do that? Probably, so long as there were no other ill effects.
This arbitrary notion of 'freedom' completely ignores the fact that people live in communities with other people. Everything is a tradeoff. Absolutely everything.
Go ahead and use your chainsaw at 1 am in the city and see how 'free' you are to do such a simple thing as trim a hedge when you please. Or do it naked in the front yard.
Third - both these comments have shades of fear of the unknown.
This is the same kind of uproar people had about 'finger prints', 'DNA' and possibly 'seat belts'.
Both DNA and fingerprinting have been used expansively and widely for law enforcement; we generally accept their use, and we have come to understand their parameters better.
Using AI on a case-by-case basis, in fact, is considerably less invasive.
Your fingerprint and DNA are semi-private bits of information.
Your face is de facto public.
Using 'AI' is no more 'technological' than anything else; frankly, the term 'AI' is misleading because there is nothing special about it.
We could very well frame it as 'database search'.
---> The police took video footage from the crime scene, ran it against the database to identify three possible suspects, and one of the suspects was found to have the gun used to commit the crime.
It's ridiculous to assert there's something wrong here, let alone some kind of fundamentally new way of policing.
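To make the 'database search' framing concrete, here's a minimal sketch; the embeddings, names, and threshold are all illustrative stand-ins, not any real system's values:

    import numpy as np

    # Face search as a database query: compare an embedding extracted
    # from footage against enrolled embeddings, keep the close matches.
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    database = {                    # enrolled embeddings (random stand-ins)
        "person_a": np.random.rand(128),
        "person_b": np.random.rand(128),
        "person_c": np.random.rand(128),
    }
    probe = np.random.rand(128)     # embedding from crime-scene footage

    THRESHOLD = 0.8                 # match cutoff, an assumption
    candidates = [name for name, emb in database.items()
                  if cosine(probe, emb) >= THRESHOLD]
    print(candidates)               # leads for human review, not proof

In practice the database would be indexed for fast approximate nearest-neighbor lookup, but the shape of the operation is the same.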
As for 'arbitrary surveillance' - that's another thing entirely. We don't have camera speed traps in a lot of places, that's fine, that's one application of technology.
I suggest we don't need broad surveillance either, but that has little to do with AI.
Crime is a real thing, cases unsolved and criminals unpunished definitely represent 'injustice' and so we need to use the tools we have to make the world a better place.
Let's take this simple one as an example of the stupid laws that are supposed "to make the world a better place."
Can you justify rationally and logically why a naked person in the front yard is a problem?
I mean, we were born this way, weren't we?
And yet, having such biases, you're advocating for a technology amplifier to enforce laws??
And you think you are making the world a better place ?
If you're ready to prosecute someone for being naked, which actually means simply being himself and not sharing your views about nakedness, I would prefer that you be completely prohibited from having access to any surveillance.
>We could very well frame it as 'database search'.
And in that case I do not wish you to have any access to any database whatsoever, and the 'database' itself should be destroyed. 'AI' is a different order of magnitude in surveillance: it's one thing when you go out and some people notice you while some don't, and another when something is watching your every step. This by itself is intimidating. People would not feel free and relaxed; they would fake it, or smash the cameras at some point.
We already have too much surveillance, to the level that it's annoying to go outside. I do not feel comfortable any more. I do not like somebody watching me with AI precision. I was questioned by police just for taking a walk at the 'wrong' hours, when all I wanted was to take the air and think about deep topics without interruption.
The world is already going crazy with too much control, and some people seem not to realise that at all.
>Facial recognition is not quite ready for prime-time
Will it ever be? Should it ever be? You have no perfect laws or rules and never will to begin with, so the idea of following rules with ideal tech-supported precision is wrong at its core. Pursuing laws perfectly would kill any freedom, and it's a perfect path to a totalitarian state. Searches were limited for a reason, and tracking should thus be prohibited too, even if you have the technical ability.
Mr. Turing himself was a victim of idiotic laws, I just wish he could get away with his "crime" of making love. If you think today it's different, it's not.
Too many laws and you have zero progress or evolution, because progress comes from reviewing known and accepted concepts. You can look at history to see how many idiotic laws were in place; add to this the tech precision of your dreams and the free world is finished. Would you like that? I don't, for sure.
"You have no perfect laws or rules and never will to begin with, so the idea of following rules with ideal tech-supported precision is wrong in it's core. "
This makes absolutely no sense.
We already use fingerprinting, DNA, photographic evidence - and in some cases worst of all '1st hand witnesses' as evidence.
Pursuing techniques, and possibly technologies, that more effectively help us create just outcomes is exactly what we want to do.
Your example of 'Turing' is another thing entirely: what is deemed legal and not-legal is a completely separate thing from what we construe as evidence and not evidence.
"Too much laws and you have zero progress or evolution because progress comes from reviewing known and accepted concepts"
What does 'too many laws' mean, and what does it have to do with anything?
>What does 'too many laws' mean, and what does it have to do with anything?
It means that you have laws that you should not have at all.
It means that you never can make absolutely wise, perfect laws.
And it has EVERYTHING to do with this topic.
If you use perfect means to enforce imperfect laws, you end up with stupidity amplified.
If you can't grasp this concept, look at the example of Turing: it is exactly about this pure, dumb stupidity amplified.
We now have all kinds of idiotic laws, and if we insist on them with too much precision, using a technical amplifier, their stupidity will be revealed too vividly. This would lead to strong opposition; let us hope it would not be armed opposition, but I would not bet on it.
> Facial recognition at 95% can be very useful in helping to find people, initiate investigations. It doesn't have to provide grounds for arrest, let alone evidence for a trial.
Even getting asked to come down to the station to give a statement based on an incorrect facial recognition is not something I would want to deal with for the sake of "the greater good".
95% seems awfully low accuracy, and even without an arrest this will cause a lot of headaches for many honest citizens.
> No - there doesn't need to be any compensation if there wasn't any malfeasance or incompetence in the system. To the extent our public servants are all acting above board, using efficient, fair and lawful methods, 'compensation' is not really the issue.
Sounds like a utopia, definitely not the world we live in.
I hope not. I am ok with it as long as we don't plaster the whole public space with video cameras. That would be more than unfortunate. I doubt most places would gain anything from it.
I am not keen on my state knowing my whereabouts either, and I think camera deployment would create more problems than it solves. Countries where it has been deployed don't have impressive advantages to show for it, and they have all the privacy disadvantages. So why should we even consider it? What problem would it solve?
In my opinion, if you are still scared of terrorists, a psychiatrist might be more useful than a camera.
Facial recognition isn't that different from fingerprints or DNA traces. It's a comparison against a database of biometric data.
It used to be that law enforcement relied on fingerprints, eye-witnesses or a video tape backed by a judicial and legal system that - more or less - severely limited how and when that information could be collected and used e.g. within the context of discrete on-going criminal investigations.
This balance has been shifting entirely over the past few short years.
Now, video cameras, fingerprinting and so on are collected pre-emptively, on literally everyone. Undeniably, it's easier to move forward on an investigation if you already have a lot of the data at hand. The argument put forward by proponents is that speeding up investigations saves lives. But that comes with a ton of hard problems.
First, searching accurately through a vast mountain of diffuse information is a hard problem. Law enforcement has outsourced that part to private companies, which opens up a can of worms in terms of confidentiality and privacy.
Second, searching through such a database becomes a black box as far as law enforcement is concerned: just upload a picture or a video fragment and it will readily yield a result. The fallacy is that convenience lulls LE into accepting the results at face value, with no need for verification or critical thinking (much like you'd accept the results yielded by Google Search at face value).
Third, it's a convenient way to offload responsibility owed to the public. Law enforcement didn't make the wrong assertion if the wrong person ends up getting prosecuted and convicted: the data was just "not good enough". The fallacy here is that data gets treated as a commodity, which it is anything but.
Fourth, there's a difference between what's morally right and what's legal. The latter changes depending on who defines public governance. A database may be a great idea, until control is ceded to someone who uses that data against the public.
There's actually a historic precedent here: the 1943 bombing of the Amsterdam Civil Registry office. Following the 1940 German invasion of the Netherlands, all Dutch Jews had to carry a mandatory identity card, and their whereabouts were recorded centrally at the Civil Registry. This was easy because, prior to the war, people's religious denomination had been recorded. The Dutch Resistance understood the importance and the downsides of a centralised record containing private information falling into the wrong hands, and so they ended up attacking and demolishing the Civil Registry office containing that record, albeit only half-successfully.
Obviously, I don't advocate attacking datacenters - that would be an effort in futility anyway - but your assertion that investing in mental health support would be more helpful rings very much true.
Fingerprints are probably a good comparison; the factors used, AIUI, mean about a 1:1M chance of a random match (that's from a few years back; maybe they've changed how they gather/process/match fingerprints).
That's not great, but people perceive it as "fingerprints are unique".
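Scaled against a big database, that rate stops looking unique; a quick sketch with assumed numbers:

    # A 1-in-a-million random match rate, run against a large print
    # database, is still expected to implicate innocent people.
    random_match_rate = 1 / 1_000_000
    database_size = 10_000_000        # assumed national database
    print(random_match_rate * database_size)  # 10.0 coincidental matches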
If you assume the best, then poor accuracy is just the easiest justification, not necessarily the only one, the best one, or the most important.
I’ll take it.
There are narrow use cases where I think facial recognition in law enforcement would be a good thing, but it is ultimately too much power, and too easily abused, to trust legal systems to adhere to right usage. Banning is the correct action when misuse is as bad, and correct use as complex, as they are here.
99% accurate would still lead to a huge number of false positives because most people aren't criminals. The base rate of non-criminals is so much larger than the number of criminals that there would be hundreds of thousands or millions of false positives, depending on how widely facial recognition was deployed. There would be many times more false positives than accurate identifications.
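A base-rate sketch of that, where every input is an illustrative assumption:

    # 99% accuracy applied to a mostly-innocent population.
    scanned = 10_000_000          # people passing cameras (assumed)
    actual_targets = 1_000        # genuinely wanted people among them (assumed)
    accuracy = 0.99               # assumed hit rate and specificity

    true_hits = actual_targets * accuracy                   # 990.0
    false_hits = (scanned - actual_targets) * (1 - accuracy)
    print(true_hits, false_hits)  # ~100 false matches per true match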
It's a tool, not the only tool. If you have a population of 100M, even if 1M match, you then add geographic area, skin colour, hair colour/style, height, known criminality... and suddenly you're down to few enough people to investigate.
Sure, if your police/justice system is not competent you have a problem, but you had that in the first place.
> The city council unanimously voted on Wednesday to ban the use of the technology and prohibit any city official from obtaining facial surveillance by asking for it through third parties.
My my, the city council of Boston has a brain. When San Francisco banned FR they left the door open for private contractors, which the city just hired straight away.
Let me start off by saying that I think we need to be careful with this type of tech. Assuming it never makes mistakes can be deadly.
As someone who has done some work with building deep learning models what is it that makes this unfairly target minorities?
Is it that the people who trained the model did not present enough example images of minorities during training? Is it because darker (presumably black) skin does not show up as well on poor quality videos (presumably because the metering of the camera exposed for the surrounding background which was bright)? Or is it the law enforcement using it was poorly trained and assumed the computer was infallible combined with possible prejudice they already had against minorities?
The first problem I would think could be easily solved. The second problem I would think would be rather difficult. The last would require extensive training but I am sure we humans would screw that up also.
I think you’re starting from the wrong place in your analysis. The first question we should be asking is why we would want this technology at all. The potential for bias is a moot point if we as a society decide that we don’t want this kind of government surveillance.
Even if the systems were perfectly fair and not the least bit biased and were operated by a perfect utopian police force I still wouldn’t want facial recognition. I’ve yet to hear a potential benefit of this sort of software that would justify the huge cost to citizen privacy.
Just because we can train computers to recognize faces doesn’t mean that we should.
>I’ve yet to hear a potential benefit of this sort of software that would justify the huge cost to citizen privacy. //
It makes it easy to find suspects and narrow down suspect lists. Meaning far fewer police are needed to catch a greater proportion of known criminals.
Most people consider that a huge benefit.
Let's say you have a DB of all faces in a country of 60M people. You have a photo/video of a person committing a crime, a robbery. The false positive rate is 1:100,000. Your search returns 600 people; an address match finds 60 with connections to the locality; 5 of those have records, one for robbery. You'd at least sit a person down for an hour to review the matches, consider the records, and list people for interview.
According to UK ONS stats, those adults released from prison, in Jan-Mar 2018, had a reoffending rate of 65%.
It seems just tracking known offenders would find the perpetrator in many cases if visual recognition is possible.
Yeah, I fully agree. While I do not want the government to have this surveillance, I was curious about the problem with the tech itself, as in, why was it biased?
For those unfamiliar with Boston governance, "Boston" here means the "City of Boston", population 0.7 Mppl. A Seattle or El Paso. Rather than a "Greater Boston" aggregation of municipalities of 2 to 8 Mppl.
For a NYC analogy, imagine its historical consolidation was more limited, and many of its towns and cities remain independent, never having consolidated into boroughs, and the boroughs into one big city. Flushing, Brooklyn Heights, Kingsbridge, are still independent towns. Here, the city council of a city occupying only lower Manhattan, but confusingly named "City of New York", just voted on face recognition.
Interesting. So the city's tying its own hands. I assume private companies can still use their own resources to do their own individual identification, though.
The stated complaint in the article is accuracy. It also says Boston PD doesn't use the technology yet; this is preemptive. I wonder if they are aware that tests conducted by the ACLU and the like didn't use the recommended configurations for precision. Not to mention that false positives don't matter as long as there's a human in the loop to validate the match, because then it is no worse than the minor risk of a false match we accept even without facial recognition.
False positives matter. Being arrested for something you didn't do is horrific and life-changing. The human in the loop is human. We unconsciously tend to trust machines as objective and accurate. This is not as simple as you make it out to be.
We already trust human police to make matches using their eyes to apprehend suspects. This is fundamentally necessary to enforce laws and ensure a safe society. Since a human match is required with or without facial recognition, a false positive from an algorithm doesn’t make the problem any worse.
Yeah, but here's the problem: you massively increased one of the human's capabilities but none of the others, like judgement or even perception.
With their superhuman search capability one person with a bias can discriminate against everyone in the database instead of just whoever is standing in front of them.
>I wonder if they are aware that tests conducted by the ACLU and the like didn’t use the recommended configurations for precision
As I mentioned in other similar threads, I also wonder if police are using the recommended configs and not the default ones, because it doesn't help if somewhere in the docs it says in small letters "we recommend that police use X, but everyone else can use Y, so we default to Y; if you are the police we recommend you change Y to X".
I would be pretty angry about a false suspicion. It is a constant additional life risk without any practical advantages. Sure, maybe I get to be the victim of some crime and am then glad a video exists. Be that as it may, it isn't worth it to me.
Is the American justice system so broken and law enforcement so incompetent that a computer flagging someone is basically an automatic conviction and the technology has to be banned?
No evidence, by itself, should be enough to get a conviction. Not your fingerprints all over the scene of a crime, not a video of someone who looks like you, not even your DNA matching a rape kit.
The fact that someone was arrested because of a match shows a failure in basic criminal investigation more than anything else.
It's because it can track the whereabouts of everyone. It's a massive privacy issue. Do you want the gov't to know where you go, what you spend your money on, and who you associate with? I sure as hell don't, even if they wouldn't find a problem with it.
It really doesn't matter what the legislation says; no doubt they have already been doing it and calling it something else. It's going to be one of the worst things to come out of the 21st century.
>Do you want the gov't to know where you go, what you spend your money on, and who you associate with?
For better or worse (and better only in the sense that things need to get worse before they get better), there are a great many people in Massachusetts who believe all-seeing government surveillance (though if you phrased it like that they'd probably take issue) is a net plus to society.
Yes, draconian surveillance to keep the status quo. Your choices scrutinized by some faceless figure to keep things as they're 'supposed' to be (or worse yet, to carry out an agenda).
> It's because it can track the whereabouts of everyone. It's a massive privacy issue. Do you want the gov't to know where you go, what you spend your money on, and who you associate with?
The government already knows that, via Google, Facebook, and the telcos.
Yes, it is that broken. First, simply being charged means you're automatically out $5k for a criminal defense attorney, plus missed employment. Second, the criminal justice system appears to have little respect for the fallibility of technology, especially when its results are highly correlated with other evidence (e.g., a police lineup).
OK, learned something. Well, unless there is financial redress for wrongful arrest, that's a serious general problem. It's not the problem suggested by the OP, but a serious one.
>During Wednesday's meeting and before the vote, Wu said that Boston shouldn't be using racially discriminatory technology. She noted the reports of the first known case of a man arrested after being misidentified by facial recognition technology in Michigan.
Are you presenting that quote in support of some particular point I made or are you just being generally helpful?
Yes, so a wrongful arrest was made, and thus the technology isn't perfect, so we shouldn't use it, is that the thinking?
Well actually no; apparently wrongful arrests aren't sufficient cause to ban a technology, only ones believed to racially discriminate, lol.
Or maybe the real reason is that it's become fashionable of late to fuck the police, and the whole 'because reasons' is an afterthought.
In any case, the larger point, the one suggested by the OP and met with foolish acquiescence in response, the one suggesting that our justice system convicts people on the basis of facial recognition technology, is wrong.
Your tone is completely inappropriate. You're making a lot of assumptions about what people are thinking and what position they are taking in the debate. I can discuss the topic with you when you are feeling better.
There's at least one prominent case of someone being arrested solely on a face-recognition match https://news.ycombinator.com/item?id=23628394 and there are at least a few cases where someone was convicted due to mistaken identity, let me just pick one https://www.nbcnews.com/news/us-news/kansas-man-who-blamed-w... So I wouldn't call it "automatic" but since there is a false-positive rate in convictions, it's important to try to reduce false positives at earlier steps as well.
Do you think the news is designed to provide an accurate representation of reality? I don't necessarily disagree with you, but pointing to the news isn't exactly convincing.
Maybe that's his point? That he assumes this is unusable unless he has extreme indication otherwise.
Also, he said this technology, not any technology.
Seems dumb; facial recognition has a lot of potential. I love entering the UK with facial recognition and a passport, going straight through the border: no lines, no talking.