Facial recognition CEO: software is not ready for use by law enforcement (techcrunch.com)
126 points by cpeterso on June 26, 2018 | 96 comments



As in many things, a blanket statement of 'not ready for use' isn't really the most helpful way to look at the situation. There's a range of ways such technology could be usefully deployed, even with current technological problems, that might aid law enforcement officers in the field. It's a very wide continuum from zero to "automatically locking people away for life based on a facial match".

As the article says, there are huge concerns about how such tools might be misused, whether legally or not. So the title might be more accurately stated not as 'facial recognition software is not ready for law enforcement', but as 'law enforcement is not ready for facial recognition software'. I'm fairly sure there are countries out there where citizens do not lie awake at night in fear of their government, and where such technologies might be responsibly deployed as a tool in law enforcement.


>> "I'm fairly sure there are countries out there where citizens do not lie awake at night in fear of their government, and where such technologies might be responsibly deployed as a tool in law enforcement."

This was true of many countries where it's no longer the case. Once a technology is available to a benevolent government, it remains available once that benevolence ends. It's wise to do what can be done to prevent and hinder misuse before it's too late.


Regardless of how you try to rephrase it, the fact remains the same: the technology doesn't work well enough.


This is going to sound horrible, but I suspect at least part of the reason people are unwilling to state that "facial recognition tech currently doesn't work well enough" is that there is a lot of money riding on it. Selling to law enforcement was probably a very large part of the plan for that sector. So whether it works or not, they kind of have to get law enforcement to use it.


It doesn’t sound horrible, but it is horrible.


Why does that sound horrible? That's how the world works.

I suppose it is horrible, but it's a very common situation.


...and law enforcement has shown it can't act responsibly with technology, especially when it comes to privacy.


You're right, but if "acting responsibly" was at all a prerequisite, they wouldn't even have their precious guns.


Or their cars.


Well enough for what?

You don't think there's _any_ conceivable way that law enforcement could utilize facial recognition tech without negative consequences? Not even, say, aiding manual review of security camera footage as part of an investigation into a preexisting suspect?


>You don't think there's _any_ conceivable way that law enforcement could utilize facial recognition tech without negative consequences?

Absolutely. Deploy the cameras around town and train it using pictures of their officers to catch misbehavior.


What do we do with all the data that includes everyone else’s faces?


If there is no potential officer misbehavior in the data, delete it after 60 minutes of recording.


"Regardless of how you try to rephrase it, the fact remains the same: the technology doesn't work well enough."

I don't suggest that anyone is looking to simply lock people up because of a 'face match' - rather, to flag an individual as 'possibly being someone', in which case this can be very useful information.

If, for example, they use it at border checks to 'flag' individuals, then a border agent can manually intervene, check the person's info, etc.

So maybe this is not even possible to any degree of validity today, in which case 'the tech is not ready'... but if the tech works to any measurable degree of reliability, then it can be put to use to reasonable effect - like anything (guns, tasers, DNA matching, fingerprints), it has to be used properly.

Fingerprints seem to be a bona fide way to identify someone and can be used as evidence; it'd seem that facial recognition can't be used as evidence, but likely for other things.


If we could force the cops to always use the tool in a way that is consistent with its limitations, then you might be right.

Unfortunately, cops are not technologists. They're not mathematicians. They don't understand the dangers of applying a solution that is right 99% of the time to a population of 300 million people. If each person appeared on just one camera per day on average, that would mean three million false identifications per day.
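
To make that back-of-the-envelope arithmetic concrete, here is a minimal sketch in Python, using only the illustrative numbers from this comment (the 99% figure is hypothetical, not a measured accuracy):

    # Base-rate arithmetic with the illustrative numbers above.
    population = 300_000_000          # people, each seen by one camera per day
    accuracy = 0.99                   # hypothetical "right 99% of the time"
    false_positive_rate = 1 - accuracy

    false_ids_per_day = population * false_positive_rate
    print(f"{false_ids_per_day:,.0f} false identifications per day")  # 3,000,000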

How many real criminals are they really looking for on a daily basis, and which would warrant such a dragnet measure?

They can't handle a false positive rate like that. They can't be running around like chickens with their heads cut off, trying to verify the millions of bogus hits against the tiny fraction of potential valid ones.

It's the same problem that the TSA has, only these are real police with real guns. And real people are going to get killed because they get mis-identified, as opposed to just inconvenienced and pulled aside for extra scanning.

Do you really want to make the cops as bad as the TSA?


You don't have to 'force' cops if the proper parameters and processes are laid out.

Moreover, though cops can bend and break rules, they're not idiots.

For example, at a border crossing, a 'face match' with a name could simply flag someone for an interview and a background check to see if it's a certain person.

An arrest has to be made on some kind of reasonable grounds. If a 'face match' is not considered reasonable grounds, border guards (and cops for that matter) are quite aware of that and know they can't make an arrest on that basis alone.


Except ICE doesn't abide by your rules. You have no rights when being questioned or investigated by them.

At least, that's the way they operate.

Now, if you take them to court, maybe you can get a reversal of their action against you. But you have to be a citizen or legal resident to do that, and you still have to suffer the consequences in the meanwhile.


Yes, of course I agree and we should be wary; it's bonkers that the US does not have clear and definitive rules for how the rules/constitution apply to non-residents, etc.


Doesn’t some kind of facial recognition work well enough for Vegas casinos?

Instead of police being able to scan one face at a time as they walk their beats looking for suspects, an automated system could scan everyone in view and then alert that a suspect has been identified with xx% accuracy.

Every week on Nextdoor someone posts a video still of a vagrant or thief - from a broken car window, a hit and run, a lurker, etc. Automation would help identify the perps. We, as a society, might want to adjust the dials on punishment, given the efficiency, but we shouldn't give up the chance to minimize these crimes when reasonable.


It is a bit more complex. There are few scientific studies on this topic. See this one: http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a...

"We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset which is balanced by gender and skin type. We evaluate 3 commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%). The maximum error rate for lighter-skinned males is 0.8%.


That's pretty interesting. Given those stats, why can't it be selectively deployed on light-skinned perps till the system is trained better on other skin-color classifications, in order to achieve similar accuracies? And in the meantime catch some baddies.


Interesting - so given that your implementation only targets light-skinned "suspects", would using this be racist?


I don’t think so. It would target more skin tones as it became better at identifying suspects accurately. The aim is to reduce crime regardless who commits it. We have a system which can help on a subset but not another. Why should we ignore the potential only because we cannot deploy it against everyone?

It's like saying, well, since we can't successfully prosecute big bankers, we'll also ignore smaller-fry bankers who are sloppier, cuz it's not fair to them.

In the end everyone but the criminals benefits. Eventually one hopes the system is well enough trained that it can be deployed for all skin tones.


You're incredibly optimistic about how this will be used.

I have zero doubt that the second such a thing is deployed, they are going to be cataloging everyone's faces permanently then moving to track people next.


Given the lack of restraint common in policing (at least in the US), I don't think we want to start giving police probabilistic estimates that will direct them to categorize people as criminals. Even to look at them more closely, given that fatal SWAT raids are caused by prank phone calls. Imagine the impact of "well, the system told me he was 94% likely to be a criminal" turning into another excuse for these awful raids.

The false positive rate on fingerprints and the lack of understanding in DNA statistics further indicate that the enforcement/legal system has a very poor understanding of even basic statistics.


We also rely on witnesses with dubious memories and motives but people will never be held to the same standards as machines.


I dunno. We already had the Orwellian scare when security cameras started getting ubiquitous. We keep kicking that fear can down the road, but we don't have to. We've already seen what security cameras are really good for, watching the watchers.

It doesn't make sense to surveil regular people, but it makes a heck of a lot of sense to surveil public officials while they're doing their jobs. We went from "the police are going to sit back and watch cameras of everybody and show up at our doors to arrest us!" to "Damn cop department won't release his body cam footage because they claim it was out of battery power!"

I think facial recognition is going to follow that same sort of acceptance arc. Nobody except China is going to have the stones to build a bureaucracy around deploying it and we're going to want it to solve a way more pernicious problem once we find a killer app.

If you're really worried about government malfeasance, all you have to do is to actually read some of the Wikileaks dumps and go looking for smoking guns. Everybody else already did that and other than the NSA programs, nobody's found anything really interesting.


Unless you were specifically talking about facial recognition-related items, in which case, Wikileaks not needed, they'll just tell you:

Homeland Security’s New Database to Include Faces, DNA, and Relationships - https://www.eff.org/deeplinks/2018/06/hart-homeland-security...

http://www.theverge.com/2017/4/18/15332742/us-border-biometr...

https://www.theguardian.com/technology/2017/mar/27/us-facial...

http://jacksonville.com/public-safety/2016-11-11/how-accused...

Bonus, from the fascism side (the real definition, government and corporate cooperation): http://massprivatei.blogspot.com/2016/07/10k-google-wi-fi-ki...

And from the speculative side: https://www.bloomberg.com/features/2016-baltimore-secret-sur...

Just saw an article I can't find any longer about China using 10-sensor cameras to do facial recognition at a city-level. Built by a university in the US....things do not look as bright as you're painting them, I'm afraid.


America has a long history of facilitating other nations' experiments in tyranny. Like, a really really long one. I learned the other day that many of the atrocities committed in the Congo Free State were actually committed by American and British companies, and the details were covered up after it became public.

In order to get me interested in a new Threat To American Democracy and Freedom, you need to show that it's actually novel and unprecedented. And then it's only as interesting as its potential to actually scale. Remember red-light cameras? In Georgia they had to take them out because after the public bitched, they had to increase the length of the yellow light and the company running them couldn't make ends meet.

Without economic incentives, the political will to keep it going, and a novel approach, it's really hard to get interested in outrage or fear over technological dystopia.


Technology like storage and cameras is cheap and becoming cheaper. What's lacking is legislation which always lags behind technology, typically decades.

>America has a long history of facilitating other nations' experiments in tyranny. Like, a really really long one.

Considering how long America has been around, say compared to Europe, that's quite a claim.


I saw a deck of cards featuring 'friendly' dictators, and I have a feeling it is incomplete.

I think this might have been it:

https://www.amazon.com/Friendly-Dictators-Americas-Embarrass...


I agree with you for now. However, these things are going to advance and improve. I think the biggest Orwellian issue is the targeting of dissidents. The government can't watch everyone and process the data, but they can target those who disagree with policy X or politician Y or company Z.


Algorithmic processing of keywords, sentiment, and then, metadata, to suss out those who speak truth (treason) in an empire of lies.


There's nothing new about government targeting of dissidents. The worst case scenario there played out in the fifties with McCarthy. Society survived.


"Society survived" is a pretty low bar to set. Why not aim for "innocent people don't face negative consequences for harmless acts"?


The bar is getting ordinary people to be concerned about the possibility for America to turn into dystopia. While yours is a laudable goal, it's well within the ability of our existing political-economic system to ratchet closer to over time.


> nobody's found anything really interesting.

You're kidding, right?

This is biased in selection, but I wouldn't call the Wikileaks releases uninteresting. http://www.mostdamagingwikileaks.com


If you have to use selection bias just to conjure up interest, I'd say there's no fire amongst all that smoke. I read through 7 of those and while some of them were mildly interesting when they hit the news cycle, they're certainly not now. Grist for America's identity politics rumor mill, but certainly not remarkable in any other sense.


A financial institution hand-choosing Presidential administration employees is underwhelming to you.

Okay.


I skimmed through the list to find it but I couldn't find anything like you were describing. What I did find, was "Spirit Cooking". That's not selection bias, that's out and out making shit up.


It's not. Keep reading.


We all know the systems are biased, because we feed them biased information - Cathy O'Neil did a great job on this with "Weapons of Math Destruction". What puzzles me is why he doesn't just provide the counterexample to scare the hell out of everyone: if he inverted the racial distributions in the training datasets, you'd have a system where the shoe was on the other foot, and both an argument and fuel for removing bias from the training sets. The recognition systems only know what they know; they don't really understand that they might not know, they only know the confidence they have in knowing. Hence this is my favorite happy-place example of the problem: https://youtu.be/UFVB5rnqjyY


I agree in general with the spirit of the article, at least as I understood it: facial recognition can empower oppression and lead to even more doubtful convictions, for example it could erroneously supply additional rationale to suspect or convict a person, reinforce biases, etc.

However, I do not get the "companies unite against selling X to government" approach. It is, to me, both misguided (restricting gov't from buying something private companies can buy will simply lead to relabeling or minor redesigns of the same technology) and naive (to work, it needs a very broad agreement which IMO will not get enough traction).

An approach that could work better is to inform, not restrict: make sure that all imagery and videos used or considered for use by police are public unless there is a short-term, limited exception. And make police wear cameras whenever they wear uniforms and make those video streams public too (maybe with a few hours' lag in case there is a tactical need). My 2c.


We can't restrict what they do with something once we sell it to them. Only Congress can. So, we can refuse to sell it to them or make it for them.


Won't they just buy it from someone else?

For whatever values of they, it, and someone else suit the situation.

There is no coherent we to do the refusing.


The rest of "we" can choose not to do business with the part of "we" that helps government subvert society.


>We need movement from the top of every single company in this space to put a stop to these kinds of sales.

What are some historical examples in which industry coalitions of unprovoked top private executives refused government contracts in the name of the civil rights of their fellow citizens?


Enhanced radiation weapons, aka neutron bombs - there was initial excitement followed by a mass exodus - the govt held on for a while longer, but they move slowly - the labs and contractors visibly got cold feet over this


None. They'll happily go along as long as there is money to be made.


Would IR LEDs built into sunglasses be effective in combating this trend? There was an article on Hackaday about a person who did that to his license plate holder to deal with red light cameras.


Nah, you just need some cool makeup: https://cvdazzle.com/


This is the best explanation I've ever seen for why hairstyles in sci-fi movies tend to be original to say the least.


Hey, I see they updated the styles at some point. This is awesome!


It seems obvious that, regardless of the legality, it will be used in parallel construction. Once these tools are made, they will be used.

https://en.wikipedia.org/wiki/Parallel_construction


Yet Axon (formerly Taser International) is building systems to recognize faces for law enforcement body cams. [1]

Especially disturbing is that these systems seem to have a much higher rate of misidentifying minorities.

[1] https://www.npr.org/2018/05/12/610632088/what-artificial-int...


>Facial recognition technologies, used in the identification of suspects, negatively affects people of color. To deny this fact would be a lie.

Since, instead of support for this statement, the CEO of this 'company' decided that shaming his reader would be more effective, I'll give you his support. It's an article that he wrote based on studies that show there are significantly higher error rates in gender and race classification in some algorithms, without ever showing why this even matters within the context of how facial recognition algorithms are or would be used by law enforcement. Nor did he show that the specific algorithms being proposed have these biases, and not just some other ones. Nor did he show that race classification is even a result that law enforcement runs.

[1] https://www.kairos.com/blog/face-off-confronting-bias-in-fac...


On a side note - I'm unable to square this fear of watchers with parents happily singing to their kids - "he knows when you've been sleeping .. he knows when you're awake .. he knows when you've been bad or good, so be good for goodness sake"


Their FR is not ready for law enforcement because it is weak, poorly designed, and overly expensive. They are schmucks trying to capitalize on Amazon's name with generic, low-quality FR. Anyone who understands FR knows to go to the NIST web site and use the companies competing to be the best for government contracts. Their algorithm stats are all tested and the results published at NIST.


Does Kairos actually produce any useful software or does it just exist so Brackeen gets publicity?


decidethefuture.org


Why are all of these articles written under the premise that this software automatically sends you to jail? It's just a first step in the vetting process.


The Netherlands has had various high-profile cases where the investigators focused on one or more individuals, despite it making no sense at all. At least two people ended up in jail for 10+ years after multiple years of being investigated. Several of these cases were covered on a television program, see https://nl.wikipedia.org/wiki/Peter_R._de_Vries#Bekende_zake... (Dutch). It often took enormous effort to get these people out of jail, see e.g. https://nl.wikipedia.org/wiki/Puttense_moordzaak (Dutch).

I notice that I've forgotten the details a bit. It's worse if you read the above links. Anyway, you really do not want to become a suspect!


“People have been mistakenly incarcerated in the past so fuck having laws and cops and shit”


That's clearly not what they are saying.


He's pretty fairly mocking this answer, though. Anecdotes are not an answer when it comes to something this prevalent. I can find an anecdote for almost anything to make it look bad.


An anecdote is a logically valid response to "This isn't happening". "Yes it is, because here's a case where it did" is a valid counter. It doesn't prove that it's happening systematically, but it does prove it has happened, and from there it is valid to discuss whether it might happen again.


> An anecdote is a logically valid response to "This isn't happening".

Well, choosing to respond to a claim that wasn't asserted isn't a valid argument against the original assertion.

> and from there it is valid to discuss whether it might happen again.

This again is working at 'anecdote scale'. Unless you can show that the costs of these anecdotes are anywhere near the collective benefits, it's worthless. Keep in mind, it's equally worthless to assert there actually are benefits in this system without support, but that's why I'm not asserting that. I'm just saying an isolated argument on one side or the other, especially when it's just a fucking anecdote, is functionally meaningless.


Even being stopped for 20 minutes while the cops check your papers is an inconvenience and often an indignity.

And if it happens regularly - a man who looks like Bin Laden today will look just as much like him tomorrow - that would be a major inconvenience.

If you think government departments and the tech companies they outsource to are too competent to repeatedly hassle innocent people due to mistaken identities, just look at the no fly list. [1]

[1] https://en.wikipedia.org/wiki/No_Fly_List#False_positives


I think the even bigger problem here is that if face recognition software is malfunctioning, those who have been identified by mistake are suddenly subjected to an investigation that can pose a serious threat to their privacy.


Not just "privacy". I would imagine some judges will issue search or even arrest warrants based on the information ("it's science!"), which might mean in the worst case the police might roll up to your house, shoot your dog, scare your children, take you into custody (even without an arrest warrant if you're somehow deemed "resisting") or even shoot you because they got scared of how you look, you may lose your job because you cannot show up to work while you're in jail, your neighbors will think you're a hardened criminal, etc.

Even if the courts do not convict in the end, a lot of damage might already have been done.

The pseudo-scientific hair analysis performed by the FBI showed the dangers of "science" and "tech"[1]. People went to real prison because investigators, judges and juries overestimated the flaky results that the sometimes outright negligent pseudo-science produced. I imagine some people were shot during arrests based on that evidence, died in prison or committed suicide.

There was also the Phantom of Heilbronn here in Germany, where police looked for a master criminal and serial killer for ages (2001 to 2009), but it turned out the materials they used to take DNA swabs at crime scenes were contaminated at the factory by a worker[2]. For years, nobody of the many involved in the investigations even considered questioning the DNA results.

So even if the science and tech is sound (which it really isn't in case of facial recognition yet, if ever), wrong application, common mistakes, and misunderstanding the results are real problems.

[1] https://www.fbi.gov/news/pressrel/press-releases/fbi-testimo...

[2] https://en.wikipedia.org/wiki/Phantom_of_Heilbronn


There's a lot of assumption about future behavior here.

I think there are a couple of reasons to be hopeful about court systems taking a more nuanced view of this technology: 1) it cannot be denied that it is imprecise, and 2) it's pretty easy for laypeople to understand (at least in principle) how it works. DNA evidence is effectively magic by comparison.

I actually think the imprecision is an asset for this technology. I would much rather this tool be 95% reliable than 99.99% reliable. The former inherently requires law enforcement to work harder; the latter tempts "oh, the machine said so, it must be right".


People went to prison based on expert testimony from FBI scientists about hair analysis in the past (not just warrants, actual convictions).

Like this guy, who spent 23 years in prison because FBI told the jury some hair they found was his... while it was actually not even human but dog hair.

>An FBI analyst testified that one of the hairs from the stocking mask linked Tribble to the crime and “matched in all microscopic characteristics.”

>Tribble’s attorneys were successful in obtaining mitochondrial DNA testing on the 13 hairs recovered from the stocking mask. None of the hairs—including the alleged match—implicated Tribble or Wright. Further, the analysis revealed FBI analysts’ errors, including mistakenly calling a dog hair human.

https://www.innocenceproject.org/cases/santae-tribble/

As for face recognition, I wonder if you're correct. I'd expect a lot of people think it's very precise. Well, at least the "My Sibling can unlock my iPhone with their face" debacle might have generated some press to combat that misconception.


If it's a spectacular failure (someone who doesn't look at all like the criminal), he'll be let go right away. If there's a reasonable resemblance, it would be as if some person reported the guy manually to the police.


You have a lot more faith in the justice system than others.

NB that being arrested once can hinder someone from getting visas or jobs in the future and can result in social exclusion.


I’m talking about the first world, where arrests aren’t public data.


>The general rule is that arrest records are public records. However, each state can determine whether they wish for such records to be readily available to the public. [1]

>Arrest records are generally open to the public unless they concern an active or ongoing investigation.[2]

>Since the arrest record is public, anyone can access the information by going to the jurisdiction’s government website. Also, anyone can obtain the arrest record by going to the county clerk’s office in person.[3]

[1]https://www.hg.org/article.asp?id=36914

[2]https://www.rcfp.org/private-eyes/arrest-records

[3]https://www.legalmatch.com/law-library/article/what-is-a-pub...


When traveling to e.g. the USA, arrests have to be declared on the online visa waiver. Similar for other countries. I have no idea what happens if you have any arrests, but I assume it is not totally convenient. Even if your record is sealed/not public in your country, that does not mean you can ignore this section of the form.


In the US - a first world country - arrests are often public data, although it varies from state to state:

https://sunlightfoundation.com/2016/02/01/the-perils-of-pers...


I have no personal experience of this, but try googling for "ESTA arrest" (without the quotes).


Arrest data is public and companies often buy it so they can build background reports :)


Sure, the classic "I have nothing to hide, so..." argument. Sounds great until disagreeing with a government policy labels you a threat to homeland security.

There is a quote, I forget from whom, that basically says "if you follow anyone for 30 minutes, you'll witness them commit a crime". Although it is likely to be a traffic violation, that doesn't change the fact that constant surveillance would "catch" you soon enough.


Because the American police isn't known for SWATTING and killing the wrong targets because of "identification errors."


So, you think that giving them access to fewer resources designed to help identification is the better way to go?


Yes, if there are concerns about those resources misidentifying people and being subject to racial/ethnic bias


lol tell that to china


Could you please stop posting unsubstantive comments to Hacker News?


"We lost the contract to Amazon"


This is a data retention/sharing issue, not a technology issue. Nobody cares if their face, license plate or cell phone is seen and mapped to their identity as long as the interaction and its metadata are soon forgotten. It's the storage of this information that causes the problem.

When police can accurately determine who they're looking for based on their appearance and can quickly determine whether the person they're bothering is that person they are more limited in how they can harass people in the present. If no records are kept big brother is more limited in his capacity to harass people in the future.

Most people don't have any warrants out, haven't recently been recorded on a security camera robbing a liquor store, etc. Being able to ID people reliably without stopping them is the last thing law enforcement wants (let's ignore the three letter agencies for a minute here) because it makes it much harder to stop people for being "suspicious" or "looking like a drug dealer" (license plates provide similar protection for cars, they can't just stop every silver Camry because a silver Camry was once stolen).

License plates and cell phones almost map 1:1 to your identity. Nobody has a problem with license plates or phones as long as they're not used to stalk people at scale. It's the creation of the data-set that can be used to stalk people. We need to stop creating these data sets in order to prevent the stalking at scale (which is the real issue here, even the article says that).

Imagine a world where the speed limits were followed like normal laws (i.e. they were reasonable enough that most people didn't have to go out of their way to follow them). The police in that world would hate radar guns because they're accurate (and can be audited for accuracy). Big brother would hate them because they don't record metadata for each reading. Facial recognition and ALPRs need to work like that. History has proven time and again that you can't keep technology in the closet. Better it be used on our terms than theirs.

We're going to have to confront the data retention and government/commercial stalking/surveillance issue eventually. Getting angry over ALPRs, stingrays or facial recognition is just playing whack-a-mole. I'm still gonna play whack-a-mole until we solve the big problem, though.

Edit: If people disagree I'd love to hear why. This isn't Reddit.


I imagine people are disagreeing with the way you describe it as a software problem when the firmware in those devices will be unverifiable, batteries will conveniently run out, and they surely won't be 100% tamper proof.

Just type in 'alexa google recording' and see the news of them being used.

Aside: Everyone I've described the securus scandal to has a problem with phones.


I followed the linked article cited in this post - "Face Off: Confronting Bias in Face Recognition AI" - which states:

"Fortunately, the matter of algorithmic ethnic bias, or “the coded gaze” as Buolamwini calls it, can be corrected with a lot of cooperative effort and patience while the AI learns. If you think of machine learning in terms of teaching a child, then consider that you cannot reasonably expect a child to recognize something or someone it has never or seldom seen. Similarly, in the case of algorithmic ethnic bias, the system can only be as diverse in its recognition of ethnicities as the catalogue of photos on which it has been trained."

As this is extremely concerning, I had some questions:

Are the training sets for companies selling facial recognition technology considered proprietary and therefore not verifiable as free from bias?

Is it not possible to develop a standard or criterion for verifying that a company's training set has a sufficient distribution to be free of racial bias? Is this just not possible for some technical reason?
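
For illustration, such a criterion could be as simple as checking group shares against a target distribution. A minimal sketch, assuming the training images carry demographic labels - the group names, counts, and tolerance here are made up:

    from collections import Counter

    def check_balance(labels, tolerance=0.10):
        """Flag any group whose share deviates from a uniform split
        by more than `tolerance`. `labels`: one label per training image."""
        counts = Counter(labels)
        expected = 1 / len(counts)
        return {group: (n / len(labels), abs(n / len(labels) - expected) <= tolerance)
                for group, n in counts.items()}

    # Toy dataset: 1,000 images with made-up group labels.
    sample = ["group_a"] * 600 + ["group_b"] * 300 + ["group_c"] * 100
    for group, (share, ok) in check_balance(sample).items():
        print(f"{group}: {share:.0%} {'OK' if ok else 'over/under-represented'}")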


There is not a single statement in that article that is not some kind of fear-implying claim that fails under observation. The "FR is biased against people of color" issue was publicized a few years ago, and the industry adjusted - most systems are no longer biased, and that bias was a mathematical color-space issue, nothing racial. It's an empty article cut-and-pasting fear statements from popular media, and simply Kairos getting their name in the news.


> that bias was a mathematical color space issue and nothing racial.

What is the difference? Bias towards a color? What are you saying?


The mathematical bias was the fact that dark skin has less illumination range, and therefore there is less information to separate dark-skinned faces. The bias was not such that a darker-skinned person would be confused with a person of a different tone; rather, the system would have issues telling the difference between people with the same darker skin tone and similar facial shapes.


The bias will also have biased consequences: for one dark-skinned criminal you search for, you will find more suspects with a much higher probability or score than for a light-skinned person, which would mean more arrests/interrogations.

I would also not blame the math but the implementation aka the coding/modeling of the problem.


which has been corrected.


Technically speaking, face recognition has been a largely solved problem since the early 2000s, despite a few remaining challenges such as non-frontal, tiny, or partial faces in real-time detection. The most common approach is to output the bounding box (i.e. rectangle) coordinates for each face present, extract the facial landmark points (nose, pupils, mouth, etc.) for the target face (the more landmarks you have, the more accurate the result), perform some Euclidean distance calculation (a sort of hash) and compare this to the existing face hashes (i.e. the police database). The highest score is considered a potential match.
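
To illustrate the matching step just described, here is a minimal sketch in Python. It assumes some upstream detector has already turned each face into a fixed-length landmark/embedding vector; the function name, vector size, and threshold are illustrative, not any particular library's API:

    import numpy as np

    def best_match(query, database, threshold=0.6):
        """Return the identity whose stored vector is closest to `query`
        (Euclidean distance), or None if nothing is under the threshold."""
        best_id, best_dist = None, float("inf")
        for identity, stored in database.items():
            dist = np.linalg.norm(query - stored)  # Euclidean distance
            if dist < best_dist:
                best_id, best_dist = identity, dist
        return (best_id if best_dist < threshold else None), best_dist

    # Toy usage: random 128-dimensional vectors stand in for real embeddings.
    rng = np.random.default_rng(0)
    db = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
    probe = db["person_a"] + rng.normal(scale=0.01, size=128)  # near-duplicate
    print(best_match(probe, db))

In a real system the vectors would come from a landmark extractor or an embedding network, but the comparison against the database is essentially this loop.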

You can develop your own face recognition software, provided a good dataset (e.g. CelebA, LFW, or the feds' dataset if you have access), using our embedded computer vision library[1] that we just released earlier this month, or using the PixLab /facecompare HTTP endpoint (https://pixlab.io/cmd?id=facecompare).

[1]: https://github.com/symisc/sod



