A one-year moratorium on police use of Rekognition (aboutamazon.com)
603 points by robbiet480 on June 10, 2020 | 286 comments



The ACLU is doing a lot of great work to hold government accountable when it comes to facial recognition tech.

https://www.aclu.org/press-releases/aclu-challenges-fbi-face...

https://www.aclu.org/press-releases/aclu-challenges-dhs-face...

Would be great to see Amazon's support.

The ACLU ran an experiment with Rekognition and these are their findings:

"Using Rekognition, we built a face database and search tool using 25,000 publicly available arrest photos. Then we searched that database against public photos of every current member of the House and Senate. We used the default match settings that Amazon sets for Rekognition.

... the software incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime.

... Academic research [0] has also already shown that face recognition is less accurate for darker-skinned faces and women. Our results validate this concern: Nearly 40 percent of Rekognition’s false matches in our test were of people of color, even though they make up only 20 percent of Congress."

https://www.aclu.org/blog/privacy-technology/surveillance-te...

[0]: http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a...


I tried to recreate this using Rekognition. I used a public mugshot database. Here are my results [0] if you're curious about what these false matches look like.

[0] https://medium.com/ml-everything/how-facial-recognition-work...


I disagree with this statement:

> However, just from a cursory review of the mugshots, people of color were disproportionately represented in the mugshot database. So it’s not entirely fair to criticize the facial recognition technology for matching more people of color.

No, the point is the disproportionate representation, and it is a fair criticism, because it is a fundamental limiting factor in the use of the technology. You seem to be distinguishing the "technology" from the data source, but that is not possible. The quality of the technology depends on the quality of the data provided to it.

> I believe a curated list of mugshots with certain characteristics would result in a similar representation of mismatches. Nothing I have read about the technology suggests that there are inherit [sic, is inherent] bias.

This bias is in the data, not the algorithm per se. That much is accepted in many, even most, criticisms of machine learning applications in the social sphere. It comes up a lot in NLP models, for example with assumptions about gender for professions.

It is a technology problem because it demands a technological solution: removing any and all bias by hand-curating datasets is just not a scalable approach.


I feel like we're missing the popular vocabulary to describe some of these recurring problems.

The algorithms reflect and amplify biases in the data. They also go into feedback loops once they affect the reality being represented in data.

Imagine that an algorithm selects individuals at an airport for search. Contraband smugglers are caught. The dataset now includes them, evolving the bias. We have already seen this at play on social media, recommendation engines, fraud detection, etc.

The future is worrying. There are ratchets on technologies such as these. Unless there are extraordinary counter-reactions, they will be "affecting their own datasets" en masse very soon.
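A toy simulation of that airport loop (Python, all numbers invented) shows how an initial skew in the data sustains itself once the search policy and the dataset feed each other:

    import random

    # Toy model of the airport-search feedback loop described above.
    # All numbers are invented for illustration; both groups smuggle at
    # exactly the same rate.
    random.seed(0)
    smuggling_rate = 0.01
    caught = {"A": 5, "B": 10}  # historical data starts skewed toward B

    for day in range(100_000):
        total = caught["A"] + caught["B"]
        for group in caught:
            # The "algorithm" allocates searches in proportion to past catches.
            if random.random() < caught[group] / total:
                if random.random() < smuggling_rate:
                    caught[group] += 1  # the new catch goes into the dataset

    print(caught)  # B stays over-represented: the skew is self-sustaining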


The term I've heard for this phenomenon is "mathwashing."


The old chestnut for this is "lies, damned lies, and statistics."


The problem of disproportionate representation is a problem whether you're using technology or not. It's not a new problem or specific to tech. If a large percentage of the dangerous interactions a police officer has with individuals fit a certain profile (e.g. the other party was a man), the officer will likely exhibit a different set of behaviors when dealing with that profile. Sometimes it's appropriate, many times it's not. The same is true for judges and others in the system, and you'll see stricter sentencing for certain groups. This can lead to a self-reinforcing cycle of oppression.

The difference is that with technology these biases can at least be addressed directly, to create a more objective, fairer system.


The major issue is that the tech is not ready. If you reduce the false positive rate by 5% but you facilitate processing 20% more claims (to use some arbitrary numbers) you are still increasing the absolute number of false positives by 14%. With the punitive American justice system, that is 14% more people who will be detained unjustly, and then when they leave they will be denied most opportunities for housing, jobs, and other things needed to participate in society.
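Spelled out with those same arbitrary numbers:

    # Same arbitrary numbers as above, spelled out.
    baseline_claims, baseline_fp_rate = 1.0, 1.0   # normalized baseline
    claims  = baseline_claims  * 1.20              # 20% more claims processed
    fp_rate = baseline_fp_rate * 0.95              # false positive rate cut 5%

    print(claims * fp_rate)  # 1.14 -> 14% more false positives in absolute terms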

Technology can address biases but we need to make sure that they are addressed sufficiently, so that we do not end up causing more problems than we solve.


> If you reduce the false positive rate by 5% but you facilitate processing 20% more claims (to use some arbitrary numbers) you are still increasing the absolute number of false positives by 14%.

This is a really interesting point that I'll keep in the back of my mind, thanks for that! I hadn't even thought it through in such a concrete manner but I think this really hits the nail on the head.


Some of those false positives are laughably similar. I have no doubt an LEO would be convinced they have the right suspect.

I actually think it’s probably better than most people’s naked eye recognition in a crowd.


Yeah, it seems to me like the real problem is that people actually have a lot of doppelgangers, and applying facial recognition to the entire population, even if it's just as accurate as a real person, is just not a useful thing to do.


So this is actually a good application of the birthday paradox, and the fundamental misunderstanding people have of it is already a real problem in law enforcement.

Take DNA testing. We all know that given two samples the odds of a false positive are incredibly low. So DNA testing is a great tool for eliminating people who might otherwise be suspects, or as further evidence against the guilty.

The problem is that given this DNA database, someone decided "let's find our suspect by looking for a match". With a 1-in-a-billion chance of a random match and 100M samples, your chances of getting a false positive are really high.
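Back of the envelope, assuming each comparison is an independent 1-in-a-billion event (which is generous):

    # Chance of at least one coincidental hit when trawling the database,
    # assuming each comparison is an independent 1-in-a-billion event.
    p_match = 1e-9
    db_size = 100_000_000

    p_false_hit = 1 - (1 - p_match) ** db_size
    print(f"{p_false_hit:.1%}")  # ~9.5% per search; across thousands of
                                 # searches a year, coincidental hits are certain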

The problem is that there are instances where a DNA match alone is used to prosecute or even convict people.

In recent years this problem has gotten worse due to the rise of familial DNA matching. Given two samples, we used to only have the ability to say whether they were a match or not. Now we can say how much of a match they are. How much of a partial match is enough? What's more, you may be implicated by the stored DNA of relatives.

Facial recognition is far more imprecise than DNA. So yeah I fully expect this to get abused by prosecutors and law enforcement.


It's something like the "rare disease" paradox in medical testing, with sensitivity and specificity.

If I increase the population I test to include any warm body, eventually I'll end up with more false positives than real positives.

If you are looking for one male suspect and comparing them to all the male faces in the US (let's assume it gets gender correct), a 99.9999% (six 9s) accurate algorithm would give you something like:

1 true positive
182,499,818 true negatives
182 false positives

Broad scale facial recognition is just an outright stupid thing to do as a sole measure of identification without the use of other information.
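Worked through explicitly (assuming ~182.5M males and a uniform one-in-a-million error rate):

    # One real suspect, ~182.5M candidate faces, one-in-a-million error rate.
    population = 182_500_000
    error_rate = 1 - 0.999999  # six 9s -> 1e-6

    true_positives = 1
    false_positives = (population - 1) * error_rate
    precision = true_positives / (true_positives + false_positives)

    print(round(false_positives))   # ~182 innocent people flagged
    print(f"{precision:.2%}")       # ~0.55%: a flagged face is almost never the suspect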


Yes. I think in the modern world, with these methods and massive data sets being used in law enforcement, the medical sector, etc., the people working there should undergo a lot of training in how basic Bayesian inference works. We are otherwise in a world of hurt: healthy people being told they have some really serious rare disease, maybe even starting treatment; innocent people being prosecuted or killed during raids; etc.


Yeah... training almost assuredly won't happen sadly. Especially for cops.


I teach this stuff (biomedical engineering statistics) to undergrads.

The hardest thing for them with probability is false positives. Conceptually it does not seem to align well with how the human brain works. I've had students literally get so frustrated they cry, because they know what the math says but can't accept that you can test positive for a disease and not have it. They know they are wrong but it's just this weird sticky misconception.


I think you hit the nail on the head with: it is 'just not a useful thing to do'.

This seems like a clear example of why facial recognition is a technology that is just not 'solved' yet. The appearance of people's faces, especially from similar ethnic backgrounds, is just too similar for an ML model to parse out with any confidence.

I have noticed this in real life. As I get older, I notice it more and more. I'm sure many of you all have too. There are very distinct patterns, or 'buckets', that human faces tend to fall in. I think our brains tend to naturally categorize them accordingly.

It is probably subconscious. I might not be able to articulate it, or put a definite 'name' on a group. But I know I am constantly seeing patterns of faces in public. People I don't know, and have never met, but they remind me of other random people I have seen in public. Or maybe they remind me of a popular celebrity that everyone knows.

Either way, something goes off in my head. I can't help but think to myself, they must have some sort of similar lineage, or genetic background. I subconsciously categorize them into a bucket with others I've seen.

I imagine this is similar to how Rekognition and other models work. I thought the blog post from the parent comment @bko is a fantastic example of this. It is actually amazing, when you think about it, that the ML model can match these faces up as well as it does.

To the naked eye, it is clearly not the same person. Rightly so, considering all the images were in the range of 70-80% confidence. But many are remarkably close. I think this illustrates the concept I am trying to describe. You can notice it, even with the naked eye.

All of this rambling is to say, I agree with Amazon's moratorium on Rekognition.

As impressive as the technology is, it should probably not be used to try to pinpoint specific individuals yet, or whatever else folks might be erroneously trying to use it for. If we are to trust facial recognition to identify specific individuals, it should probably be approaching near-100% confidence, and I imagine that level of confidence is a long way off.


I sorta think about it as "what's the likelihood of someone winning the lotto twice in a year?" Certainly the probability that any specific person wins twice (assuming here these are independent events...) is ridiculously small. And yet the probability that at least one person somewhere is that lucky is incredibly high!

Now ask if my neighbor looks like any random person on the street: sure, the chance is small. But the chance that we find two people somewhere who look very similar is incredibly high.
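Same math as the lotto, with invented numbers:

    # Invented numbers: say any two random people have a 1-in-10-million
    # chance of being near-doppelgangers, in a city of 1 million.
    p_pair = 1e-7
    n = 1_000_000

    print(n * (n - 1) / 2 * p_pair)     # ~50,000 expected lookalike pairs
    print(1 - (1 - p_pair) ** (n - 1))  # ...but only ~10% odds that one
                                        # specific person has a lookalike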


I think the interesting question is what is the chance of doppelgangers given population size and genetic similarity of the population.


Sounds like the prosecutor's fallacy to me unless great care is taken to also consider lots of supplementary information when performing a match. (https://en.wikipedia.org/wiki/Prosecutor's_fallacy)


Recommend reading the part about Sally Clark. The last two sentences are an incredibly short and understated horror story.

"Sally Clark was a practising solicitor before the conviction. After her three-year imprisonment she developed a number of serious psychiatric problems including serious alcohol dependency and died in 2007 from acute alcohol poisoning"


It's only not a useful thing to do if you care about innocent people being wrongfully convicted, which, as you look at the US justice system [0], is debatable. From an outside perspective, the US justice system seems to care more about the false negative rate than the false positive rate of conviction.

There is already a whole slew of dubious methods [1] (bite mark analysis, blood spatter analysis, fibre comparison) being used to put people in prison or to death; these things are known to be bullshit, but they are still being used. I really think we as technology-literate people should fight as hard as we can against the introduction of facial recognition in the justice system, because once it becomes commonplace it will be really hard to undo.

[0]: https://www.innocenceproject.org [1]: https://www.latimes.com/opinion/op-ed/la-oe-humes-forensic-e...


Denmark has a fairly homogeneous population (discounting the much-needed immigration). I swear there are basically 5-8 Danish feature sets, and sometimes I amuse myself seeing how they are put together.

This of course means that there are doppelgangers or near-doppelgangers all over the place. Even funnier, when I first moved back here from the U.S. I was staying at a place where there was this guy who looked exactly like one of my friends from the U.S., even down to facial expressions and mannerisms (maybe that was because he was always extremely stoned).


Yeah, but I am not sure it is suggested to be used in any context other than where a human would be comparing faces too, i.e. searching for someone in a crowd or on CCTV, or checking a passport picture against someone at the gate. None of that is burden-of-proof level of accuracy. But it just allows you to better cover a crowd or process more people at the airport.


If you had accuracy similar to a human, and assuming this is an existing procedure humans do use, then of course it's useful -- you're doing the same task cheaper/faster.

Presumably you'd also use the same techniques as used by humans today to narrow down further, like taking into account location, time, etc of match


Doing the same thing but faster and cheaper doesn't necessarily mean it's better. If there's a high cost to a procedure, then the user would need to show the value of checking a candidate in some other way for it to be worth the effort. Making it easier and faster means you're more likely to check against random innocent individuals.


Fast, cheap, good. You can only ever choose two.


Often, you are lucky to get one.


Seems like the equivalent of p-hacking to me. Sure they could apply complex queries to narrow down the suspects in a fair way, but I don't trust them to do that.


> you're doing the same task cheaper/faster

Move fast and break lives.


I agree. It's incredible that the mismatches were only between 70-80%.

For comparison, I did another test where I took one of those YouTube videos that shows you the same person's face over ten years. I used the original 12-year-old boy as the image to match against. Across 1,300+ images, it had >70% confidence in all but 4 images (2 had big sunglasses, in one the guy was in green-face, and the other one was actually his wife). And this is from a single picture that's ten years old.

In the first part I looked at an open-source facial recognition model that did considerably worse.

https://medium.com/ml-everything/how-facial-recognition-work...


Really? They all look very different to me.


I see similarities in most, especially if I take into consideration that one picture could be an "age progressed" image.


> Playing tricks using statistics is not new. OJ’s defense lawyer argued that despite OJ being an abusive husband, it is statistically unlikely that he killed his wife.

> > Only one in a thousand abusive husbands eventually murder their wives

> The more pertinent question is what percentage of murdered women were murdered by their abusive ex-husband?

Shouldn't this be "what percentage of murdered women with an abusive ex-husband were murdered by said ex-husband"?
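Working that through with invented, purely illustrative rates shows why the defense's number is a misdirection:

    # Invented, purely illustrative rates -- the point is the structure of
    # the conditional, not the actual case statistics.
    p_murdered_by_abuser = 1 / 2500    # abused woman killed by her abuser, per year
    p_murdered_by_other  = 1 / 20000   # abused woman killed by someone else, per year

    # Given that an abused woman WAS murdered, how likely was it the abuser?
    posterior = p_murdered_by_abuser / (p_murdered_by_abuser + p_murdered_by_other)
    print(f"{posterior:.0%}")  # ~89%; the "1 in 1000 abusers kill" figure
                               # answers a different, irrelevant question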


Statistics is only capable of predicting patterns across a sample of data. It is pretty inaccurate when you try to use it to predict a single data point. The lawyer knew that, but also knew a ton of people don't understand stats.


Your remark on the racial bias is what also came to my mind immediately. It would only be fair if the percentage of mugshots that are of people of color in their 25k sample was published.


This isn't about being fair to the algorithm, it's about being fair to the people subject to the algorithm's judgement


Which is a distorted discussion already. Instead of questioning the benefit of the technology for policing on the whole, it has already shifted to the issues of bias. In the end the algorithm and the data will of course have bias in some form, but that isn't even important at that point.

Yay, we have an algorithm judging people, but is it fair to Canadians? Completely off...


This isn't distorting the discussion at all. From the point of view of the ACLU, new technology should only be adopted by policing if it can be proven to not perpetuate or exacerbate existing problems of bias, which IMO is a totally reasonable position. We don't accept the "move fast, break things" ethos in fields like aviation, for example, and if we only listened to technologists, society would be too cavalier about collateral damage affecting innocent people. One could argue that society is already too cavalier about this.


I disagree because you already anticipate large scale deployment with that argument.


Facial recognition AI products were already being sold to police departments and customs agencies. Large scale deployment is in the process of happening. Maybe not at your specific agency, but the ACLU is a national organization, and in their eyes one additional false positive caused by increased efficiency is one too many.


Shouldn't matter. What we really want is a classifier that does not take skin colour (or ethnicity) into consideration as a feature.

I know how ML works but I believe we should demand better from these technologies than the limits of our own biases.

Right now, this machine learning algorithm is apparently about as smart as a bigot arguing "yea but percentages show that crime is in fact higher among blacks!". It mainly shows how systemic the racism is, that a dumb ML algo picks up on it.

This is not solved by showing the bigot less statistics about black crime, but by showing them how to pull their head out of their ass.

We should expect no less from our ML technologies, otherwise you'll keep running behind the facts, always fixing errors after they have been learned and made.

Yeah that is hard and we have no idea how to approach it. But the alternative appears to be writing computer programs with the reasoning skills of a racist cop.


Does it matter? More than half of the world's population are people of color.


It matters for the ACLU because it's an American organization focusing on American injustices; in the US, 73% of people are white, 12.7% are black, and 17.6% hispanic.

You're using the strawman and whataboutism logical fallacies, but you're not actually making a point.


Your statistics are misleading. You put hispanics in twice, since the non-hispanic white population of the United States is around 60%.


As someone who hasn't explored the service, thanks for taking the time to do this.


The bearded guy `deb_haaland.jpg` being matched with a non-bearded guy is interesting; shaving would totally fool me, but looking at the match, it definitely looks plausible. Good job, Facial Recognition AI!


Deb Haaland is the lady underneath. The bearded guy is Rick Crawford.


> Would be great to see Amazon's support.

From one of the links on the left of the article: https://blog.aboutamazon.com/policy/amazon-donates-10-millio...

> Update, June 9: Since announcing our $10 million donation, we’ve heard many employees are making their own contributions—and we’ve decided to match their donations 100% up to $10,000 per employee to these 12 organizations until July 6, 2020.


I was thinking filing amicus briefs in support of the ACLU in the cases mentioned above but this is helpful too.


I'm a big ACLU supporter, but thought this was poorly done. They never released either the database or the code for this testing, and had configured the recognition level against Amazon's recommendations.


In the ACLU's posts, they say that they used the "default match settings". In their response [0] to Amazon's response, the ACLU links to a guide published by Amazon [1] intended to "Identify Persons of Interest for Law Enforcement" that does use `searchFaceRequest?.faceMatchThreshold = 0.85;` (this is still the case today).

Fast Company [2] writes about this as well: "The ACLU in both tests used an 80% match confidence threshold, which is Amazon’s default setting, but Amazon says it encourages law enforcement to use a 99% threshold for spotting a match." That bit of the article links to the CompareFaces API documentation [3] which (still) states "By default, only faces with a similarity score of greater than or equal to 80% are returned in the response".
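For reference, this is roughly what that knob looks like through boto3; a sketch only, with placeholder bucket and file names:

    import boto3

    # Sketch of the threshold under discussion, via the CompareFaces API.
    # Bucket and object names are placeholders.
    client = boto3.client("rekognition")

    response = client.compare_faces(
        SourceImage={"S3Object": {"Bucket": "my-bucket", "Name": "probe.jpg"}},
        TargetImage={"S3Object": {"Bucket": "my-bucket", "Name": "candidate.jpg"}},
        SimilarityThreshold=80,  # the documented default; Amazon says it
    )                            # encourages 99 for law enforcement

    for match in response["FaceMatches"]:
        print(match["Similarity"])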

Have you seen/read something else about this?

[0] https://www.aclu.org/press-releases/aclu-comment-new-amazon-...

[1] https://aws.amazon.com/blogs/machine-learning/using-amazon-r...

[2] https://www.fastcompany.com/90389905/aclu-amazon-face-recogn...

[3] https://docs.aws.amazon.com/rekognition/latest/dg/API_Compar...


My guess is: Amazon writes that 99% is the recommended threshold because there is little chance of false positive. If an agency is deploying this system and complains that it isn’t working well, a solutions architect will say in a meeting (but not in writing) to lower the threshold. If it comes out that false positives occur, AWS isn’t responsible.


True, but one can't blame Amazon in this case. The agency can reduce the threshold to 0% (ad absurdum) and claim that Amazon's technology is not working.


You can blame Amazon if there's no point on the threshold scale where it works adequately.


I guess one must define "adequate" first. The default value of 80% could work fine if you are developing a "find your celebrity doppelgänger" game while law enforcement should probably use 99%.


I think you'd have a point at which the threshold increases but the probability of the true positive being in your results starts to drop severely. 99% might be useless if you have one or two hits and they are unlikely to be correct. You can't assume that the one you're looking for will be a 100% match; if it was, then you'd just set the threshold to that, presto.


>Fast Company [2] writes about this as well: "The ACLU in both tests used an 80% match confidence threshold, which is Amazon’s default setting, but Amazon says it encourages law enforcement to use a 99% threshold for spotting a match

Then this whole thing is potentially misleading because there's a huge difference between 80% and 99%. It's probably nonlinear and they could possibly see their false matches drop to 0. This is not a fair test - or rather, the conclusions are not quite supported by the parameters.

Not that I'm defending police use of facial recognition tech, I think it's abhorrent, though possibly inevitable.


If they made a facial recognition tool available to law enforcement and the marketing says "requires no machine learning expertise to use", then I think it's fair to look at any value of the threshold parameter they make available. Especially a parameter that, by changing it, will give you the answer you want more often.

I'm deeply troubled by the text I've seen here implying this threshold is some accuracy percentage or positive predictive value percentage. Unless God is working behind the scenes at AWS they can't make any claim about the accuracy of the model on an as yet unseen population of images.

That's even before getting to the more esoteric map vs territory concerns like identical twins, altered images, adversarial makeup and masks, etc.


Just to make sure I understand, which "whole thing" is misleading? The ACLU's test? Amazon's response?

As for the test, you say it's not a fair test. The point / conversation right now seems to be about the choice of parameters used by the ACLU. As far as I see / understand, the ACLU used the default parameters (and/or those recommended in the documentation / articles that are still up today with those same non-99% values).

What would have been a better / fairer test?


What are police departments using? My uninformed guess would be not 99%. I think therein lies the concern...


My cynical guess would be "whatever the lowest number they can get away with using".

I would bet good money that cops' KPI goals benefit from false positives, since they'll reward higher "number of identified/interviewed suspects" and "number of arrests" as a positive thing even if "number of convictions" doesn't line up.

Even more cynically, I'd bet this is a powerful technique for ambitious cop promotion, and that there's little blowback on fraudulently manipulating parameters that adversely affect POC much more significantly than white people.

Thinking about it, I'm now recalling the multiple reports of police departments claiming not to be using clearview.ai, only to have to backtrack when Clearview's customer data got popped and it became public knowledge that individual cops were signing up for free trials, which their department/management either chose to hide or didn't know about. That's reasonably compelling circumstantial evidence to me that ambitious cops are quick to jump on unproven and unauthorised technology, with insufficient oversight or with management actively avoiding oversight for them...


In regards to the KPIs, this is a known reality. Most states get money from the federal government's highway safety program. The states then disburse it to local police departments, and they expect high numbers of citations (or even warnings) to be reported back up the chain. It is only for DUI that verdicts are considered, and that's only among the smarter states. Related to crime, there are NO KPIs based on the final outcome; all are based on the elements the police are able to carry out and be accountable for on their own. This makes sense in some ways beyond self-promotion. I will also say that the general inflation of KPIs in order to justify promotions, grant renewals, etc. is RAMPANT in state and local governments, but especially in policing when it comes to new tech investments and promotions.


If they can turn the knob, why wouldn’t they? This stuff isn’t admissible in court, and you can sweep for potential matches to follow up on.

If the default is 80, most will be 80. The SE may say “I’m told to inform you that you should use 99.”, but I’m sure he is winking.


Wouldn't it be more likely that they say "ok, we can interview/investigate/whatever X number of people" and then they adjust the threshold to produce that number? If 80% gives them 10,000 hits and 99% gives them one or none, then nobody is going to just go with either setting.


I'd guess that with the potato quality of facial pictures from incident-scene security or phone cameras, you might want lower confidence matches to get outcomes out of lousy pictures.


> had configured the recognition level against Amazon's recommendations.

Citations?

My understanding was that the ACLU used the default settings.

July 26, 2018 — Amazon states that it guides law enforcement customers to set a threshold of 95% for face recognition. Amazon also notes that, if its face recognition product is used with the default settings, it won’t “identify[] individuals with a reasonable level of certainty.”

July 27, 2018 — Amazon writes that even 95% is an unacceptably low threshold, and states that 99% is the appropriate threshold for law enforcement.

https://www.aclu.org/press-releases/aclu-comment-new-amazon-...

Either way, the defaults are the problem if the application is law enforcement.

"Defaults have such powerful and pervasive effects on consumer behavior that they could be considered “hidden persuaders” in some settings. Ignoring defaults is not a sound option for marketers or consumer policy makers. The authors identify three theoretical causes of default effects—implied endorsement, cognitive biases, and effort..."

https://journals.sagepub.com/doi/10.1509/jppm.10.114


I agree. I wish the ACLU would re-run the results at 99%. But Amazon's example post about law enforcement has it set to 85%.

I don't think this 99% thing is communicated properly at Amazon if it's getting through blog posts like this.

So I think a valid criticism is that we need to make sure that it's higher.

https://aws.amazon.com/blogs/machine-learning/using-amazon-r...


Do we know police actually use 99%?


Nope, and I doubt they do. But a test at 99% would be better for Amazon/ACLU to take on.


I do not understand; a fair test is to replicate reality. Also, I am wondering whether each city has a different software package with its own config, or an IT guy that tweaks the config.


I posted in another comment, but I tried to recreate this using default 70% match. The dataset was 440 images of congressmen and 1,756 mugshots. There were ten mismatches between 70-77% certainty

https://medium.com/ml-everything/how-facial-recognition-work...


It seems realistic that other users would ignore Amazon's recommendations for proper configuration.


Especially given that turning the parameter _down_ will give you more matches. This is great for demos. "No match found" is not so great.


If so inclined, support the ACLU:

https://action.aclu.org/give/donate-to-aclu


The ACLU used to be at the top of my list of personal favorite charities, but I've found they've become significantly more reactionary in the last few years, prioritizing emotionally outraging causes over utilitarian causes, so I don't donate to them any longer.

With that said, some of their work is still great, and I'm thankful for it.


I'm scared of that outcome. I'm glad you donated to them in the past so you obviously valued them at some point. Do you have articles to back that up? I'm not trolling; I wonder if this is the message that the "powers-that-be" want us to think.

Remember how McDonalds was the victim of a baseless lawsuit? Well, it wasn't actually the case, but that sure benefitted corporations who can now assert most lawsuits against them are frivolous.

https://www.caoc.org/?pg=facts


>Ira Glasser says the organisation he once led has retreated from the fight for free speech.

‘The ACLU would not take the Skokie case today’: https://www.spiked-online.com/2020/02/14/the-aclu-would-not-...

Former ACLU board member Wendy Kaminer:

The ACLU Retreats From Free Expression: https://www.wsj.com/articles/the-aclu-retreats-from-free-exp...


The ACLU Declines to Defend Civil Rights: https://www.theatlantic.com/ideas/archive/2018/11/aclu-devos...


Having read extensively on that case, my opinion changed. I now consider the lawsuit baseless and the verdict primarily the result of a sympathetic plaintiff and an unsympathetic defendant (the stereotypical sweet old lady vs the stereotypical evil money-grubbing mega-corp).

Their coffee is just as hot nowadays, and lowering the temperature to the point where the effect on Stella would have been meaningfully different would result in lukewarm, under-extracted coffee that fewer people would be interested in. Further, the sheer quantity of McDonald's coffee moved every year without incident implies user error, rather than product error.

Yes, I know what the jury said and how they divided up the blame. I disagree with their conclusion.


What you say about the coffee temperature is true as far as I've been able to determine, but didn't McDonalds change their cup design in response to the injury? From what I understand, the cups they used at the time were prone to collapsing.


> and to lower the temperature to the degree to where the effect on Stella would have been meaningfully different would result in lukewarm, under-extracted coffee that fewer people would be interested in.

Citation needed.


Hot water systems in homes are generally set to not go hotter than 120F, because much hotter than that could burn somebody in seconds. 120F is a bit more than "lukewarm" (which I take to be an exaggeration), but is nevertheless cooler than anybody serves standard coffee at. 140F (60C) water can give you third degree burns in 5 seconds.


(disclaimer: quick back of the envelope research, mistakes might have been made, void where prohibited, kids eat free)

Stella's own doctor testified[1]:

>Lowering the serving temperature to about 160 degrees could make a big difference, because it takes less than three seconds to produce a third-degree burn at 190 degrees, about 12 to 15 seconds at 180 degrees and about 20 seconds at 160 degrees.

The NCA (a coffee industry group) recommends[2] holding at a temperature of 180-185, due to "rapid cooling", and consuming at or below 140.

Stella's injuries were exacerbated by:

* The hot coffee permeating through thin sweatpants and being held against the skin.

* Her age - 81 years old at the time of the injury. Older skin is damaged more easily[3], and would also have implications for her mobility (how fast she could remove the soaked sweatpants)

Some experiments [4] show that coffee served at 180 will cool to around 162 in 5 minutes, 148 within 10, and 138 within 15. 70% of McDonald's business is through the drive-through [5], so most customers would be getting their coffee to go.

The question I'm unable to find a satisfactory answer for is how long it takes the average customer to receive their order and return home. I could probably figure that out if I knew how far the average customer was from their store, but that information is not readily available.

If holding at 180 results in an optimum drinking temperature of around 130 in about 15 minutes (per [4]), then this is the optimal temperature to hold at for product quality if the average customer lives within 15 minutes of a McDonald's.

If you were to hold at 160, using the info from [4], the coffee would fall below this optimum temperature in about 10 minutes and require reheating, which alters the flavor.
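For what it's worth, the cooling figures cited from [4] are roughly consistent with plain Newtonian cooling if you assume ~70F ambient air:

    import math

    # Newton's law of cooling: T(t) = T_env + (T0 - T_env) * exp(-k*t).
    # Assumes ~70F ambient; k is fitted from the 5-minute data point in [4].
    T_env, T0 = 70.0, 180.0
    k = -math.log((162 - T_env) / (T0 - T_env)) / 5

    for t in (5, 10, 15):
        T = T_env + (T0 - T_env) * math.exp(-k * t)
        print(f"{t:2d} min: {T:5.1f} F")  # ~162 / ~147 / ~134 vs. 162/148/138 cited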

[1]: https://web.archive.org/web/20150923195353/http://www.busine... (page 4, bottom)

[2]: https://www.ncausa.org/About-Coffee/How-to-Brew-Coffee

[3]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3377829/ (It is widely accepted that elderly burn patients have significantly increased morbidity and mortality. Irrespective of the type of burn injury, the aged population shows slower recoveries and suffers more complications.)

[4]: https://www.ukessays.com/essays/mathematics/equation-to-mode...

[5]: https://www.reuters.com/article/us-mcdonalds-meeting-idUSKBN...


Do people really buy Mcdonald's coffee, drive home, and then drink it? I'd have thought most people consume their drive through stuff while still in their car.


Yeah, I do that all the time. (I mean, I certainly do now, but even before the pandemic I would usually wait to get home before eating my drive-through meals.)


I get junk mail from the ACLU, but haven't given anything to them in a few years. They recently sent me an email saying "thanks for your wavering support".

Bringing up Stella Liebeck makes me feel irrationally hostile; I've had a negative reaction to people "busting the myth" of that lawsuit on the internet for probably 20 years.

I certainly believe that suing corporations with deep pockets is a reasonable and moral way to deal with medical bills in a country without universal health care. It's not a good system, but if you can get away with it, why not?

And I know well that on average, the scourge of frivolous lawsuits against corporations is a myth, because I've worked in the legal industry and have a perspective based on many other lawsuits.

And if McDonald's served coffee without a properly secure lid, or some other defect then they should be held responsible for every penny of damages.

However, I am irritated by anyone who may insist that it is a "fact" that serving coffee which is hot (but less than 212 deg) is negligent in itself. And if I continue to see this "myth busted" or "fact checked" for the next 20 years, it's not going to change my opinion, because assuming I live that long, I'm going to be boiling water for coffee on my stove almost every day.


That coffee was hot enough to cause bad burns, and did cause bad burns. If you wanna fact check it, heat 12oz of water to 185F and dump it in your lap.

It was known by management to be dangerous; that was done deliberately so that people would be forced to sip slowly, to discourage refills. Ain't negligence in light of intent


When I make coffee it is also right around 180-185F. I checked. I also scald myself with live steam once in a while so I know it's easy to burn yourself. I could cite some industry association's standard for brewing temperature, but I know nobody cares.

The point I'm trying to make is not that coffee should be hot, because I know arguing that is futile.

The point is, that framing this as a disagreement about readily available facts is incorrect and if you habitually interpret people's opinions/values this way, it warps your sense of reality to your own detriment.

I think anyone who wants to prevent me from getting hot coffee is not a nice or reasonable person, and I feel threatened by any implication that I would be in the wrong for making hot coffee if someone else spilled it. But these are facts about me, not about the rest of the world. As such, you don't have to accept them, but you can't invalidate them with facts about the world either.


It's true that the McDonald's case is often presented in a one-sided fashion that makes it look like it was a baseless lawsuit.

But it's also true that responses, like the one you linked to, are often one-sided as well. And what would you expect from an association of lawyers who make money launching such suits?

The truth is, its not as simple as either side tries to make it out to be. I think the Wikipedia article: https://en.wikipedia.org/wiki/Liebeck_v._McDonald%27s_Restau..., does a good job of presenting pertinent details from both sides.


Agreed, I had a much different perspective after reading the whole story.


From your link:

> McDonald’s operations manual required the franchisee to hold its coffee at 180 to 190 degrees Fahrenheit.

> Coffee at that temperature, if spilled, causes third-degree burns in three to seven seconds.

They might also be interested to know it was composed primarily of dihydrogen monoxide, a lethal chemical agent known to be the proximate cause of hundreds of deaths each year. They are describing boiling water.

Even if McDonald's did the wrong thing, it is a frivolous lawsuit. I serve boiling hot tea to all my guests, it isn't negligence. The case as described seems to be that McDonald's should be civilly liable for serving a hot beverage in a styrofoam cup to a customer who asked for a hot beverage and could easily detect it was in a styrofoam cup.


> I serve boiling hot tea to all my guests

You hand styrofoam cups out your window to guests you know won’t be staying? Curious social habit.

> a customer who asked for a hot beverage

Coffee is brewed at very high temperatures but is rarely - if ever - consumed at those same temperatures.

https://pubmed.ncbi.nlm.nih.gov/18226454/

For most people, getting a cup of coffee that is near boiling would make for a very unpleasant surprise.


The whole “tort reform” business came about because injury lawyers were supporting Democrats instead of Republicans. There was an episode of Citations Needed about it.

https://medium.com/@CitationsPodcst/episode-107-pop-torts-an...


This.

Huge fan of the ACLU in general, but some of the highly unpopular yet incredibly important work they used to do, for example defending neo-Nazi groups' right to protest, has been discarded in favor of popular causes.

While I understand why they have gone this way (who wants to defend Nazis in court?), it was a very important symbol of the organization's insane dedication to civil liberties. Taking a principled stand to preserve freedoms for those who are deeply, deeply unpopular is inconvenient and essential.


Keep in mind the ACLU has a lot of local affiliates. All too often, national orgs absorb the fundraising while local chapters languish.


I supported them for a long time as well, but they've waded very far from their original waters. These days the ACLU is incredibly politically biased, and the issues they focus on are often non-issues or low-priority ones. I am disappointed to see them, for example, file lawsuits against schools that are trying to ensure that only biological women participate in sports divisions for women, so that those sports are competitive and fair.

The ACLU's social media accounts are a mess as well. Their postings come off as unhinged and sue-happy, and the fan base of commenters has become so one-sided, that I think the ACLU simply caters to that vocal audience now. Maybe the change is not solely attributable to that - there might also be a new generational wave of inside actors that simply operate the ACLU in a more ideological manner.

I agree that some of their work is still great. But unfortunately it's been enough of a change that upon weighing the good and bad, I had to finally pull the plug on my recurring donations too.


Isn't their purpose to be politically biased?

Just because you share the bias doesn't make it unbiased


>I am disappointed to see them, for example, file lawsuits against schools that are trying to ensure that only biological women participate in sports divisions for women

Trans women are women. These schools are denying this with this idea, so it seems like a good use of the ACLU's resources to me. Morality aside, I don't know about the legal aspects of defending that principle in the US, but presumably the ACLU have some ground.

>, so that those sports are competitive and fair.

[drifting off the OP topic, but...] shouldn't school sports be about inclusion? what is your ideal of fairness here? For me, the pursuit and rewarding of certain idealised body types selected by narrow athletic criteria reminiscent of pageantry has never been 'fair' for anybody..

School sports ought to focus on inclusive physical self-improvement, help kids develop cooperative skills, resolve conflicts and work together for a common goal, that sort of thing.


> Trans women are women

Trans women are women from a gender perspective, and not from a biological perspective. Unfortunately, the biological perspective is what makes the separation of men's and women's sports a thing, because of inherent biological sexual differences (primarily testosterone). I mean, if they want to get around the issue, then let's go all in: disband men's/women's sports and just have sports. Then anyone can play with anyone. That's never going to happen though, for political, cultural, and safety reasons, so we're stuck with a situation where gender is bumping up against biological sex hard.


> Trans women are women from a gender perspective, and not from a biological perspective.

From a biological perspective, trans people lie in between male and female. Trans women are at an increased risk of breast cancer compared to cis men, and a lower risk of prostate cancer. Trans women (after years of estradiol) have significantly less muscle mass than cis men, while trans men have much more muscle mass than cis women. All of the biological differences are the result of sex hormones, the time at which they're introduced (pre-natal or puberty), and the duration of exposure.

That said, I don't think it's wise for trans-women to play in serious sports. If they win, they won't get the credit, they'll be told it's because they're trans. If they lose, well, then no one cares and it doesn't make the headlines. It's unfair and unjust, but honestly, sports have never been fair or just. Especially the Olympics, they're a selection ritual for celebrating people who are genetically optimized for some specific task. I don't understand why it exists except for out of tradition.


Yes, trans women are women. Agreed.

The challenge is that trans women are women who used to have comparatively vast quantities of testosterone in their bodies, giving them dramatically higher bone density along with all of the other side effects of testosterone on the body.

When they transition fully, the estrogen has a side-effect of preserving the bone density they had previously. Specifically, this gives them a massive advantage in combat sports like MMA. I won't get into the other musculoskeletal aspects that happen.

It's a vexing problem, because I don't like the idea of trans women not being treated like women. It sucks to think about that. It also sucks for non-trans women who are getting their skulls fractured in fights.

The reality is that fans of non trans female athletes aren't going to accept this. When a tiny percentage of the population is trans, and suddenly trans women start winning at the highest levels of female sports at an insanely disproportionate amount, it clearly indicates that being trans provides a massive advantage in female sports. I haven't found any cases of trans men dominating in male sports.

I'm pro trans rights, and also know that my view on this is deeply unpopular in the trans community. I'm not sure what the answer is for this. Sometimes, reality doesn't mesh with our ideals.


I think the answer has to be to find a different way to categorize people into groups for sports competitions.

Would it work to just group people by bone density and weight class?

Or for MMA, just some sort of scale of ass-kickingness. I would have to be in the can't punch out of a wet paper bag class, which I don't think would get shown on TV. ;)


Or you know, have all individuals compete with each other regardless of gender. And if we're not happy with the above suggestion, then we need to define why such a competition would be unfair. Once we can answer that "why" without gender or sex being involved, then everyone will be happy because it'll be as fair as can be had in a diverse society with a range of opinions. That'll include people that are not convinced on the gender/sex debate.

The problem now is that we're being polarized on an emotional issue with both sides being demonized. E.g. One side says the other is hateful of trans people, the other side says trans is being pushed on them unfairly, etc. If we rather focus on the facts and amoral concepts (e.g. testosterone count), and drive opinion/policy based on that, there is good chance that we can all coexist without having to force opinions and understanding on both sides where it's largely unwanted. Only then can we come together and have a single, happy view on the topic as a society.


Everyone discussing this issue needs to be very upfront and clear on whether they're talking about sex or gender. Sports leagues are explicitly about biological sex, and not gender. Your statement is not applicable here and is often used to add confusion and outrage while preventing any forward progress.

Perhaps there needs to be new vocabulary to describe sex and gender separately but that's a different topic. As for fairness, this is the best we can do on a broad scale between male and female because of the vast differences. After that we leave it to the individual to decide which sport best fits their physicality and interests.


>These days the ACLU is incredibly politically biased

I was going to post the same in anticipation of the partisan downvotes that you're receiving. Without going into specifics, this sums it up nicely:

>It’s not that the left shouldn’t have opportunities to speak up against the president’s agenda -- of course it should. But the ACLU shouldn’t be its political bullhorn. The organization’s legal independence gave it special standing. By falling in line with dozens of other left-leaning advocacy groups, the ACLU risks diminishing its focus on civil liberties litigation and abandoning its reputation for being above partisanship

One issue in particular is the ACLU's interpretation of the second amendment, which they do not fight for with the same fervor as the first.

1.https://www.realclearpolitics.com/articles/2018/02/08/the_ac...


Their lack of support for the second amendment is the only reason I don't support them. It proves that they're politically slanted and not truly for my civil rights. Unfortunately the NRA has drifted from their core issue to bashing non-conservative candidates, so I can't support them either.


So does this imply that Amazon is not ready yet, and instead of saying "wait, wait for one more year (please?)" they're making a publicity stunt, pretending to be good citizens?


> ... the software incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime.

Lol, that's because it's Amazon AI. Do you expect better from the makers of Alexa?


It's not like Amazon is bad at AI. The problem they're solving for is legitimately hard, which is why it's so frustrating that it's been put in production in a way that can really harm people.


If I'm not mistaken there's a configurable detection threshold and/or a match score.

But most likely people and organizations will think this works like the movies.

I wonder what the score of the actual faces would be if we added them to the test set (the same faces, but not the same photos). Would bigger test sets have photos that match the targets better than the targets themselves?
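For the record, the configurable knob is FaceMatchThreshold on the SearchFacesByImage API; a sketch, with placeholder collection and image names:

    import boto3

    # The configurable threshold: FaceMatchThreshold on SearchFacesByImage.
    # Collection ID and image location are placeholders.
    client = boto3.client("rekognition")

    response = client.search_faces_by_image(
        CollectionId="mugshot-collection",
        Image={"S3Object": {"Bucket": "my-bucket", "Name": "probe.jpg"}},
        FaceMatchThreshold=95,  # lower it and the list of "matches" grows
        MaxFaces=10,
    )

    for match in response["FaceMatches"]:
        print(match["Face"]["FaceId"], match["Similarity"])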


> Nearly 40 percent of Rekognition’s false matches in our test were of people of color, even though they make up only 20 percent of Congress.

What percentage of the arrest photos were people of color? Was it significantly more or less than the 20 percent people of color in congress or about the same?


"Using Rekognition, we built a face database and search tool using 25,000 publicly available arrest photos. Then we searched that database against public photos of every current member of the House and Senate. We used the default match settings that Amazon sets for Rekognition."

Do you understand what a badly designed experiment this is?


Depending on the outcome you desire, this could be a perfectly designed activity.


Sounds to me like Amazon will use the year to improve their poorly performing software.


Step 2: End Ring contracts with police departments.

https://www.eff.org/deeplinks/2019/08/five-concerns-about-am...


AWS is running a nice screen here. I recall reading about Rekognition being documented as having accuracy problems when individuals in question had darker skin [2,3].

>> "The latest cause for concern is a study published this week by the MIT Media Lab, which found that Rekognition performed worse when identifying an individual’s gender if they were female or darker-skinned." [1]

I can't really comment. Just recalled this in the memory banks and thought they might address this directly [they may have].

1 - https://www.theverge.com/2019/1/25/18197137/amazon-rekogniti...

2- https://www.media.mit.edu/articles/amazon-is-pushing-facial-...

3- https://www.marketwatch.com/story/ai-experts-take-on-amazon-...


There was a discussion on HN last week that I can't find, where I was enlightened to learn that one of the big problems with facial recognition is proper lighting: without it you can't really build good models or really use the image.

As an extension of this, photographs of individuals with darker skin require more lighting than photographs of individuals with lighter skin.

I don't know all of what goes into the ML for facial recognition and I am sure there are people far smarter than me working on it (and making way more money than me to boot), but I guess my thought here is that some variation of Poe's Law applies. I know that people are quick to jump to condemn something as racist but sometimes there really are just honest mistakes.

I have a hard time believing that anyone at any level of the AWS structure set out to produce racist facial recognition; rather, it may have just been an honest oversight, and rather than rushing to crucify them we should look at it as a learning opportunity to help develop the field of facial recognition further.

EDIT: I wanted to clarify that although I don't think it was done purposefully, that doesn't mean it doesn't bespeak a problem; my intention was rather to suggest that we should sometimes temper the often strong reaction produced by labeling something racist, and focus our efforts on identifying and solving the issue rather than trying to act punitively. To forestall objections: I do recognize this is an issue that requires correction, and that it bespeaks a larger societal problem that has real consequences for real people every day, but in my experience we will get more progress by attempting to work together in a spirit of cooperation rather than a spirit of anger and vengeance.


If you lead a team building a facial recognition project, and you either elect not to run/look at experiments on skin color, or you ignore your findings on skin color, and decide to sell that product to people with the express and upfront intention of using it to target the use of government-sanctioned force against individuals, there's no such thing as an "honest oversight." You have actively made a decision to endanger lives. Hanlon's Razor (which I think you mean rather than Poe's Law) doesn't apply when you know you're giving violent people a force multiplier.


The technological implementation is almost certainly an honest mistake, but the marketing and sales around it are not. We've known about the importance of lighting in photography for more than a century, and the computer vision industry has been battling the issue for decades in manufacturing because metal is shiny. Kodak even used to manufacture different films to better represent non-white people, and camera sensor manufacturers today have entire teams with a swarm of diverse models dedicated to accurate color reproduction of skin tones across races. It's been an ongoing issue for people of color ever since TV became a thing, because those technologies were initially very poor at color reproduction as well, but that didn't stop newscasters from jumping face first into televised manhunts.

We've been around this block several times before and while the quarry may change from photo accuracy to ML driven facial recognition, the hunt does not. There's no excuse for selling technology to industries facing (or creating) life and death situations when the bugs are so obvious.


This is EXACTLY what people are trying to raise awareness of. This is implicit bias. It doesn't matter if people have "good intentions" or made "honest mistakes" if the tool is implicitly biased. This is why having a diverse team working on a project is important, because it will help call out these issues early rather than after they have been put in production.

It's also not purely a technical problem, if you feed the model only pictures of white and asian male college students then it's no surprise when you get a model that biases towards recognizing white and asian male college students (which is exactly how several prominent models were trained).


Biased algorithms/models are particularly dangerous because they tend to provide a veneer of objectivity (plausible deniability if you want to put a more cynical lens on it) that could frustrate attempts to hold users accountable.


Agreed. A huge problem I haven't been able to think through is the (already happening en masse) practice of targeting black people because the data says so. Over time, however, police departments invest data-capture resources in "bad areas", which may have a lot of black residents. I wonder how to answer the defense that some black areas are obviously bad, and why would a department put surveillance resources in a good neighborhood?

I personally feel it's wrong but that's one thing I've always got hung up on in building a critique.


Mathematician Cathy O'Neil's book "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy" is a good introduction to the implicit biases in machine learning:

https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction


That book is incredibly misleading.

This has nothing to do with machine learning. It is a simple correlational situation.

If African Americans have, on average, poorer credit ratings, then correlational models will begin to equate race with poor credit ratings, which will impact their ability to get credit, hence feeding back into that mechanism.

...of course RACE isn't allowed to be factored into financial applications, so the applications will often use other data points, like zip code, that end up having a correlation to bad credit as well as race. ...often producing the same result.
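A toy illustration of that proxy effect (synthetic data, invented rates): remove race from the features entirely, train only on a correlated stand-in like a binary zip-code indicator, and the historical disparity comes straight back out of the model.

    # Synthetic demo: the protected attribute is never given to the model,
    # but an 80%-correlated proxy (zip code) reconstructs it anyway.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    race = rng.integers(0, 2, n)                               # NOT a model feature
    zip_code = np.where(rng.random(n) < 0.8, race, 1 - race)   # correlated proxy
    # historical approvals reflect past bias: 70% for group 0, 40% for group 1
    approved = (rng.random(n) < np.where(race == 0, 0.7, 0.4)).astype(int)

    model = LogisticRegression().fit(zip_code.reshape(-1, 1), approved)
    preds = model.predict(zip_code.reshape(-1, 1))
    for r in (0, 1):
        print(f"group {r}: predicted approval rate {preds[race == r].mean():.0%}")
    # prints roughly 80% for group 0 and 20% for group 1, with race never observed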

The problem isn't with the models - it's with reality.

The author famously said "Math is Racist". It's hard to get over such stupidity.


I don't think you're making the case you think you're making.


It seems like they are making the case that if reality is biased then models of reality will retain that bias.

If so they seem to make the point well.

If the only information you have about a loan applicant is where they live, your decision will be 'biased' if the location of where someone lives is correlated with other factors (as opposed to, say, the fact they live on a flood plain means don't give them a loan).

In this context, saying "Math is Racist" is like saying "Physics hates Fat People" because gravity disproportionately affects heavier people. Accurately reporting what is happening is not biased, making decisions without considering [edit: or not making a decision because you didn't consider] the context is biased.

Maths is a tool (well, collection of tools), and the onus is first on the tool user to use it in a fair way. Yes it is important for educators and tool creators to be mindful of how these tools will be used in practice, but there is a big jump from that idea to "Math is Racist".


Isn't this a similar argument to one that could be made against race- and gender-based affirmative action? I don't understand how organizations like the ACLU are critical of face recognition tech because it reinforces implicit bias that engineers have, but then turn around and support race- and gender-based affirmative action that similarly reinforces implicit bias where PoC (but not Asians, for some reason) and non-males are presumed to be disadvantaged purely due to their identity.


I'm not sure what argument you are referring to here (if it was one above).

I think these organisations are criticising the tool builders for creating tools that are easily misused (or are created with unreasonable limitations, like only being valid for university students at one university, but are sold as widely applicable).

Supporting affirmative action initiatives like the ones you list is trying to address the biases that exist in reality. I think this is often a bit backward (not addressing the root cause), but it can be expensive (in time, effort, money, politics) to address the actual root cause, so these programs aim to address the bias at the place it manifests.

This is a similar (dare I say pragmatic?) argument to "it would be cheaper and more effective to just give everyone a no-strings-attached payment each month than to provide means-tested payments to those who need help".

Determining if these arguments are correct is a different thing altogether, and I have no idea if these programs are cheaper and more effective than dealing with the root problem, or if it's even possible to define and address the root problem in the first place!

The two things you contrast above are fundamentally different - one is criticising tools and tool builders, the other trying to address perceived biases in the world.


When you say "[non-white/non-males] are presumed to be disadvantaged", have you talked to or listened to black or female academics? I follow ~4 black academics on Twitter, and each of them has contributed to the #BlackintheIvory topic. Their identity plays a huge role in how others treat them.

> but not Asians, for some reason

Asian people are distinct because so many of them have immigrated recently, and immigration requirements favor educated and well-off folks. That masks many issues because they should have better than average outcomes due to better than average education and skills.


That's why "racism" has been redefined. Because it makes it morally convenient in the quest to "undo" past injustice.

On a side note: welcome to the Twilight Zone.


> so the applications will often use other data points, like zip code, that end up having a correlation to bad credit as well as race. ...often producing the same result.

You realize this too is illegal right? The law doesn't say "you can't use race" - instead it says (paraphrased by the Brookings Institute): "Are people within a protected class being clearly treated differently than those of nonprotected classes, even after accounting for credit risk factors?"[1]

O'Neil points out that math is often used to obfuscate this (whether it be deliberately or not). This is a valid point, and one that people who think of math as a values neutral tool should consider.

I didn't love the book, but it's difficult to make the argument that she is stupid.

[1] https://www.brookings.edu/research/credit-denial-in-the-age-...


> As an extension of this, photographs of individuals with darker skin required more lighting than photographs of individuals with lighter skin.

This is plain physics, no? Things are darker or lighter depending on the amount of light they reflect.

> I know that people are quick to jump to condemn something as racist but sometimes there really are just honest mistakes.

There are far too many things at play here. In a fair and just society, this kind of issue (like the Xbox Kinect issue) would be met with an "oops, forgot to account for individual light absorption variance." A fix would have been issued, and that's that.

Now, the problem starts when you begin digging. Why was this problem not caught? Because QA teams didn't catch it. Why didn't they catch it? Because the team wasn't very diverse, so testing failed to catch the problem. Why wasn't the team diverse enough? ... and now I've entered a societal rabbit hole that's far too complex for this post.

> I have a hard time believing that anyone at any level of the AWS structure set out to produce a racist facial recognition

Yes, highly doubtful. No benefit, and major issues if caught. It is far more likely that the dataset itself was biased. Why was it biased? ... and there we go again.

> we should sometimes temper the often strong reaction produced when labeling something racist, and focus our efforts on identifying and solving the issue rather than acting punitively.

Agreed, in principle. In practice, this produces no results in a mostly racist society. Companies (and politicians) will listen to outrage, they won't listen to well articulated and well-reasoned comments.


>> As an extension of this, photographs of individuals with darker skin required more lighting than photographs of individuals with lighter skin.

>This is plain physics, no? Things are darker or lighter depending on the amount of light they reflect.

Not quite. I don't know all the technical terms, but apparently, photographic technology early on settled on some standards and made some design choices that made it easier to photograph white people than black people. That set a long-term precedent and standard for how film should work that persisted for a long time, even to the present.

>>>Until recently, due to a light-skin bias embedded in colour film stock emulsions and digital camera design, the rendering of non-Caucasian skin tones was highly deficient and required the development of compensatory practices and technology improvements to redress its shortcomings. Using the emblematic “Shirley” norm reference card as a central metaphor reflecting the changing state of race relations/aesthetics, this essay analytically traces the colour adjustment processes in the industries of visual representation and identifies some prototypical changes in the field...

https://www.cjc-online.ca/index.php/journal/article/view/219...

Lay article on the topic:

https://www.buzzfeednews.com/article/syreetamcfadden/teachin...


I think the main thing is that even if the technology were improved and it could be proven that the biases were so low as to be negligible or that it basically identifies people equally well, the opposition would be on the technology as a tool.

People want some privacy in public. A tech that can track or backtrack people's movements is kinda creepy in a few ways.


Yep the ACLU did a study about this as well: https://www.aclu.org/blog/privacy-technology/surveillance-te...


> We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.

That's a little weak. If they were serious, the moratorium would extend indefinitely, or until such rules were in place.

One year might just be long enough for the fervor to die down, so they don't take such a PR hit when they resume sales.


What about in the UK? At the BLM protest, the police rub around with their camcorder on the stick to justify kettling people for 4+ hours and squash people's will to protest. They require you to show your face or arrest you. All because they want to use Rekognition to cross reference everyone's face.


That is troubling. Do you have references for that?


https://twitter.com/alessadavison/status/1270430150254084096...

Plus corroborating anecdotes from people I've met at the protests.

I always make sure I'm out of the way when these monkeys start kettling protestors


Care to elaborate why it's troubling?


*run


[flagged]


Care to elaborate?


That it is good that the police are holding people attacking the police to account?

Why does that need an explanation?


I reject the implication that police will only use facial recognition on people who are attacking the police, or that the repercussions for being present at (not necessarily instigating, or using violence during) a violent altercation will be proportional.


Not OP - but this is really good policing in my view. If the data stays private and not used against people in an unfair way subsequently, then I see no reason for it to be seen as "worrying" as the other commenter said.

E.g. they can cross-reference timestamps and see which individuals were close to violent altercations; they can then build up a solid case for whom to interview and investigate further. Or maybe an actual crime happens during the protests and they need to investigate further, etc.

Honestly, as a side note, I personally find it very worrying that I have to justify such policing to a tech crowd. We can have responsible use of facial recognition and data-gathering / profile-building, and it need not be a privacy issue. Right now, we sit with a situation where violent/aggressive/illegal behavior is allowed to transpire during chaotic protests and slips through because of the chaos and the sheer scope and size of the protests. There is no way that traditional "policing" can combat that, and I fear we're emboldening criminal elements to take advantage of peaceful protests because we're unnecessarily tying the hands of the police over nebulous "privacy" concerns.

And yes, burning/looting is a criminal act and shouldn't be tolerated during protests.


You don't need to techsplain this in hackernews of all places, it's cringey. You must be a perpetual contrarian, it makes you special.

There is a direct line between police use of Rekognition and the pretence to squash the freedom to protest. But you knew that, you just purposely have your blinders on to protect your worldview. Enjoy your leather sandwich.


You broke the site guidelines extremely badly in this thread. We ban accounts that do that. You've posted some substantive comments in previous threads, though (as well as some unsubstantive ones—please don't do that). I'm going to give you the benefit of the doubt and not ban you, but if you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.


Just saw this. Sorry I got heated that day. Didn't have any time for these type of comments. Going forward I'm just gonna ignore and move on. Or just say what I need to say within the confines of the rules. Thanks.


For a start police in the UK have been videoing crowds with Handy Cams and mounted specialist mobile CCTV for decades for manual review. They have proved invaluable in NI for example in securing convictions of violent persons.

No one is saying there isn't a right to protest. The police have a duty to enforce the law and when protests turn violent they have the support of the majority to enforce that.

This method means minimal engagement with the crowd.


[flagged]


There was widespread violence and damage to public space. That you were outplayed by the police while exercising your little tantrum has no bearing on a discussion about police using video technology effectively to make policing safer and finding culprits more reliable.


The same logic, excuse and thinking was used to mince Khashoggi. Just stop. You're embarrassing.


> The same logic, excuse and thinking was used to mince Khashoggi. Just stop. You're embarrassing.

You are a cartoon


It's not ok to break the site guidelines even if another account was behaving badly. That only makes this place even worse and contributes to its self-destruction. I know it's not always easy, but if you'd please not do that, we'd appreciate it.

https://news.ycombinator.com/newsguidelines.html


What did they say?


If you want to see dead posts, go into your settings and set showdead to Yes.


Ah thanks very helpful


This reads to me as simply “oops bad timing this year, let’s try again next year”. Yeah, sure, whatever Evilcorp.


I believe that's a fairly accurate assessment. What's more, the whole thing is kind of a red herring.

Facial recognition technology is, after all is said and done, probably illegal in any country that implements the protection of basic human rights in its national laws. If countries (including the USA) do not, that says enough on its own about them. Nor has the idea of (national) exceptionalism ever produced a more equal and/or fair society. AFAIK, not a single one in all of history.

For those who like to argue that such technology could be legalized when people (collectively) agree to its use through political consensus (aka "the people", through politicians, voted for it), there are good reasons why basic human rights are defined as "inalienable". Regretfully, many countries have nonetheless ignored that fact whenever it suited the personal interests of politicians and those who stand behind them in the shadows.

“Fascism should more appropriately be called Corporatism because it is a merger of state and corporate power” ― Benito Mussolini


Wonder if that extends to the Bodycam analysis services running under a different brand, which allows searching for people based on various criteria and matching against a watch list?

https://www.ibm.com/support/knowledgecenter/SS88XH_2.0.0/iva...


Wasn't yesterday's facial recognition news all about how IBM weren't making enough money ... oh, hang on <check notes> decided to take a moral stance against law enforcement use of facial recognition?

(while cynically trying to link an organisation who built themselves providing IT services to nazi genocide, with the ethical side of the current police brutality protests... :sigh: )


built themselves providing IT services to nazi genocide

Seriously?


They did make many sales to the Nazis, of computers as well as whatever software stuff that went along with that, including stuff designed to support their Jew/+ extermination operations. I recall it from an article (on HN?) some time ago about U.S. companies that helped the Nazis. But there's tons of stuff about it on Google.


of computers as well as whatever software stuff that went along with that

Well, that is categorically wrong, given the history of computing.

But anyway, the implication in the previous comment was that IBM were built on the business of exterminating Jews, which is frankly ridiculous given that they started business 30 years before the Nazi party even came to power.

Note, I'm not claiming IBM didn't get involved with the Nazis. The German subsidiary certainly did business with the Nazis, including with their processing of Jews and minorities. Thomas J Watson even received an award from them. But, IIRC, he realised he'd been set up as a publicity stunt and gave it back. Once the war started, the German subsidiary bought themselves out and became independent.

It should be noted that IBM in the US has a history of introducing policies ensuring equality and diversity in employment that precede similar federal legislation, sometimes by decades.

And yet it's only the Nazis thing everyone brings up.


Has anyone ever here actually demoed Rekognition? I did two years ago, maybe.

From that, I felt like it doesn't work and shouldn't be used in production, never mind police production.


Yes, we ran it against our borrower image set and the results were atrocious, though this was a couple of years ago now. Never tried again after that.


> we ran it against our borrower image set

Why?


It was experimental for trying to help identify repeat borrowers, that way we could link to their previous loan.

Separately, we experimented with various vendors' "face detection" (not whose face, but rather just "is there a face") just to see how many faces appeared in a photo, because for group loans you needed at least 75% of the borrowers present in the photo. If this didn't happen, it meant the loan didn't get posted and someone would have to go back and get all the borrowers together again for another photo, which is laborious and inefficient. Much better if you could give the feedback upfront. Granted, as I noted in another comment, at the time all the major vendors' tools had abysmal accuracy and we abandoned the effort.
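For context, the face-counting check described above is only a few lines against a detection API. A rough sketch using Rekognition's DetectFaces (which reports faces, not identities); boto3, configured AWS credentials, and the file/function names are assumptions:

    import boto3

    def enough_borrowers_present(photo_path, expected_borrowers, threshold=0.75):
        """Return True if at least 75% of the expected borrowers appear in the photo."""
        client = boto3.client("rekognition")
        with open(photo_path, "rb") as f:
            response = client.detect_faces(Image={"Bytes": f.read()})
        faces_found = len(response["FaceDetails"])
        return faces_found >= threshold * expected_borrowers

    # e.g. give upfront feedback before the loan is posted:
    # if not enough_borrowers_present("group_photo.jpg", expected_borrowers=8):
    #     print("Retake the photo with more of the group present")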


Interesting, thanks!


I see this as an extension of the Facebook <moderating/censoring> discussion, which is really a broader question of what moral obligations do corporations have beyond following the law and trying to provide the optimal product to their consumers?

Also there seemed to be no substantive discussion prior this about the police using Rekognition until it became a hot button issue. What will the widespread effects be if corporations start allowing their decisions to be governed by <outrage of the mob/principled consumer pressure>?

Finally, I wonder how they will implement this; after all, I can sign up and start using any AWS service with just a credit card. What's to stop police departments from simply using a corporate card and signing up for a different account? Also, does this apply just to local PDs, or does it extend to the FBI, NSA, CIA, or other three-letter government agencies?

Disclaimer: these comments are intended to be observational, not advocational.


> what's to stop police departments from simply using a corporate card and signing up for a different account?

Surely there's better oversight in police management to prevent that?

Well, you know, except in Australia where the cops lied to the public about using clearview:

https://www.abc.net.au/news/science/2020-04-14/clearview-ai-...

Or New Zealand:

https://www.rnz.co.nz/news/national/416913/police-stocktake-...

Or the UK:

https://globalnews.ca/news/6969069/london-police-clearview-a...

And I'm sure all the 600+ US law enforcement agencies here went through proper approval channels and oversight:

https://www.nytimes.com/2020/01/18/technology/clearview-priv...

:sigh:


> there seemed to be no substantive discussion

Just because you don't know about it doesn't mean it doesn't exist.

Try google "aclu facial recognition", "eff facial recognition".

congress.gov returns 132 bills introduced in the last two sessions going back to 2017. If you read the titles, it's clear many of them are related to transparency and respecting rights to privacy.

https://www.congress.gov/search?searchResultViewType=expande...


I didn't mean to imply that there was no discussion or advocacy around the issue, but rather that it was not high in public awareness or concern.


I don't really see how this interpretation of what you wrote makes sense.

"Also there seemed to be no substantive discussion prior this about the police using Rekognition until it became a hot button issue."

If you didn't mean that it wasn't an issue that was high in public awareness or concern wouldn't it be tautological that it wasn't high in public awareness or concern until it was a hot button issue? Like, the definition of it becoming a hot button issue is that it's high in public awareness or concern.

Am I misinterpreting something? Did you mean something else and I just got it wrong?


I think it's an absurd assertion to say "there was no substantive discussion prior to this about police using Rekognition". Seems to me that people are now becoming aware of the problem with this, and Amazon has implemented a one-year (not permanent) moratorium in order to allow for more discussion and legislation around the issue. There are much bigger implications than the Facebook issue if you are misidentifying criminals as a result of bad facial recognition, so you can't really equate the two.

Also, to call this "mob outrage" or "principled consumer pressure" is delegitimizing the entire thing. Do you really genuinely think this happens any other way? Seems like when a lot of people start to have a problem with something, it makes sense to have a moratorium and investigate improvements/solutions.


Similarly, I would argue that we don't want to be reliant on corporations to determine what the lines are themselves, as they should not be de-facto moral authorities. Determining what is acceptable needs to come from our society as a whole, through reasoned debate and proper functioning of government.


> through reasoned debate and proper functioning of government.

Yikes. Is there a fallback option?


The other options seem to be the benevolence of Bezos or Twitter condemnation, so not really.


Still better in some cases than relying on government taking action :-(

Note that this is a one way relationship, corporations must comply with laws, but can also do other things.


I'd say "better" as in "more likely", but I'm still not a fan of a handful of SV folks being the moral decision makers for the world as the general order of things.

I'll gladly accept additional benevolence from them though! Just not as the sole power in the area.


> corporations must comply with laws

Sadly, global corporations seem to be mostly able to choose which laws they want to comply with by shifting jurisdictions at will... "Oh no, for _tax purposes_ we're an Irish company! For privacy purposes we're European. For Intellectual Property purposes we're a Delaware C Corp. And, ummm, that department that doesn't exist is officially deputised by the Saudi Royal Family."


Ultimately, the French option from the late 1700s...


It'd be kinda nice if we trusted the police to act as "moral authorities" instead of trampling the public's rights and privacy (and necks) with wilful abandon whenever they feel like it...


We can only get to that point via systemic processes that ensure the proper people end up in positions of power, with proper accountability, acting within the limits of a shared morality.

Hoping that police/tech companies/military/etc. are moral isn't an actionable plan.


I find that I don't shoot random innocent people mostly because I'm not a homicidal maniac but also because I will probably lose my house and everything I've worked for my whole life possibly including my freedom.

People in the armed forces need to be put in the same boat I'm in.


> Finally, I wonder how they will implement this; after all, I can sign up and start using any AWS service with just a credit card. What's to stop police departments from simply using a corporate card and signing up for a different account?

Maybe they could work some kind of penalty into the contract? Something like "if you're working on behalf of a police department, you are forbidden from using our facial recognition services. If you sign up despite this term, we will cancel your account and retroactively bill you $100,000 or your usage at a rate 1000x normal, whichever is greater."


>what moral obligations do corporations have beyond following the law and trying to provide the optimal product to their consumers?

This question just does not work when you consider how much companies spend lobbying.


I don't think there is any obligation to do more than follow the law. I believe Amazon here is trying to go one step further and do what it believes is right. Clearly, corporations are allowed to put more stringent requirements on themselves if they believe that is right.


>what moral obligations do corporations have beyond following the law and trying to provide the optimal product to their consumers?

Corporations have no such morals. They are profit seeking social constructs. Breaking the law is often a profitable cost of doing business, as is making an ever increasingly shitty product when there is little to no competition.


Moral obligations, almost by definition, do not exist. That goes for anyone, not just corporations. They are the things you should do even though there is no reason to do them.

Arguing corporations have no morals is being pedantic. The question is clearly, what moral obligations _should_ they have?


>Moral obligations, almost by definition, do not exist

...

>The question is clearly, what moral obligations _should_ they have?

I'm confused.

If moral obligations do not exist as you claim, why should we pretend they should exist?


Because it's in the common interest. The social contract doesn't "exist". It's an abstraction that people have agreed upon, in many forms, because it benefits the whole. What things we codify end up as government regulation. What things we don't are moral obligations.


Well, considering that corporations are people now, I would say they have all the same moral obligations as people.


It's probably the legal and PR teams' fault, but this surely could have been worded to sound less like potential corporate doublespeak:

"We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested."

Dead giveaway is that Legal and PR teams relentlessly edit out self-agency.


I remember at the AWS summit maybe two years ago, they were casually showcasing how some police depts were using Rekognition. Oh my, what a culture shock. How can you basically foreshadow 1984 on stage without blinking an eye?


"We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology"

Or you know... you could do it yourself. Ethics don't have to come from regulations.


That's exactly what they're doing?

But Amazon can only control its own offerings. It can't control what any other company that offers facial recognition does, and they probably know that as soon as AWS steps back, some other AI company with less ethics (or cynically, less care of public pushback) will swoop in without hesitation. The only way to stop that is regulations.


They've had plenty of opportunities to do the right thing and they actively worked against it. In the last year people were so upset about Rekognition that they organized a shareholder proposal (it was defeated)[0]

While it's good that they are doing the moratorium, I think it's hardly applause worthy for them to have needed this much backlash to act.

Also, I'm not impressed by the argument that other people might offer face recognition. This is about Amazon's actions.

[0] https://techcrunch.com/2019/05/28/amazon-facial-recognition-...


They're hamstrung by their shareholders. This is why it's impossible to find a public company that cares about ethics in any context other than potential blowback.

Amazon can keep the sharks at bay for a year and cry for help, but if regulation is too late, they're going to be eaten for lunch by shareholders and they know this.


Despite what many people believe, shareholders don’t actually work like that. They care about making money, but no one is going to sell Amazon because they’re not continuing a single low profit line item. No shareholder is condemning Apple for not making Rekognition, and Amazon wouldn’t be killed for dropping it.


> Despite what many people believe, shareholders don’t actually work like that.

Shareholders do determine whether a company does something ethical or something profitable. And by the numbers, they choose profit unless it would cause public outcry (and sometimes despite that).

Last year, only 2% of Amazon stockholders voted to ban the sale of facial recognition software to the government, and only 28% even wanted a report on possible threats to civil liberties.

https://www.geekwire.com/2019/amazon-shareholders-proposals-...

> They care about making money, but no one is going to sell Amazon because they’re not continuing a single low profit line item.

This is true, but I don't understand how it's relevant.

> No shareholder is condemning Apple for not making Rekognition

Nobody is condemning Apple because it would be more of a risk for them to develop it, since they would have to do a larger pivot from their current core product. Amazon in contrast is already in the business of selling cloud services, so it's a product with a straightforward path to profitability.

> and Amazon wouldn’t be killed for dropping it.

My point is that Amazon won't drop it in the long term. The 1-year moratorium is to cover their butt until they can figure out how to sell the technology to the police without becoming the scapegoat for the recently news-blasted civil liberties movements. If I were in their position I'd do the same.


If Jeff Bezos decided tomorrow that Amazon would never sell any face recognition software, that’d be that. The only recourse from shareholders would be to try to replace him as CEO, which isn’t going to happen over this issue. I agree that shareholders aren’t going to proactively choose to stop it, but they generally give companies a fairly large leeway for what decisions to make.


How does this work with Vanguard etc.? Do they vote too?


What prevents a private company (ex. Clearview) from using Rekognition to accomplish the same thing for the police as a government-contractor?

Without any kinds of laws, wouldn’t things like this incentivize new niches to popup to milk money from the government?


Isn't it open to anyone with an AWS account? So how are they even trying to implement this and what would be stopping any third party from using this to submit reports to law enforcement?


If you're spending enough on AWS, the interaction isn't completely faceless. You'll usually get an account manager assigned to you and they might suspect something is up if they have a lot of mug shots coming through.

If it's a third party with a new name unrelated to law enforcement, that complicates the chain of custody and probably wouldn't be worth it to any agencies to set it up to skirt a 1 year moratorium, even if someone at the agency thought it was a good idea to try and flout Amazon's policies (they definitely won't, agencies just don't move that quickly).

That being said, there's no way to guarantee that it won't be used, but it would be difficult for LAPD to be running at scale with nobody raising any flags.


>they might suspect something is up if they have a lot of mug shots coming through

Can AWS see the images used with Rekognition?


https://techcrunch.com/2020/06/10/amazon-rekognition-morator...

> Amazon is known to have pitched its facial recognition technology, Rekognition, to federal agencies, like Immigration and Customs Enforcement. Last year, Amazon’s cloud chief Andy Jassy said in an interview the company would provide Rekognition to “any” government department.

> Amazon spokesperson Kristin Brown declined to comment further or say if the moratorium applies to federal law enforcement.


AKA 'We're going to sell it to police but only after people stop looking'


What is the difference between Rekognition and Clearview AI? I'm assuming that Rekognition is just using government photo databases rather than social media?

It seems that Amazon has a far better reputation on HN compared to Clearview AI. Is that deserved?


Rekognition is just their normal CV offering you can use for anything from what I know.


Anyone with a modicum of skill and a few GPUs can do what Rekognition does using code freely available on GitHub and public datasets. This cat is _way_ out of the bag.
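For illustration, here's roughly what that looks like with the open-source face_recognition package (dlib embeddings under the hood); the file names are placeholders:

    import face_recognition

    known = face_recognition.load_image_file("watchlist_photo.jpg")
    crowd = face_recognition.load_image_file("crowd_photo.jpg")

    known_encoding = face_recognition.face_encodings(known)[0]  # 128-d embedding
    crowd_encodings = face_recognition.face_encodings(crowd)    # one per detected face

    # compare_faces thresholds Euclidean distance (default tolerance 0.6)
    hits = [face_recognition.compare_faces([known_encoding], enc)[0]
            for enc in crowd_encodings]
    print(f"{sum(hits)} of {len(hits)} faces in the crowd matched")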


It's always a slippery slope when companies morally compel themselves to block use cases from their service.

I welcome this move from Amazon, but I hope it doesn't foreshadow more moral bans in future e.g. spurred on by the next angry mobs who will try to limit free speech in society.


I hope unpolished technologies that put already marginalized groups into more potentially deadly and unnecessary interactions with the police get binned.

If anyone has a hard time empathizing here, imagine your kids in the false positive person's shoes.


So I have a schizo view on subjects like this:

1. I am against places that say "photography prohibited - private property" — if I can see it, I should be free to photograph it.

2. I am against ANY use of facial recognition, ever, anywhere. I own my face and I am allowed to keep it private if I choose to.

So, yes, completely schizo, and I realize this.

But it's not an evenly distributed spectrum of a problem. It's a weighted web of nuanced issues.

I just don't know how to balance it.

I'd love to discuss this if anyone is open.


If you're showing your face everywhere, you aren't keeping it private.

You are in control of whether your face can be seen


What opportunities exist for use of facial recognition by protest groups and citizen watchdogs? Any products? Any success stories?


This reminds me of an FBI agent who liked to keep his gun in his back pocket, bulging out, just to impress people who knew what he did. I can only imagine how a few bad apples within the police force would sit behind a computer playing with Facebook profile pictures and matching them against Rekognition.


Does anyone know how exactly the police integration with Rekognition is done? I mean, they must have it integrated into their IT systems, right? Who did that integration? AWS itself? Some consulting companies? Or the PD's tech department?


Then won’t departments just go to the highly secretive companies like Palantir?


This is why the blog post includes a note on how they want it to be illegal. It'll still happen, but in order to keep it secret, they'll need to employ it less often.


There's a lot of talk about the technology - but this aside, why are police even using this?

We don't need this kind of hyper-surveillance for common crime and people with warrants; it's just too much of an intrusion.

I can see this being used in certain places for 'high value individuals' such as those marked by the FBI (major crimes, multiple murders) or literally 'terrorists' - but for regular crime, I think it's way too much.

We can't be under constant surveillance by the police computers that's just no way to live.


Arguing in the other direction: it turned out to be very important that the Floyd killing was captured on video and the killer identified.

Since that happened, there have been dozens (at least) of murders and vicious, life-changing assaults, most captured on video. I'd be very happy to see every one of the bad guys identified, and this seems like it would be effective toward that end.


Also check out the 'Data for Black Lives' organization; they have been working closely with the ACLU on this matter.


"Police use" == contractors providing subsequent services to said police and bilking taxpayer money for said service, likely with companies founded by public servants' significant others to do said billing (remember, this actually happened in 2009, with wives of bankers setting up companies to get bailout funds).


"We are implementing a one-year moratorium on police use of Rekognition" ... until this whole thing blows over. /s

They're leaving money on the table, but it will still be there in a year, and they'll only miss whatever Amazon's functional analog of "compound interest" is.


Yeah holy... So in a year from now the cops can resume evil behaviour?


My guess is that they don't view facial recognition as an inherently bad thing, but they do view it as bad when wielded by bad actors (such as the current police environment).

The optimistic view is that this moratorium is to see if police departments truly do reform themselves over the next year to the point where they can be trusted to use facial recognition again. I hope a reevaluation takes place then.


Presumably they expect the election to change things.


I wonder if this means that existing customers/products had to stop using the service? If so this might be the first time I recall seeing a cloud vendor flex like this.

CTOs of city and law enforcement orgs are probably seriously questioning the vulnerability of relying on cloud SaaS.


Super funny thought:

The reason why super heroes wear masks and capes is to avoid facial and gait recognition cameras!


Why do they say Congress is ready to take on this challenge? Congress hasn't passed a thing yet.


"ready to take on" sounds more like the beginning of the funnel.

"hasn't passed a thing" sounds like the end of a Congressional funnel.

edit: this is purely a retort to the specific complaint of the parent. I don't deny that Congress hasn't actually done much useful to forward the policy changes I would deem desirable here.


Does the same apply to users from mainland China or companies like Zoom? If not, why not?


China has its fair share of computer vision companies anyway: SenseTime, Yitu, Megvii, CloudWalk, and so on. I've even heard of one using gait analysis, but I forgot the name.


Can someone explain what kicked off this retreat from face recognition by IBM and now Amazon? I mean, it's always had dubious uses. What made this happen right now?


Probably that everybody is wearing masks, so the system is missing essential features and won't work until it's rebuilt and fixed. There could also be a risk of the database being tainted by masks with something printed on them, like lips, wrinkles in the mask, or even somebody else's face.

And of course you can't require citizens to show their faces and to cover their faces at the same time, so trying to publicly denounce people with covered faces as delinquent wannabes is not possible at the moment.

But I'm just speculating.


You have a point there. I can see people continuing to wear masks even after the order is lifted across cities.


Bearing in mind the US healthcare system, Americans should have a bigger incentive to wear masks than South Koreans, if only to protect themselves and their families from hospital bills.


Opinion of police departments is falling so this could quite plausibly be the next big battleground.


It's perfectly acceptable to cash in on face recognition for profit, but dubious when it's used to lock people behind bars. /s


I can see how this decision would stop immediate problems following recent events. However, in the long-term, wouldn't machine learning help regulate and fix human bias?


Better than nothing I guess but still not a perfect solution.



Are they still sharing Ring doorbell videos with police?


Is this a moratorium on sales, or are they shutting off the service of cops using this? Will this apply to feds and US military, as well?


As we all know, every moral issue has a 1-year expiration date.


Every moral issue is also black-and-white and requires no practical (or ethical) implementation plan.


The cynic might say “because in a year this will all have blown over and we can get back to selling”.


You don't really need facial recognition once you have enough data from contact tracing.


Ahh, one more year until I finish creating my fake digital self and delete fingerprints.


I call this profiteering on a dilemma. Disgusting in my opinion. Just my opinion.


"We are waiting until the current news cycle has blown over"


Not Enough. Period.

Yet another PR move to placate rather than address the problem.


Translation: We already made a bundle on this, but we're seeing too much pushback, so we're getting out before the downside eats into our profits.


It's reactions like this that make companies not want to even try. "People are going to bitch whether we sell to the police or not so there is no upside to stopping". Why not have a little positivity that AWS is finally restricting access to their technology beyond what is legally required?


We should be positive about public pressure having an effect on large corporations. We should not be positive about a large corporation making the calculated decision that a tiny drop in revenue is worth the advertisement and good press that comes with it.

What percent of AWS revenue is from Rekognition? Probably a rounding error.


because there's generally much cynicism about piecemeal measures in regard to surveillance. In the past, it's been exceedingly common for both private firms and legislators to halt some program or legislation only to bring it back a year or two later.

Arguably we need a much more principled, stronger stance on opposition to mass surveillance period. Companies that understand the ethical obligations should get out of it completely.


Go all in then. Disallow the use of the technology without independent oversight. But don't do a half hearted measure because you're unwilling to commit, or you're going to grasp for some positive PR.

> Why not have a little positivity that AWS is finally restricting access to their technology beyond what is legally required?

Because it's not enough. "Please sir, may I have some more" is not how you address the weaponization of technology by a trillion dollar org.


Why do we need to demand all or nothing? AWS is never going to allow for independent oversight - not even Google does that. Demanding this just means that AWS would never make any improvements.


Independent oversight? So... like from a government? Because that's the definition of independent: from the people, for the people. And if the next answer is "that's not how government works in reality", well, neither does any other 'independent' oversight. Humans are easily corruptible. We'd better find a solution to this problem, or all that will happen is an endless line of watchers watching other watchers.


Humans are the only solution. Rational humans are never going to rely on technology alone for enforcement, governance, and/or oversight. If you have a problem with the humans currently making decisions, find better humans. Checks and balances.

Don't like the Big Tech corporate surveillance state? Write better laws regulating them. Don't like the people writing laws currently? Vote and run against them. Still not heard? There are yet more avenues for recourse.

The idea that technology is going to fix these problems holds no basis in reality.


Seems I misunderstood your previous post, cause everything you wrote I agree with completely.

My only point here is: Currently, people do not trust independent oversight (read: government). And there are probably a few good reasons. So, I don't see how saying "Amazon should only sell this technology with independent oversight" fixes anything as long as the trust problem isn't solved.


Definitely talking past each other. You must solve for X, where X=trust.


> Independent oversight? So... like from a government? Because that's the definition of independent: from the people, for the people.

If you do truly hold the belief that governments are "the definition of independent", and that all governments act solely in the best interests of their own citizens...well, to borrow some words from Public Enemy: Can't do nuttin' for ya man.

We're literally talking about the government using facial recognition with no oversight and little public debate and no consent from the public, so you wouldn't have an excuse for coming out with this "government is all of us working together" lorem ipsum even if you were born 30 seconds before the article was posted and didn't know anything about the way that governments actually behave in practice.

You come so very close to getting it, in that you acknowledge that this isn't how governments work in reality, and then you're like "well neither is anything else though" as if that is somehow an argument for a strategy that you kinda-acknowledge cannot work.

And FWIW: I agree, it's fucked, it needs to stop, both in the public and private sector, and anyone that works on such tech should be shunned. But your assumptions are false to fact.


A couple months ago, Amazon execs were reported to be conspiring to smear a black labor organizer with racist dog whistles.

https://www.vice.com/en_us/article/5dm8bx/leaked-amazon-memo...

Treating their motives with anything but the utmost cynicism seems the rational move here.


Bottomless cynicism gets us nowhere. It's far from done, but unless there's some evidence that they are still selling Rekognition to the police, this is a positive development in the field of facial recognition.


Or getting out while black lives matter and will get back in when everyone has forgotten again.


It is a one-year moratorium. They aren't even getting out for good. They are pausing it, hoping the issue will be forgotten in a year, and then they will resume making money off it.


> We already made a bundle on this

Closing a profit-generating business line is much more difficult than closing an unproven one. Props to the folks at Amazon; this is a change for the better.

I'm wary about their competitors though. Looks like an opportunity for Microsoft to monopolise the face recognition software market. Can't think of any market force to solve this, and regulation would probably leave a lot of room to game the system.


This is an important step


No, this is more likely simple opportunism. They want to supply the police of course, just not yet when public opinion is against police.


Keep it up


Amazing news!


So... cloud service providers now have the right to determine what services they want to allow and what they want to shut off? HAL 9000: "I'm sorry Dave, I'm afraid I can't do that"


No, they have always had that right, as does any other private business. You can always refuse service if it is not on discriminatory grounds (gender, sexuality, ethnicity, religion or disability).


Sexuality is only a protected class in some states; it isn't a protected class at the federal level.

Unfortunately, there are plenty of places in the US where one can experience legal discrimination based on their sexuality in housing, employment, education, health insurance and healthcare. There are several maps here[1] that show where those places are. Many of them don't classify violence motivated by the victims' sexuality as hate crimes, either. There's also a table here[2] that summarizes the discrepancies in the US.

[1] https://en.wikipedia.org/wiki/LGBT_rights_in_the_United_Stat...

[2] https://en.wikipedia.org/wiki/LGBT_rights_in_the_United_Stat...


What do you mean "now?" They've always had the power to do that.


Time for some cloud neutrality.


Pretty soon you will see telephone companies refusing service, car companies refusing to service cars, etc., based on a moral judgment of the customer. Then they will finish the process of getting anybody accused of wrong-think purged from employment, etc. Hmmm, where in history was that tried before? What could possibly go wrong?


This is already the case today under capitalism. A business may refuse service to anyone provided they are not a member of a protected class (the Civil Rights act and the ADA define a few).


It’s more disturbingly the case under whatever economy China runs. https://foreignpolicy.com/2018/04/03/life-inside-chinas-soci...


China's economy is a form of capitalism called state capitalism.


Gofundme.com


You mean like baking cakes? Conservatives can't have it both ways.


Yeah, they wanna have their cake and eat it too.


Personally I am pro facial recognition being used, and believe it is very necessary if police budgets are cut and patrols are reduced. We need a way to hold criminals accountable and bring them to justice. Cameras with facial recognition let us identify their location and send police officers to apprehend criminals, instead of relying on the random chance that an officer spots someone while driving around and matches their face with a list of perps they've seen before.

I haven't heard of any police departments using facial recognition as a definitive match; all of them use human confirmation. So basically, the number of false positives does not matter; it's more that facial recognition reduces the total amount of data to a more manageable number of candidates that are then scrutinized by human eyes.
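As a sketch of that pipeline (not any department's actual integration): Rekognition's search API takes a similarity threshold and a cap on results, so what comes back is a short ranked candidate list for a human reviewer, not an automatic identification. The collection and file names here are hypothetical:

    import boto3

    client = boto3.client("rekognition")
    with open("probe.jpg", "rb") as f:
        response = client.search_faces_by_image(
            CollectionId="hypothetical-collection",
            Image={"Bytes": f.read()},
            FaceMatchThreshold=95,  # high bar to suppress false positives
            MaxFaces=5,             # a short list for human confirmation
        )

    for match in response["FaceMatches"]:
        face = match["Face"]
        print(face.get("ExternalImageId", face["FaceId"]), f"{match['Similarity']:.1f}%")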

I don't know why people would be against this. Steps like this moratorium just seem like posturing or an overreaction. The recent policing incidents that have been in the news do not involve facial recognition and there is no reason to tie them in.



