
This isn't about being fair to the algorithm; it's about being fair to the people subject to the algorithm's judgement.



Which is a distorted discussion already. Instead of questioning the benefit of the technology for policing as a whole, the debate has already shifted to issues of bias. In the end the algorithm and the data will of course carry bias in some form, but that isn't even the important question at that point.

Yay, we have an algorithm judging people, but is it fair to Canadians? Completely beside the point...


This isn't distorting the discussion at all. From the point of view of the ACLU, new technology should only be adopted by policing if it can be proven to not perpetuate or exacerbate existing problems of bias, which IMO is a totally reasonable position. We don't accept the "move fast, break things" ethos in fields like aviation, for example, and if we only listened to technologists, society would be too cavalier about collateral damage affecting innocent people. One could argue that society is already too cavalier about this.


I disagree, because that argument already presupposes large-scale deployment.


Facial recognition AI products were already being sold to police departments and customs agencies; large-scale deployment is underway. Maybe not at your specific agency, but the ACLU is a national organization, and in its eyes one additional false positive caused by increased efficiency is one too many.



