Hacker News

Racism isn't an objective order existing in the universe separate from us. It's part of human experience and exists where humans experience it.

Given the recent history of equating black people with non-human primates, and using that to deny them rights & full participation in society, making this error is going to be experienced as racist. It's not a matter of individual malice or taxonomic classification, but of history and social relations.




I think we can all agree that the classifier is horribly broken.

But if nobody is working on this, how will we ever fix this gaping hole in image classifiers? And don't we want to fix it? To fix it, researchers will keep getting it wrong before they get it less wrong and more right, but they can only iterate if each misstep doesn't trigger a massive backlash. It seems like being stuck between a rock and a hard place.

I'm asking rhetorically: wouldn't we have to allow researchers to iterate on this problem in order to fix it? That simply won't happen until we give them leeway, understanding that these are incrementally improving models. Otherwise what we're left with is a sledgehammer solution (banning all primate classifications outright) that never addresses the underlying problem: these models do have a race-based bias, probably originating in their training datasets.
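The "sledgehammer" approach mentioned above amounts to suppressing a blocklist of labels in post-processing rather than correcting the model itself. A minimal sketch, assuming a classifier that returns (label, score) pairs; the label set and function names here are hypothetical, not from any real vendor's API:

```python
# Hypothetical post-processing blocklist: the blocked labels are never
# returned, regardless of what the underlying model predicts. The model's
# bias is untouched; its outputs are merely censored.
BLOCKED_LABELS = {"gorilla", "chimpanzee", "monkey"}  # assumed example set

def filter_predictions(predictions):
    """Drop any blocklisted labels from a classifier's output.

    predictions: list of (label, score) pairs, highest score first.
    """
    return [(label, score)
            for label, score in predictions
            if label.lower() not in BLOCKED_LABELS]

# Example: the blocked label is silently dropped, not corrected.
preds = [("gorilla", 0.91), ("person", 0.07)]
print(filter_predictions(preds))  # [('person', 0.07)]
```

This illustrates why the ban never fixes anything: the filter hides the misclassification from users while the model continues to make it.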


I'm simply answering the question of how it is racist, not currently trying to tackle the appropriateness of fixing the racism or the technical hurdles involved in that. It's outside my expertise and not particularly relevant to the comment I was responding to.


I suppose this could be an example of Popper’s third world.





