
The software only flags people for manual review. It’s not an automatic software-only rejection.



If a selection process is screwed at the outset, I don't think one can generally un-screw it with any amount of post-hoc analysis or manual review.


Your comment seems unrelated to the comment you're responding to. That comment is about automatic rejection, not selection.

But speaking of selection, there is no perfect process in any system involving humans. Whether the selection is made by a flawed facial recognition algorithm or by a biased, tired, overworked human, having a manual review step and post-hoc analysis is pretty useful.


My point is that by the time a biased selection happens, much of the damage to the overall process is already irreversible. If the algorithm is biased (and I'm not saying it is, just that it's valid to ask the question), making the final decision manual is more of a fig leaf than an actual fix.



