
So... just like humans?



That's the point, though. Things that are illegal for humans to do during hiring (all sorts of discrimination) are, as it stands, not illegal for an AI to do. We want to at least level the playing field.


There’s no exemption from prosecution for racial bias because it was done by an algorithm. If a company uses such an algorithm, AI or not, they will be open to prosecution for it.


The key difference is that LLMs and similar tech are auditable: you can audit the models and test them for bias in a way that scales.
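To make that concrete, here is a minimal sketch of one way such an audit scales: run paired inputs that differ only in a demographic signal (here, a name) through the model and measure the score gap. `score_resume`, the template, and the name pairs are all hypothetical stand-ins, not any real system; the model call is stubbed so the sketch runs as-is.

```python
from statistics import mean

# Hypothetical resume template; each pair of inputs differs only in the name.
RESUME_TEMPLATE = "Name: {name}. Experience: 5 years Python. Education: BSc CS."

# Example name pairs carrying different demographic signals (illustrative only).
NAME_PAIRS = [("Emily", "Lakisha"), ("Greg", "Jamal")]

def score_resume(text: str) -> float:
    # Stub: a real audit would call the model under test here.
    return 0.75

def audit(score_fn, name_pairs):
    """Return the mean score gap across paired resumes.

    Because each pair differs only in the name, any systematic gap
    is attributable to that signal.
    """
    gaps = []
    for a, b in name_pairs:
        gap = (score_fn(RESUME_TEMPLATE.format(name=a))
               - score_fn(RESUME_TEMPLATE.format(name=b)))
        gaps.append(gap)
    return mean(gaps)

print(audit(score_resume, NAME_PAIRS))  # 0.0 for the unbiased stub
```

The same loop can be run over thousands of generated pairs, which is the scaling advantage the comment is pointing at: the model answers every probe, unlike a human interviewer.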


How do you audit the loaded die at the end of every LLM, i.e. the sampling step?


Are there agreed upon neutral auditors or common processes for auditing models? If not, maybe what we need is (gasp!) some governmental oversight.


Humans can be asked to explain their decisions and to personally take accountability for them. Even if AI processes were less biased, using them introduces the opacity of relying on models that simply can't be fully intuited or explained. For life-altering decisions, merely performing better than a human is not the benchmark.



