I don't think that an AI would be interrogated in court.

Rather, I think it would be hard to hide all of the AI's inputs and outputs from the court's scrutiny, and the company or its senior leadership would then be held accountable for them.

Even if you had a retention policy that disposed of the inputs and outputs, the AI would be made available to the plaintiff, and the company would be asked to provide inputs that reproduce the AI's observed actions. If it can't do that without telling the AI to do illegal things, that would probably result in an adverse finding.

----

Having thought about it a bit more, I think the model we'd actually see in practice, at least at first, is the AI assisting management with certain tasks, where the tasks themselves are not morally charged.

So a manager might ask the AI to produce performance reviews for all employees based on observable performance metrics, and additionally, for each employee, to come up with a rationale both for promoting them and for dismissing them.

The morally dubious choices are then made by a human, who reviews the AI's output and keeps or discards it as the situation requires.
