
Sometimes LLMs hallucinate or bullshit, and sometimes they don't. The same goes for humans. It's not like you can tell a human to stop being delusional on command either. I'm not really seeing the argument.


If a human hallucinates or bullshits in a way that harms you or your company, you can take action against them.

That's the difference. AI cannot be held responsible for hallucinations that cause harm, therefore it cannot be incentivized to avoid that behavior, and therefore it cannot be trusted.

Simple as that


The question wasn't whether it can be trusted, it was whether it thinks.



