
It kind of does though, because it means you can never trust the output to be correct. The possibility of error is a much bigger deal than the output happening to be correct in a specific case.



You can never trust the outputs of humans to be correct either, but we find ways of verifying and correcting mistakes. The same extra layer is needed for LLMs.
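For concreteness, a minimal sketch of what such an "extra layer" could look like: treat the model (or person) as an untrusted producer and accept its output only after an independent check, retrying otherwise. The `generate` and `is_valid` callables here are hypothetical placeholders, not any particular library's API.

    from typing import Callable, Optional

    def verified_output(
        generate: Callable[[str], str],   # untrusted producer (e.g. an LLM call)
        is_valid: Callable[[str], bool],  # independent verifier (parser, schema, tests...)
        prompt: str,
        max_attempts: int = 3,
    ) -> Optional[str]:
        """Return the first output that passes verification, or None."""
        for _ in range(max_attempts):
            candidate = generate(prompt)
            if is_valid(candidate):
                return candidate
        return None

    # Example: only accept output that parses as an integer.
    if __name__ == "__main__":
        import random
        result = verified_output(
            generate=lambda p: random.choice(["42", "forty-two"]),
            is_valid=lambda s: s.strip().isdigit(),
            prompt="How many?",
        )
        print(result)

The point isn't the specific check; it's that correctness comes from verification outside the generator, not from trusting the generator itself.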


> It kind of does though, because it means you can never trust the output to be correct.

Maybe some HN commenters will finally learn the value of uncertainty then.



