
You can never trust the outputs of humans to be correct, but we have found ways of verifying and correcting their mistakes. LLMs need the same extra verification layer.
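A minimal sketch of what such a layer could look like: wrap the untrusted generator in a loop that only accepts outputs passing explicit checks, retrying otherwise. The `untrusted_generate` stub and the JSON-answer format here are hypothetical, standing in for a real LLM call.

```python
import json

def untrusted_generate(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; may return malformed or wrong output.
    return '{"answer": 42}'

def verified_generate(prompt: str, check, retries: int = 3):
    """Call the untrusted generator, accept only outputs that pass verification."""
    for _ in range(retries):
        raw = untrusted_generate(prompt)
        try:
            parsed = json.loads(raw)   # structural check: output must be valid JSON
        except json.JSONDecodeError:
            continue                   # malformed output: discard and retry
        if check(parsed):              # semantic check supplied by the caller
            return parsed
    raise ValueError("no output passed verification")

result = verified_generate("What is 6*7?", check=lambda d: d.get("answer") == 42)
```

The same shape works whether the check is a JSON schema, a unit test suite for generated code, or a human reviewer; the point is that acceptance is decided outside the model.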


