> The goal isn't so much to have a model that can't automate harm as it is to have one that won't provide authoritative-sounding but "bad" answers to people who might believe them.

We already know it will do this; it's part of why LLM-generated answers are banned on Stack Overflow.

None of the properties being argued about - intelligence, consciousness, volition, etc. - is required for that outcome.



