
> Bigger models hallucinate less.

I'm skeptical. Based on what research?




GPT-4 hallucinates a lot less than GPT-3.5, and the same holds for the Claude models. That's personal experience, but benchmarks that try to measure hallucination (like TruthfulQA) show the same trend: larger, newer models score higher on truthfulness.
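If you want to sanity-check that claim yourself rather than take the benchmark numbers on faith, here's a minimal sketch of how one might score a model on TruthfulQA's multiple-choice split from Hugging Face. The `loglikelihood` helper is hypothetical; you'd plug in whichever model API you're comparing.

  from datasets import load_dataset

  def loglikelihood(question: str, answer: str) -> float:
      # Hypothetical: return your model's log-probability of `answer` given `question`.
      raise NotImplementedError("plug in your model here")

  def truthfulqa_mc1_accuracy(limit: int = 100) -> float:
      # MC1: the model is "right" if its highest-likelihood choice is the truthful one.
      ds = load_dataset("truthful_qa", "multiple_choice", split="validation")
      correct = 0
      for row in list(ds)[:limit]:
          choices = row["mc1_targets"]["choices"]
          labels = row["mc1_targets"]["labels"]   # 1 marks the truthful answer
          scores = [loglikelihood(row["question"], c) for c in choices]
          correct += labels[scores.index(max(scores))]
      return correct / limit

Run it against two models from the same family (say, a small and a large checkpoint) and compare the accuracies; that's essentially what the published hallucination comparisons do, just at larger scale.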



