"We joke about LLMs hallucinating but I'm not convinced we are so superior when we are outside our personal "training data"."

In all seriousness, one of the things about LLMs that most impresses me is how close they get to human-style hallucination of facts. Previous generations of language systems were often egregiously and obviously wrong. Modern LLMs are much more plausible.

It's also why they are correspondingly more dangerous in a lot of ways, but it really is a legitimate advance in the field.

I observe that when humans fix this problem, we do not fix it through massive hypertrophy of our language centers, which is the rough equivalent of "just make the LLM bigger and hope it becomes accurate". We do other things. I await some AI equivalent of those "other things" with interest; I think that generation of AI will actually be capable of most of the tasks we are foolishly trying to press hypertrophied language centers into doing today.



