
Hallucinations!

A generative tool can’t hallucinate! It isn’t misperceiving its base reality or its data.

Humans hallucinate!

ARGH. At least it’s becoming easier to point this out, compared to when ChatGPT came out.




Great example. People will say, “oh that’s just how the word is used now,” but its misuse betrays a real lack of rigorous thought about the subject. And as you point out, it leads one to make false assumptions about the nature of the data’s origin.


If "hallucination" refers to mistaking the product of internal processes for external perception, then generative AI can only hallucinate, as all of its output comes from inference against internal, hard-coded statistical models with zero reconciliation against external reality.

Humans sometimes hallucinate, but we still have direct sensory input against which to evaluate inferential conclusions in real time. So we can refine ideas, whatever their origin, against external criteria of correctness -- something LLMs totally lack.


Calculators compute; they have to compute reliably; humans are limited and can make computing mistakes.

We want reliable tools - they have to give reliable results; humans are limited and can be unreliable.

That is why we need the tools - humans are limited, and we want tools that overcome those limitations.

I really do not see where you intended to go with your post.


Not the poster you’re replying to, but -

I took his point to mean that “hallucinate” is an inaccurate verb for the phenomenon of AI producing fake data, because hallucination implies a perception that is separate from the “real world.”

This term is thus not an accurate label, because that’s not how LLMs work. There is no distinction between “real” and “imagined” data to an LLM - it’s all just data. So the metaphor is misleading and inaccurate.


This. We have trained generations of people to trust computers to give correct answers. LLM peddlers use that trust to sell systems that provide unreliable answers.


I liken current LLMs to that one uncle who has an answer and an opinion on everything even though he has no actual knowledge of the thing in question. He never says "I don't know", and never retains anything for more than a few moments before forgetting it.


Amen. But if you do, you'll still be attacked by apologists. Per the norm for any Internet forum... this one being a prime example.



