
I would expect just as many hallucinations as "normal" from an OpenAI API endpoint. I agree that knowing the difference between facts and formatted content is good media literacy at any age.

ChatGPT/OpenAI's API is such a good liar that I have to fact-check it externally on matters specific to my field of work for the last decade. I can't imagine being a kid expected to do the same with the general knowledge it provides, with no other sources made available to me while I use it.

That's not to say AI is wholly bad because of hallucinations, but "just take a wild guess based on your personal BS detector" is an unrealistic expectation for its use.

Do you really expect a 10-year-old to be able to consistently tell when an LLM is feeding them convincing-sounding but ultimately false information?

Ultimately it doesn't matter; that battle is already lost. Even adults over-trust the output and never question it. I've had colleagues insist to my face that something wasn't possible in a piece of software I'm an expert in, because ChatGPT told them it wasn't possible.