I’ve seen gpt-3 do this in general, and it’s quite interesting. It’ll quote things that sound right for the prompt and respond with realistic-sounding references and names, even when no actual results exist. These kinds of AI seem averse to admitting they don’t know.



Why would anyone expect a language model to admit that it "doesn't know" (unless explicitly asked to)? That's not what it's for. It's there to put together a string of words that plausibly looks like an answer to a given prompt - that it can sometimes successfully reach for facts it was trained on while making up the answer is a bonus, a side effect of how it works.
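To make that concrete, here's a minimal sketch (my own illustration, using Hugging Face's transformers library with the small GPT-2 model rather than GPT-3) of how a language model just samples a plausible continuation, whether or not any matching fact exists:

    from transformers import pipeline

    # Load a small causal language model for text generation.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "The definitive study on this topic is"
    # The model happily continues with plausible-sounding text,
    # regardless of whether any such study actually exists.
    result = generator(prompt, max_new_tokens=30, do_sample=True)
    print(result[0]["generated_text"])

Nothing in the sampling loop checks the output against reality; "I don't know" is just another string, and usually not the most probable one.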


Because your understanding of the capabilities of a large language model, on the one hand, and the general understanding, popular reporting, and (to a certain extent) even OpenAI’s claims, on the other, are going in two different directions.



