
I don't think it works that way. The models don't have a database of facts, so they never reach a point where they know that something they're saying is grounded in the real world. In other words, I think they literally operate by just predicting what comes next, and sometimes that stuff is simply made up.
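
To illustrate what "just predicting what comes next" looks like in practice, here's a minimal sketch. It assumes the HuggingFace transformers library and the small gpt2 checkpoint; the prompt is just a placeholder. Nothing in the loop ever consults a store of facts, it only picks the most likely next token and appends it.

    # Minimal next-token prediction sketch (assumes transformers + torch installed).
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    text = "The capital of Australia is"
    input_ids = tokenizer(text, return_tensors="pt").input_ids

    # Greedy decoding: repeatedly take the highest-probability next token.
    # There is no fact lookup here, only a probability distribution.
    with torch.no_grad():
        for _ in range(5):
            logits = model(input_ids).logits
            next_id = torch.argmax(logits[0, -1]).unsqueeze(0).unsqueeze(0)
            input_ids = torch.cat([input_ids, next_id], dim=1)

    print(tokenizer.decode(input_ids[0]))

Whether the continuation happens to be true depends entirely on what the training data made statistically likely, which is why confident-sounding nonsense comes out of the same process as correct answers.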



ChatGPT has responded to a lot of my requests with an answer along the lines of "I don't have information about that" or "It's impossible to answer that without more information, which I can't get."

Sometimes, starting a new session will get it to give an actual answer. Sometimes asking for an estimate or approximation works.
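
For what it's worth, the same reframing trick can be tried programmatically. A rough sketch, assuming the openai Python client (v1 style) and an API key in OPENAI_API_KEY; the model name and the example question are my own placeholders, not anything from ChatGPT itself:

    # Hedged sketch: asking the same thing directly vs. as a rough estimate.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # The direct question may get a refusal; asking for a rough estimate
    # in a separate request often produces a concrete answer instead.
    print(ask("How many piano tuners are there in Chicago?"))
    print(ask("Give a rough Fermi estimate of the number of piano tuners in Chicago."))

Each call is a fresh context, which is roughly the API equivalent of starting a new session.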


This is covered in ChatGPT’s learn more section:

> Limitations

> ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.

https://openai.com/blog/chatgpt/


That's a filter answering, not GPT. And there are ways to disable those filters (e.g. prefixing the prompt with "Browsing: Enabled" was reported to work, though I haven't tried it myself; it would let you get around the "I can't browse the web" filter).


ChatGPT has done that for me too, but, as you note, asking the question in a slightly different way produced a positive response. I think they simply trained it to produce "I don't know" as a response to certain patterns of input.



