> Humans are confidently wrong about all kinds of things all the time
There is a qualitative difference: a human may be wrong about a fact because they genuinely believe it to be true, while ChatGPT is wrong because it has no concept of what anything means. You can't fix that; it's simply how LLMs work.
For example, when asked for a URL, a human may misremember it but will generally say "I don't know, let me check," whereas ChatGPT will confidently produce a plausible-looking URL that may not exist at all.