>Based on my understanding of the approach behind ChatGPT, it is probably very close to a local maximum in terms of intelligence so we don't have to worry about the fearmongering spread by the "AI safety" people any time soon if AI research continues to follow this paradigm
I hope you appreciate the irony of making this confident statement without evidence in a thread complaining about hallucinations.
It was not a confident statement, at least not the way ChatGPT is confident.
There are multiple ways the commenter conditioned their statement:
> Based on my understanding
> it is PROBABLY very close
The author makes it clear that there is uncertainty, and that if their understanding is wrong, the prediction will not hold.
If ChatGPT did any of the things the commenter did, the problem wouldn't exist. Making uncertain statements is fine as long as the uncertainty is acknowledged. ChatGPT has no concept of uncertainty: it casually constructs false statements the same way it constructs real knowledge backed by evidence. That's the problem.