Bluffing is the wrong word. It assumes that the LLM is capable of knowingly telling falsehoods.
Instead, it always has full confidence that whatever its last "thought" was must be correct, even when asked to double-check its own output, even in a loop. It'll either double down on a falsehood or generate a new result and be completely confident that its last result was wrong and that its new result must be correct, even though the initial confidence was just as high.
"Confidence" is the wrong word. Confidence is a capability of thinking, sentient beings, and LLMs are not that.
I say this to point out the futility of your post; "bluffing" is a perfectly fine thing for the GP to say, because it gets the point across. All of us here should know that an LLM can't actually bluff. (And if you'd prefer a different term, remember that not everyone on HN speaks English as their first language, and may not immediately come up with a more appropriate way of phrasing it.)
"Bluffing" certainly isn't the right word.
"Hallucinating" fits much better.