But it doesn't learn from its errors, and that's the whole problem.
It only responds to 'accusations' from the user in the most common way, which is to apologize.
The weight of phrases like "you are wrong" is in fact so strong that it fools ChatGPT into apologizing for its 'mistakes' even in scenarios where its output was obviously correct - like when you tell it that 2+2 doesn't equal 4.
Well yeah, it's an imperfect tool, and you have to treat it as such. Probably there's a lot to be discovered about how to use it most effectively. I just don't find that it's more problematic than the other tools in my box.
Sure, grep has never flat-out lied to me the way ChatGPT does, but it's a statistical model, not a co-worker, so I don't feel betrayed, I just feel... cautioned. It keeps you on your toes, which isn't such a bad state to be in.