I'd actually go further and say it's better to assume I can be deceived by confident language than to assume that this is a problem with "dumb people".
If I see other people making a mistake, I want my first question to be "am I making the same mistake?". I don't live up to that aspiration, certainly.
Part of what's extremely frustrating about talking to people about AI is that in one breath people say things which seem like they understand what's going on, and then in the next breath say something that makes absolutely no sense if they actually understood what they just said. This post is a great example of that.
Okay, so you apply epistemology to ChatGPT: when you ask ChatGPT a question, how does it know the answer? The answer is: it doesn't know the answer. All it knows is how people string words together: it doesn't have any understanding of what the words mean.
So no, it can't use the Socratic method on itself or anyone. It can't ask questions to stimulate critical thinking, because it's incapable of critical thinking. It can't draw out ideas and underlying presuppositions, because it doesn't have ideas or underlying presuppositions (or suppositions). It's not even capable of asking questions: it's just stringing together text that matches the pattern of what a question is, without even the understanding that the text is a question, or that the following text in the pattern is an answer. "Question" and "answer" are not concepts that ChatGPT understands, because ChatGPT doesn't understand concepts.
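For concreteness, here's a minimal sketch of what "stringing together text" means mechanically. It uses the Hugging Face transformers library with GPT-2 as a stand-in model (this is an assumption for illustration; it's not how ChatGPT is actually served): the model just maps a token sequence to a score for every possible next token, and generation is a loop that keeps appending the highest-scoring one. Nothing in the loop represents "question", "answer", or any other concept.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# GPT-2 as an illustrative stand-in for a large language model
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Q: What is knowledge? A:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits        # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()        # pick the most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

The output often looks like an answer, because answers are what usually follow questions in the training text; but the only thing the model ever computes is "which token tends to come next".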
The Socratic method requires self-awareness; otherwise the questions fall flat. Unfortunately, I think there's a greater chance that an LLM will become capable of this before a majority of humans do.