No, that's not really what happens. If you're asking a question, it usually means you lack knowledge about something, which also means you lack the context to craft a question that perfectly teases the correct answer out of GPT. What actually happens is that you have to engage carefully in conversation, iterating to improve both GPT's context and your own understanding of what it's saying, until you can ask more incisive questions or correct obvious inconsistencies in what it has told you.