Actually, this is a really good point. Asking a question means that you have an internal model of the world and you realize that some data is way off from it. Is it even possible to train an LLM to ask a question? This also gives me an idea for a future where most online text is generated by AI: we will play the game of answering questions with questions, and then we know we are human!