Is that an instance of a "human in the loop"? Suppose a customer calls customer service, and the service agent uses an LLM iteratively to arrive at a good answer, rather than the customer frustrating or even misinforming themselves. That seems like what you are describing.