This is actually a great idea for improving the reliability of answers from a language model. I have also had trouble with GPT-3 hallucinating knowledge. One could automate the whole loop: ask the model to cite a reference for its answer, check whether that reference actually exists, and if it doesn't, tighten the prompt and demand another answer, repeating until the citation checks out. I will explore this method, roughly along the lines of the sketch below. Thanks again!
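
A minimal sketch of that verification loop, assuming two hypothetical helpers: `query_model`, which would wrap whatever completion API is being used, and `reference_exists`, which would look the citation up against a search engine, DOI resolver, or library catalogue. Neither is implemented here; this is just the retry logic.

```python
def query_model(prompt: str) -> tuple[str, str]:
    """Return (answer, cited_reference) from the language model. Stub."""
    raise NotImplementedError  # hypothetical: wrap your completion API here


def reference_exists(reference: str) -> bool:
    """Return True if the reference can be located externally. Stub."""
    raise NotImplementedError  # hypothetical: check a search engine / DOI resolver


def answer_with_verified_reference(question: str, max_retries: int = 5) -> str | None:
    # Initial prompt asks for an answer plus a checkable citation.
    prompt = f"{question}\nCite one real, checkable reference for your answer."
    for _ in range(max_retries):
        answer, reference = query_model(prompt)
        if reference_exists(reference):
            return answer
        # The citation could not be found: tighten the prompt and ask again.
        prompt = (
            f"{question}\n"
            f"The reference '{reference}' could not be found. "
            "Answer again and cite only a reference that actually exists."
        )
    return None  # give up after max_retries unverified answers
```

Whether this converges obviously depends on how reliable the existence check is and on the model actually responding to the stricter prompt, but it captures the idea of re-prompting until the citation can be verified.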