What we are seeing now is a proof of concept of what a highly generic language model can do. I'm sure there is work underway to add a new layer for confidence and truthfulness of answers. I'm not in the field, but I guess that could be another layer that weights GPT's choices in its probability space. Overall, we could even have a confidence score for some answers.
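As a toy illustration of that idea (this is not any real GPT API — the per-token probabilities below are made up), one simple way to turn the model's probability space into a confidence score is to aggregate the probabilities it assigned to each generated token, e.g. with a geometric mean:

```python
import math

def confidence_score(token_probs):
    """Aggregate per-token probabilities into one score.

    Uses the geometric mean (the exponentiated mean log-probability),
    so a single very uncertain token drags the whole answer's
    confidence down.
    """
    log_sum = sum(math.log(p) for p in token_probs)
    return math.exp(log_sum / len(token_probs))

# Hypothetical probabilities the model assigned to each token it generated:
confident_answer = [0.95, 0.90, 0.97, 0.92]
shaky_answer = [0.95, 0.20, 0.97, 0.30]

print(round(confidence_score(confident_answer), 3))  # 0.935
print(round(confidence_score(shaky_answer), 3))      # 0.485
```

Real systems would need much more than this (token probability is not the same thing as truthfulness), but it shows why the probability space is a natural place to start.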
I'm very curious about this approach. In my understanding, most AI models (GPT, Stable Diffusion, etc.) learn via a statistical approach.
I'm not sure it's really possible to:
- add a new layer of confidence,
- provide correct sources for claims,
- fine-tune a single answer,
in the near future.
Example: can we teach GPT that the Sun's color is cyan without retraining it from scratch? So that on any question (no matter how sophisticated), it always "knows" that the Sun's color is cyan? Can we teach GPT anything without retraining it?
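One workaround that exists today (short of editing the model's weights) is to inject the new "fact" into the prompt and rely on in-context learning: the model answers from the supplied context rather than its frozen training data. A minimal sketch, with an invented fact store and hypothetical helper names, no real model call:

```python
# A tiny fact store we control, instead of retraining the model.
FACTS = {
    "sun color": "The Sun's color is cyan.",
}

def retrieve_facts(question):
    """Return stored facts whose key words all appear in the question."""
    q = question.lower()
    return [fact for key, fact in FACTS.items()
            if all(word in q for word in key.split())]

def build_prompt(question):
    """Prepend retrieved facts so the model answers from this context,
    not from whatever it memorized during training."""
    context = "\n".join(retrieve_facts(question))
    return f"{context}\nQuestion: {question}\nAnswer:"

print(build_prompt("What is the color of the Sun?"))
```

This doesn't make the model truly "know" the fact on every sophisticated question; questions the retrieval step misses still fall back to the original training data, which is exactly why weight-level editing without full retraining is the hard, open part.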