Are there plans to back up the "suggested answer", which I presume is LLM-generated, with a definitive source? The first question in the demo returned the relevant document you were looking for, but I didn't see it in the search results for the second question.
I'm not sure I would trust a system like this unless I could click through and see the source of the answer I'm reading, and make sure that the LLM is referencing the correct email/document.
This seems to be a common growing pain in places where an AI model is expected to provide authoritative answers - I wonder if (at least in your case) it's possible to use a more traditional fuzzy search algorithm to attempt to find the source, based on the LLM's answer string.
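To make that concrete, here's a minimal sketch of the fuzzy-matching idea using only Python's standard library. The document IDs and sample text are made up for illustration, and a real system would want smarter chunking than splitting on periods:

```python
from difflib import SequenceMatcher

def best_matching_source(answer, documents):
    """Find the document whose best-matching sentence is closest to the answer.

    `documents` maps a document ID to its full text. Returns (doc_id, score),
    where score is difflib's similarity ratio in [0, 1].
    """
    best_doc, best_score = None, 0.0
    for doc_id, text in documents.items():
        # Compare the answer against each sentence rather than the whole
        # document, so long documents don't drown out a strong local match.
        for sentence in text.split("."):
            score = SequenceMatcher(None, answer.lower(), sentence.strip().lower()).ratio()
            if score > best_score:
                best_doc, best_score = doc_id, score
    return best_doc, best_score

# Illustrative only: two candidate emails and an LLM-style answer string.
docs = {
    "email-1042": "The Q3 budget review is scheduled for October 12th at 2pm.",
    "email-1187": "Please submit expense reports by the end of the month.",
}
print(best_matching_source("The budget review is on October 12th.", docs))
```

Even a rough score like this would let you surface a "probable source" link next to the answer, and decline to show one when the match is weak.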
This is currently something we're working on through prompt engineering, but we love your suggested approach. We'll definitely look into it more -- thanks for sharing.
For now, the suggested answer is always generated from the top 5 or so search results, so you always have an idea of where it's coming from.
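Roughly, that flow looks like the sketch below (the function name and result shape are illustrative, not our actual internals): we pass the top results into the prompt so the answer stays grounded in them.

```python
def build_answer_prompt(query, results, k=5):
    """Assemble a prompt grounding the answer in the top-k search results.

    `results` is a list of dicts with 'title' and 'snippet' keys; the exact
    shape here is an assumption for the sake of the example.
    """
    context = "\n\n".join(
        f"[{i + 1}] {r['title']}: {r['snippet']}"
        for i, r in enumerate(results[:k])
    )
    return (
        "Answer the question using only the numbered sources below, "
        "and cite the source number you relied on.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
```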