Hi, I'm Jan, CTO @ Pathway.
A use case we have been working on with LLMs is alerting people when the answer to their query changes because a source document was revised. Obviously, we want to avoid periodically re-running every query through the LLM.
Why I think it's cool:
- We don't sit in a loop re-running queries through the LLM.
- Alerts are deduplicated by the LLM, so users don't get spammed over typo fixes.
- And the best part: our framework, Pathway, takes care of handling the updates, so the code reads nearly like a regular, static RAG chatbot (rough sketch below).
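To give a rough idea of the pattern, here is a minimal sketch, not the showcase code itself: it watches a local folder with Pathway's filesystem connector and uses a stand-in answer_query() helper where the real pipeline does retrieval plus the LLM call; the linked showcase uses the Google Drive connector and Pathway's LLM tooling instead, so names and details differ.

```python
import pathway as pw


@pw.udf
def answer_query(doc: str) -> str:
    # Hypothetical helper: prompt your LLM of choice with the query + document text.
    # In the real pipeline this is where retrieval and the chat completion call live.
    return f"answer derived from {len(doc)} characters of context"


# Watch the folder in streaming mode: edits to any file re-enter the pipeline
# automatically, so the dependent rows below are recomputed incrementally.
docs = pw.io.fs.read("./documents/", format="plaintext", mode="streaming")

# Looks like a plain, static RAG step, but it is re-evaluated only for changed docs.
answers = docs.select(answer=answer_query(pw.this.data))

# Every change to an answer shows up as an update on this output stream;
# that update is the hook where the alert (e.g. a Slack message) gets sent.
pw.io.jsonlines.write(answers, "./answer_updates.jsonl")

pw.run()
```

The point is the last two steps: because the table is maintained incrementally, an edited document only re-triggers the rows that depend on it, and the output stream of answer updates is what gets turned into alerts.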
More context + GIF of how it works for Google Drive document alerts: https://pathway.com/developers/showcases/llm-alert-pathway
Happy to hear your thoughts!
Real-time alerting like this can also be useful in many other areas: fraud detection, customer support, medical diagnosis and treatment, or manufacturing, where you'd predict equipment failure and alert when maintenance is needed. It could even monitor model performance, since LLMs can occasionally produce unexpected or undesirable outputs.