
To keep control in the hands of the analyst, we've been working on UXs over agentic neurosymbolic RAG in louie.ai --

Ex: "search for login alerts from the morning, and if none, expand to the full day"

That requires generating a one-shot query combining semantic search + symbolic filters, and an LLM-reasoned agentic loop that recovers when results come up short -- whether because the query around 'login alerts' was poorly formed or because the user's 'if none' trigger fired.
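A minimal sketch of that loop in Python -- every name here, including the retrieval stub, is illustrative rather than louie.ai's actual API:

  from datetime import datetime

  # Toy in-memory corpus; a real system would hit a vector index plus
  # a symbolic time filter over the event store.
  ALERTS = [
      {"text": "failed login from new device", "ts": datetime(2024, 5, 1, 14, 3)},
      {"text": "suspicious login alert", "ts": datetime(2024, 5, 1, 15, 40)},
  ]

  def semantic_search(query, start, end):
      # Stand-in for the one-shot hybrid query: semantic match on `query`
      # (crudely proxied by substring here) + symbolic timestamp filter.
      return [a for a in ALERTS
              if "login" in a["text"] and start <= a["ts"] <= end]

  def search_with_recovery(query, day):
      # Encodes: "search for login alerts from the morning,
      # and if none, expand to the full day"
      morning = semantic_search(query, day.replace(hour=0), day.replace(hour=12))
      if morning:
          return morning, "morning"
      # The user's 'if none' trigger: an agentic recovery step that relaxes
      # the symbolic filter and retries. A fuller loop would also let the
      # LLM rephrase a poorly formed 'login alerts' query before widening.
      full_day = semantic_search(query, day.replace(hour=0),
                                 day.replace(hour=23, minute=59))
      return full_day, "full day (morning was empty)"

  hits, scope = search_with_recovery("login alerts", datetime(2024, 5, 1))
  print(scope, len(hits))  # -> full day (morning was empty) 2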

Likewise, unlike Disneyified consumer tools like ChatGPT and Perplexity that are designed to hide what is happening, we work with analysts who need visibility and control. That means designing search so subqueries and decisions flow back to the user in an understandable way: they need to inspect what is happening, be confident they missed nothing, and edit via natural language or their own queries when they want to proceed.
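Concretely -- a sketch under assumed names, not our actual wire format -- each subquery and decision can be surfaced as a structured, editable trace event:

  from dataclasses import dataclass, field

  @dataclass
  class TraceEvent:
      # One inspectable step: the query the agent actually ran, why it
      # ran it, and what came back. The analyst can rerun an edited copy.
      step: int
      query: str
      rationale: str
      hit_count: int

  @dataclass
  class SearchTrace:
      events: list = field(default_factory=list)

      def record(self, query, rationale, hits):
          self.events.append(
              TraceEvent(len(self.events) + 1, query, rationale, len(hits)))
          return hits  # pass results through so recording is transparent

  # Usage: wrap each retrieval call so the full decision path is visible.
  trace = SearchTrace()
  trace.record("login alerts [00:00-12:00]", "user asked for the morning", [])
  trace.record("login alerts [00:00-23:59]",
               "morning empty; expanding per the user's 'if none' clause", [1, 2])
  for e in trace.events:
      print(e.step, e.query, "|", e.rationale, "|", e.hit_count, "hits")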

Crazy days!




Very good to hear this. As a data analyst, I have tended to dismiss LLMs as irrelevant because of the black-box mentality. In some industries, this even holds back adoption of technology that is now considered mature and boring, like tree ensembles in machine learning.


Once tree ensembles get big enough to handle the kinds of problems that LLMs can address, are they really more interpretable?


Yup, this is the way. Natural language is an excellent search/specification front end. But without disambiguation and perfect clarity on how a query was interpreted, you cannot trust a black box for real work.



