
Seems like the options are to constantly retrain an LLM or to give it enough tools to interact with new data as needed. For the foreseeable future, LLMs are going to need RAG or huge context windows, which are effectively equivalent to RAG: either way you're isolating, from some giant corpus, the useful information you want the machine to meaningfully interact with.
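The "isolate the useful slice of the corpus, then hand it to the model" pattern can be sketched in a few lines. This is a toy illustration with an assumed word-overlap score and prompt format, not any real retrieval library's API:

```python
# Minimal RAG-shaped sketch: retrieve the few relevant documents from a
# corpus, then pack only those into the prompt. The scoring function and
# prompt layout are illustrative assumptions.

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of query words that appear in the doc."""
    return sum(1 for w in query.lower().split() if w in doc.lower())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by the toy score."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Isolate the useful slice of the corpus and prepend it as context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

A real system would swap the word-overlap score for embedding similarity over a vector index, but the shape is the same: retrieval narrows the corpus so the context window only has to hold what matters.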



It still seems like a different use case to me, but maybe I'm wrong there.

I don't mind a search algorithm including a summary as long as it's true to the original, but I really don't want generated content that's trying to predict how someone would answer my question. If LLMs end up replacing most uses of tools like Google Search, we really have moved past the discovery problem and don't necessarily want to find or see the original (human-authored) content at all.



