Isn't the point of RAG to make (in this example) actual recipe databases accessible to the LLM? Wouldn't that get closer to the article's stated goal of getting the actual recipe?
Yes, but if you don't have the LLM at the end, a good search (against a good corpus with the needed info) would still have given the user what they wanted: a human-vetted piece of relevant information. In this case, the LLM would really only be useful for dressing up the result, and that would actually reduce trust in the result overall. Alternatively, an LLM could play a role as part of the natural-language pipeline that drives the search, hidden from the user, and I feel that's a much more interesting use of them.
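To make the "hidden LLM" idea concrete, here's a minimal sketch: the LLM only rewrites the user's conversational input into search terms, and the search returns human-vetted documents verbatim. `rewrite_query` is a hypothetical stand-in for the LLM call (stubbed with stop-word removal here), and the corpus and document IDs are invented for illustration.

```python
# Sketch: LLM as a hidden query-rewriting step in front of a plain keyword
# search. The user never sees LLM-generated text, only the vetted documents.

CORPUS = {
    "classic-carbonara": "Spaghetti carbonara: eggs, guanciale, pecorino, black pepper.",
    "weeknight-stir-fry": "Quick stir fry: soy sauce, garlic, ginger, mixed vegetables.",
    "sourdough-basics": "Sourdough: flour, water, salt, and a mature starter.",
}

def rewrite_query(user_text: str) -> list[str]:
    """Hypothetical LLM step: turn conversational text into search terms.
    Stubbed here with simple stop-word removal; a real system would call a model."""
    stop = {"i", "want", "a", "the", "for", "how", "do", "make", "to", "recipe"}
    return [w for w in user_text.lower().split() if w not in stop]

def search(terms: list[str]) -> list[str]:
    """Plain keyword search; results are returned verbatim, not paraphrased."""
    scored = []
    for doc_id, text in CORPUS.items():
        score = sum(term in text.lower() for term in terms)
        if score:
            scored.append((score, doc_id))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

print(search(rewrite_query("How do I make a carbonara recipe")))
# → ['classic-carbonara']
```

The point of the design: because the LLM only shapes the query, the answer the user reads is still the original human-authored document, so trust in the result isn't diluted by generated prose.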
The farther you go with RAGs, in my experience, the more they become an exercise in designing a good search engine, because garbage results from the retrieval stage always lead to garbage output from the LLM.
> The farther you go with RAGs, in my experience, the more they become an exercise in designing a good search engine
From what I've seen from internal corporate RAG efforts, that often seems to be the whole point of the exercise:
Everyone has always wanted to break up knowledge silos and create a large, properly semantically searchable knowledge base containing all the knowledge a corporation holds.
Management doesn't understand what benefits that brings and doesn't want to break up tribal office politics, but they're encouraged by investors and golf buddies to spend money on the latest hype.
So you tell management, "hey, we need to spend a little bit of time on a semantic knowledge base for RAG AI, and btw this needs access to all silos to work," and make the actual LLM an afterthought that the intern gets to play with.