Are there any effective ways to add extra knowledge to an LLM, ways that are more than just demos or proofs of concept?
For example, could there be a site like HN with ten thousand contributors where the contributions are changes to an LLM rather than posts and comments?
One issue is that if contribution A contradicts contribution B, then on HN the contradiction presents no problem (i.e., two HN comments can and often do contradict each other just fine) whereas AFAICT the LLM will need to resolve the contradiction somehow to give coherent answers on the subject matter of the contributions A and B. Then again I suppose the LLM's answer could take the form, "opinions on [subject] vary, with some maintaining that . . . whereas others claim that . . ."
This is a solved problem. The answer is to add extra relevant information to the context as part of answering the user's prompt.
This is sometimes called RAG, for Retrieval Augmented Generation.
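In its simplest form that just means retrieving a few relevant passages and stuffing them into the prompt before calling the model. A minimal sketch in Python, assuming a hypothetical retrieve() helper (e.g. vector search over whatever corpus your contributors maintain):

    def build_prompt(question, retrieve):
        # Fetch the top-k passages most relevant to the question.
        passages = retrieve(question, k=5)
        context = "\n\n".join(passages)
        # Ask the model to stay grounded in the supplied context and to
        # surface disagreements rather than silently picking a side.
        return (
            "Answer the question using the context below. "
            "If sources in the context contradict each other, say so "
            "and present both sides.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}"
        )

Everything past the retrieval step is ordinary prompt construction; the model never has to be retrained to "know" the new material.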
These days the most convincing way to do this is via tool calls.
Provide your LLM harness with a tool for running searches, and tell it to use that tool any time it needs additional information.
A good "reasoning" LLM like GPT-5 or Claude 4 can even handle contradictory pieces of information - they can run additional searches if they get back confusing results and work towards a resolution, or present "both sides" to the user if they were unable to figure it out themselves.
Adding information to support a query at inference time carries no guarantee that the output will be limited to that information. Beyond my 15 years of natural language processing experience, my first-hand experience with that technique is non-deterministic output, because the math simply requires it to be. I know we mostly trade in anecdotes here, and the studies (perhaps more rigorous collections of anecdotes), like the BBC studies on summaries, show that these models cannot summarize, only synthesize. So I guess I mean to say that while I can't rule out that it may be possible some day, or even that it works for some people in some contexts most of the time, there is no study to point to that shows anything resembling a "solution." Perhaps you meant something like "there is a prevailing technique" rather than that it is a solved problem.
Maybe "solved problem" is an overly strong statement here. I was responding to:
> Are there any effective ways to add extra knowledge to an LLM, ways that are more than just demos or proofs of concept?
Adding to the context is certainly "effective" and more than just a proof-of-concept/demo. There are many production systems out there now using context-filling tools, most notably GPT-5 with search itself.
I do think it's only recently (this year) that models got reliable enough at this to be useful. For me, o3 was the first model that seemed strong enough at selecting and executing search tools for this trick to really shine.
Since this is an anecdote, it is unfalsifiable and can't support these softened claims either. The endless possibilities of incorrect contexts inferred from merely incomplete or adjacent contexts would prevent managing information quantity versus quality. I'll offer my own unfalsifiable anecdote, though: all of this is just naming a new rug to sweep the problems under, and it feels to me like the kind of problem that, if we knew how to solve it, we wouldn't use these models at all.
One mistake people make is preferring to close questions immediately. One should instead leave them all open until a situation arises where your actions [unavoidably] depend on "knowing" the answer.