I don't think so. A lot of useful specialized problems are just patterns. Imagine if your IDE could take five examples of matching strings and produce a regex you could count on to work. It doesn't need to know the capital of Togo, metabolic pathways of the eukaryotic cell, or human psychology.
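A minimal sketch of what such example-to-regex inference could look like (my own toy illustration, not any real tool; `infer_regex` and its generalization rules are assumptions):

```python
import re

def infer_regex(examples):
    """Guess a regex from example strings by generalizing each
    character to a class and collapsing equal-length runs.
    Assumes all examples share the same character-class shape."""
    def char_class(c):
        if c.isdigit():
            return r"\d"
        if c.isalpha():
            return r"[A-Za-z]"
        return re.escape(c)

    # Build the per-position class sequence for every example,
    # then check they all agree on shape.
    shapes = [[char_class(c) for c in ex] for ex in examples]
    if len({tuple(s) for s in shapes}) != 1:
        raise ValueError("examples do not share a common shape")

    # Collapse runs of the same class into {n} quantifiers.
    pattern, shape, i = "", shapes[0], 0
    while i < len(shape):
        j = i
        while j < len(shape) and shape[j] == shape[i]:
            j += 1
        run = j - i
        pattern += shape[i] + (f"{{{run}}}" if run > 1 else "")
        i = j
    return pattern

pat = infer_regex(["2024-01-31", "1999-12-25"])
print(pat)  # -> \d{4}\-\d{2}\-\d{2}
assert re.fullmatch(pat, "2077-07-07")
```

This only handles fixed-shape examples, of course; the point is just that a narrow pattern-induction tool needs no world knowledge at all.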
For that matter, if it had no pre-training, it could generalize to any new programming languages, libraries, and entire tasks. You could use it to analyze the grammar of a dying African language, write stories in the style of Hemingway, or diagnose cancer from patient data. In all of these, there are only so many samples to fit to.
Of course, none of us have exhaustive knowledge. I don't know the capital of Togo.
But I do have enough knowledge to know what an IDE is and where it sits in the technology stack; I know what a string is and everything it relies on, and so on. There's a huge body of knowledge required to even begin approaching the problem. If you posted that challenge to an intelligent person from 2000 years ago, they would just stare at you blankly. It doesn't matter how intelligent they are; they have no context to understand anything about the task.
> If you posted that challenge to an intelligent person from 2000 years ago, they would just stare at you blankly.
Depending on how you pose it. If I give you a long enough series of ordered cards, you'll on some basic level begin to understand their spatiotemporal dynamics. You'll get the intuition that there's a stack of heads scanning the input, moving forward each turn, either growing the match, falling back, or aborting. If we're not constrained to matrices, I can draw you a state diagram, which has much clearer immediate metaphors than colored squares.
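That stack-of-heads picture maps neatly onto a toy backtracking matcher. A minimal sketch (my own illustration, supporting only literals, `.`, and `*`):

```python
def match_here(pattern, text):
    """Recursive backtracking matcher for a tiny regex subset:
    literals, '.' (any char), and 'x*' (zero or more x).
    Each recursive call is a 'head' that either advances (grows
    the match), falls back to try a shorter option, or aborts."""
    if not pattern:
        return True                      # pattern exhausted: match
    if len(pattern) >= 2 and pattern[1] == "*":
        # Advance greedily over the starred atom, then fall back
        # one character at a time until some head succeeds.
        i = 0
        while i < len(text) and pattern[0] in (text[i], "."):
            i += 1
        while i >= 0:
            if match_here(pattern[2:], text[i:]):
                return True              # this head succeeded
            i -= 1                       # fall back, retry shorter
        return False                     # all heads aborted
    if text and pattern[0] in (text[0], "."):
        return match_here(pattern[1:], text[1:])  # grow the match
    return False                         # mismatch: abort this head

print(match_here("ab*c", "abbbc"))  # True
print(match_here("ab*c", "ac"))     # True
print(match_here("ab*c", "abd"))    # False
```

Nothing here depends on knowing anything beyond the mechanics of the cards themselves.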
Do these explanations correspond to some priors in human cognition? I suppose. But I don't think you strictly need them for effective few-shot learning. My main point is that learning itself is a skill, which generalist LLMs do possess, but only as one of their competencies.
Well, Dr. Michael Levin would agree with you, in the sense that he ascribes intelligence to any system that can accomplish a goal through multiple pathways. The single-celled Lacrymaria, for instance, lacking a brain or nervous system, can still navigate its environment to find food and fulfill its metabolic needs.
However, I assumed that what we're talking about when we discuss AGI is what we'd expect a human to be able to accomplish in the world at our scale. The examples of learning without knowledge you've given describe, to my mind at least, a lower level of intelligence that doesn't really approach human-level AGI.
> A lot of useful specialized problems are just patterns.

> It doesn't need to know the capital of Togo, metabolic pathways of the eukaryotic cell, or human psychology.
What if knowing those things distills down to a pattern that matches a pattern in your code, and vice versa? There's a pattern in everything, so know everything and be ready to pattern-match.
If you just look at object-oriented programming, you can easily see how knowing a lot translates into abstract concepts. There's no reason those concepts can't be translated bidirectionally.
> For that matter, if it had no pre-training, it means it can generalize to any new programming languages, libraries, and entire tasks. You can use it to analyze the grammar of a dying African language, write stories in the style of Hemingway, and diagnose cancer on patient data. In all of these, there are only so many samples to fit on.