Have they been proven wrong? What is the roadmap to get to general purpose AI, and what is the proof that we are close?
The set of domains where AI systems can beat domain experts is certainly growing, but I don't see how you can get from that to a claim of general AI. I certainly don't see how you can get there from the recent LLMs in particular, which can't beat even domain intermediates at anything.
The transformer model is one more incremental improvement that made the language problem tractable. This has only captured the public interest because the language problem is so much more flashy and easy to understand than something like protein folding.
AutoGPT-style bots, i.e. agent systems that autonomously explore and act on tasks, are still obviously nascent. But as someone who leverages LLMs for nearly all of my work, I would certainly say they are general purpose.
I would be wrong to say they are anywhere near average human capability in most areas, but I do worry that my estimation is off and that we're closer to the concave part of the S curve than the convex. It may just take a couple more breakthroughs in model/dataset design.
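The convex/concave distinction above can be made concrete: on a standard logistic (S) curve, the convex region before the inflection point is where progress is still accelerating, and the concave region after it is where returns diminish. A minimal sketch (the function and point choices are illustrative, not from the original comment):

```python
import math

def logistic(x):
    """Standard logistic (S) curve: slow start, rapid middle, saturation."""
    return 1.0 / (1.0 + math.exp(-x))

def second_derivative(f, x, h=1e-4):
    """Numerical second derivative; its sign tells us the local curvature."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

# Before the inflection point (x = 0) the curve is convex: gains accelerate.
print(second_derivative(logistic, -2.0) > 0)  # True
# After the inflection point the curve is concave: gains slow down.
print(second_derivative(logistic, 2.0) < 0)   # True
```

The debate in this thread is essentially about which side of that inflection point current AI capability sits on.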
So these bots, as they currently exist, are already suitable for:
* Predicting the physical geometry of proteins
* Playing Starcraft
* Playing Go
* Trading stocks
* Controlling motors to make a bipedal robot walk
* Analysing data from particle accelerators
* Analysing data from telescopes
* Detecting cancer in medical images
These are all things that AI can do today. Are you suggesting that we are near a paradigm shift where a single AI system will be competent in all of these domains? And that it will further be similarly competent in novel domains for which it has not been designed?
>The transformer model is one more incremental improvement that made the language problem tractable. This has only captured the public interest because the language problem is so much more flashy and easy to understand than something like protein folding.
Language is the process of thinking rendered into a tangible medium. If you master language, you master cognition. Many researchers recognized long ago that NLP would be a core component of AGI, if not the core.