He had AGI. He still has it. You can have it too, for a $20/month subscription from OpenAI or Anthropic.
It just turns out that AGI (artificial general intelligence, which ChatGPT objectively is), is not a singularity nor even a tipping point towards causing one.
How is ChatGPT "objectively" AGI? You don't think there are common intellectual tasks humans are better at than ChatGPT? Can ChatGPT do math better than a human without specific prompting to use a calculator (function calls)?
AGI means what it says on the tin: Artificial General Intelligence. That's as opposed to narrow AI, which covers most pre-transformer AI research and can only handle specific categories of tasks. ChatGPT is a general-purpose NLP interface to a model that is able to solve any category of problem that can be expressed through human language, which as far as we know is full generality.
What you are getting confused by is that there is a segment of people who have purposefully conflated ASI (artificial SUPER intelligence) with AGI, due to their own beliefs about the hard-takeoff potential of AGI. This notably includes the founders of OpenAI and Anthropic, and early backers of OpenAI prior to the release of ChatGPT. Through their success they've set the narrative about AGI, but they are doing so by co-opting and redefining a term that has a quarter century of history behind it.
The core mistake started with Bostrom, although to his credit he is very careful to distinguish ASI from AGI. But he argued that once you had AGI (the ability to apply intelligence to solving any problem) you would rapidly get ASI through a process of rapid iterative design. Yudkowsky took those ideas a step further in his FOOM debate with Robin Hanson in 2008, in which he argued that the AGI -> ASI transition would be short, measured in months, weeks, days, or even mere hours. In 2022, six months prior to the release of ChatGPT, Yudkowsky had a public meltdown in which he asserted that there was no time left and we're all going to die, after having been privately shown an early version of ChatGPT.
It's almost two years since the release of ChatGPT, which is absolutely AGI by the definition of the people who coined the term and have run the AGI Conference series for ~25 years, or by the definition of Bostrom or Yudkowsky for that matter. ChatGPT is general intelligence, and it is artificial. Two years of AGI, yet we are still here and there is no ASI in sight. Yudkowsky was wrong.
Yet OpenAI, which was founded by a bunch of Yudkowsky acolytes, is still out there saying that "AGI" will bring about the singularity, because that was the core assumption underlying their founding and the reason for the investments they've received. They can get away with this without changing their message because they've subtly redefined "AGI" to mean ASI, and are hoping you don't notice the sleight of hand.
I never understood AGI to mean "better than humans". A lot of people assumed it could easily be made so, by simply throwing more silicon at it until it was, but being smarter than humans isn't what makes it "AGI".
Put it another way: suppose we create a computer program that is only as smart as a bottom-10% human (I'm not saying anyone has). You can't reasonably say that is smarter than humans generally. But are you comfortable saying that bottom-10% humans lack any general intelligence at all? General intelligence doesn't mean extreme intelligence, and so Artificial General Intelligence doesn't either. You might say that the term is more than the sum of its parts, which is fair, but I still dispute that superhuman ability was ever part of the definition of AGI. It was just a (thus far) failed prediction about AGI.
By that token, you could find non-human animals that are smarter than some percentage of humanity at a few tasks. Are those animals AGI?
Now you could find software that is smarter than some percentage of humanity at a few tasks. Is that software AGI? Is AlphaGo AGI? Is Google DeepMind's game-playing AI AGI?
My definition, and the one I found on Wikipedia, is "AGI […] matches or surpasses human cognitive capabilities across a wide range of cognitive tasks." Being better than the bottom 10% of humans on some tasks doesn't really qualify to me.