Depending on who you ask, we will never have it, or we have it already. We never had a perfect definition of AGI, but I feel like the definition has now become so murky as to be useless.
I would define AGI as an AI embodied in a humanoid robot that can do all the physical and cognitive tasks a regular human can. For example, I would assume 90% of humans could be an Uber driver, so the AGI system should be capable of being an Uber driver. But perhaps only 5% can be a theoretical physicist or a professional tennis player, so I wouldn't expect my AGI to be capable of that.
I kind of wonder if we'll just evolve past the fascination with it. Maybe the problems we need to solve will become so much easier through ML algorithms that we'll get past it all without the "G" in AGI, become distracted with new things, and leave behind being fascinated with computers and chatbots in general.
and I know this sounds wild to some, even unthinkable, but I can't imagine people sitting around in 2000 years talking about "computers".
We might even get past the idea of needing "computing" as we find new ways of achieving our goals. I mean who knows, maybe we'll learn to augment our own intelligence through biohacking?
Right now it feels like AI is the be-all and end-all of everything, but I like to think about what's over the horizon.
Someday people will realize that "intelligent automaton" is an oxymoron. Something designed and built will always operate on rails devised by its creator. Intelligence means the ability to be the creator.
Interestingly, genesis myths usually say god(s) made humans in their image. It's a metaphor for reproduction. We create intelligent beings all the time.
True, though clearing up some of the murkiness is part of what forecasting questions like these are for, with their explicitly spelled-out resolution criteria.