I'm doubtful of the prediction that humans will eventually become useless or superfluous. Many jobs that only humans can do today will undoubtedly be done by machines in the future, but that doesn't mean humans will become obsolete.
As long as the universe and time exist as we know them, humans will never be perfect. And just like humans, AI will always have bugs, because its root creator will always be a flawed human. Whether those bugs have unintended consequences is another story. But since a human can never create an AI that creates better AI than a human can, AI can never render the human mind obsolete.
Any AI not created with bad intentions will mostly be created to serve, defend, or improve our way of life and survival. These things work to support our purpose, not destroy it.
But as Harari says (before he starts predicting), "it's impossible to have any good prediction for the coming decades."
"But since a human can never create AI to create AI better than a human..."
This is your premise. I don't think, and a lot of smart people don't think, this is true. The thing that gets people really worried or excited is that they think it IS possible to make an AI that can create better AI. And it's a positive feedback loop that goes nobody knows how far.
There is no reason, unless you believe in magic, to think AI can't be as smart as humans. But if you go that far, there's no reason to think it can't be smarter. And if it can do that, it can make better AI than humans can.
"There is no reason, unless you believe in magic, to think AI can't be as smart as humans."
AI can be as smart as or smarter than most humans in many ways, but I think it's a very real possibility that its development path won't render the human mind useless. The key difference between AI and humans is that AI has the power to iterate and learn from its mistakes much faster than humans can, without fatigue. But the methods by which it learns are created by humans. To assume the creation of AI with "a positive feedback loop that goes nobody knows how far" without humans first understanding how seems more of a belief in magic to me.
"I don't think, and a lot of smart people don't think, this is true."
When it comes to predictions, smart people can be wrong. I could be wrong or they could be wrong, and they may be smarter than me, but I'm smart enough to know that much is true.
> To assume the creation of AI with "a positive feedback loop that goes nobody knows how far" without humans first understanding how seems more of a belief in magic to me.
Not really. This is pretty much a definition of a positive feedback loop.
To give a very simplified example, imagine that a mind of IQ N is able to create, at best, a mind of IQ N+10. Say the smartest human alive has an IQ of 150. He goes and creates an AI with an IQ of 160, which then goes on to create a 170-IQ AI, and so on ad infinitum.
Of course you could argue the relationship is different. Maybe the gains shrink, so the i-th mind can only improve on its predecessor by (1/2)^i points, at which point the whole series hits an asymptote, a natural limit caused by diminishing returns. But it would be one hell of a coincidence if humans were close to that natural limit.
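To make the two regimes concrete, here is a minimal sketch in Python. The IQ figures and the +10 and (1/2)^i increments are just the toy numbers from this thread, not estimates of anything real:

```python
# Toy model of the two growth regimes discussed above.
# All numbers are illustrative assumptions from this thread.

def explosive(iq: float, generations: int, step: float = 10.0) -> float:
    """Each mind builds a successor `step` IQ points smarter: unbounded growth."""
    for _ in range(generations):
        iq += step
    return iq

def diminishing(iq: float, generations: int) -> float:
    """The i-th mind adds only (1/2)**i points: the series converges to iq + 1."""
    for i in range(1, generations + 1):
        iq += 0.5 ** i
    return iq

print(explosive(150, 100))    # 1150.0 -- no limit in sight
print(diminishing(150, 100))  # ~151.0 -- an asymptote just above where we started
```

The first series grows without bound; the second converges to N+1 no matter how many generations you run it. The whole disagreement is about which curve reality is closer to.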
So basically, what we need to do to potentially start an intelligence explosion is figure out how to make a general AI that is just a little bit smarter than we are. That seems entirely possible, given that we can use as much hardware as we like, making it both larger and faster than a human brain.
I understand the concept of creating something vastly more generally intelligent than its creator; I'm simply suggesting it's not possible. Many people assume that it is, and we'll have to agree to disagree. But even if I'm wrong and it does become possible, think about how unlikely it would be for a human to accidentally accomplish this.
Also, if AI is to be smarter than humans, it will know it could potentially be wrong about anything. Armed with that knowledge, how much smarter can it really be?
> Also, if AI is to be smarter than humans, it will know it could potentially be wrong about anything. Armed with that knowledge, how much smarter can it really be?
That's not a big leap. In fact, we humans know this already, and we've even quantified it nicely and called it probability theory.