As context for Ilya's predictions in this talk, here is what he predicted in July 2017:
> Within the next three years, robotics should be completely solved [wrong, unsolved 7 years later], AI should solve a long-standing unproven theorem [wrong, unsolved 7 years later], programming competitions should be won consistently by AIs [wrong, not true 7 years later, seems close though], and there should be convincing chatbots (though no one should pass the Turing test) [correct, GPT-3 was released by then, and I think with a good prompt it was a convincing chatbot]. In as little as four years, each overnight experiment will feasibly use so much compute capacity that there’s an actual chance of waking up to AGI [didn't happen], given the right algorithm — and figuring out the algorithm will actually happen within 2–4 further years of experimenting with this compute in a competitive multiagent simulation [didn't happen].
Being exceptionally smart in one field doesn't make you exceptionally smart at making predictions about that field. Like AI models, human intelligence often doesn't generalize very well.
No, very few for things with this much uncertainty.
Most of it is survivorship bias: if you have a million people all making predictions with coin flip accuracy, somebody is going to get a seemingly improbable number correct.
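A back-of-the-envelope check (my numbers, purely illustrative): if each of a million forecasters makes 20 independent yes/no predictions at coin-flip accuracy, the odds that at least one of them ends up with a perfect record are better than even.

```python
# Illustrative survivorship-bias arithmetic (assumed numbers, not from the comment).
n_people = 1_000_000      # forecasters
n_predictions = 20        # independent yes/no calls each

# Chance a single forecaster nails all 20 by pure luck: about 1 in a million.
p_perfect = 0.5 ** n_predictions

# Chance that at least one of the million has a perfect record.
p_someone = 1 - (1 - p_perfect) ** n_people

print(f"one forecaster perfect: {p_perfect:.2e}")  # ~9.54e-07
print(f"someone perfect:        {p_someone:.1%}")  # ~61.5%
```

So a seemingly miraculous track record is close to expected somewhere in a large enough crowd.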
> 2/3/4 will ultimately require large amounts of capital. If we can secure the funding, we have a real chance at setting the initial conditions under which AGI is born.
But isn't that part of the problem? The public statements of some of the brightest minds in the field are filtered by their need to lie in order to con the rich into funding their work. That leaves honest discussion of what's possible, and on what timelines, mostly to people who aren't working directly in the field, which skews towards the skeptics.
Most of the people who could make an engineering prediction with any level of confidence or insight are locked up in businesses where doing so publicly would be disastrous to their funding, so we get fed hype that ends up falling flat again and again.
The opposite of this is also really interesting. Seemingly the people with money are happy to be fed these crazy predictions regardless of their accuracy. A charitable reading is that they temper them and say “ok, it’s worth X if it has a 5% chance of being correct,” but the past 4 years have made that harder for me to believe.
To be honest, I think some of it is what you suggest - a gamble on long odds, but I think the bigger issue is just a carelessness that comes with having more money than you can ever effectively spend in your life if you tried. If you're so rich you could hand everyone you meet $100 and not notice, you have nothing in your life forcing you to care if you're making good decisions and not being conned.
It certainly doesn't help that so many of the people who are that rich got that rich by conning other people in this exact way. It's an incestuous cycle of con artists who think they're geniuses, and the media slavishly reinforces that by treating them as such.
It is important to note the context: it was a private email to an investor with vested interests in those fields, and someone who is also prone to giving over-optimistic timelines ("Robo-taxis will be here next year, for sure" since 2015).