If you had witnessed the Wright brothers' first flight, you would have had a hard time predicting that within 60 years we would be landing on the moon. And if you were playing computer games in the 1980s in a slow DOS box, you could not have even imagined modern GPUs and modern games. Nor could we have predicted how far modern CPUs have come since the 1960s.
LLMs are a very new technology and we can't predict where they'll be in 50 years, but I, for one, am optimistic. Mostly because it is very much not AGI, so its impact will be lower. I don't think it will be more revolutionary than transistors or personal computers, but it will be impactful for sure. There's a large cost of entry right now, but that has almost always been the case for bleeding-edge technologies. And I'm not too worried about these private companies having total control over it in the long run: so far they have no moat beyond the $$$ they spend on training. In this case it really is just math.
I think market forces will drive a breakthrough in training at some point, either mathematical (better algorithms) or technological (better training hardware). And that will reduce the moat of these companies and open the space up even more.
When we look at the history of technology, most new inventions take a few decades before they're ready for the mass market. Television was demonstrated in the 1920s, but it wasn't ready for the mass market until the 1950s. If I remember correctly, the light bulb and the automobile also took a few decades to reach the mass market.
I think a lot of people are jaded by how fast technology changed between 1980 and 2010. But a lot of that was because the technology was easy to learn, understand, and manipulate.
I suspect that AI will take a lot longer to evolve and perfect than the World Wide Web and smartphones.
Yeah, the rate of change since the 1960s has been incredible. I like to think of that as being a direct consequence of inventing an infinitely reconfigurable general purpose calculator :)
I don’t know about LLMs… it might hit a performance peak and stay there, same as CPUs have been 3+ GHz for the past 10 years. Or there might come a breakthrough that will make them incredibly better, or obsolete them. We don’t know! And I find that exciting.
The personal computer & the internet clicked for me because I saw them as personal enablers, as endlessly flexible systems that we could gain mastery over & shape as we might.
But with AI? Your comparison to going to the moon feels apt. We're deep into the age of the hyperscalers, but AI has done far more to fill me with dread & make me think maybe Watson was right when he said:
> I think there is a world market for maybe five computers
This has none of the appeal of computing that drew me in & for decades fed my excitement.
As for breakthroughs, I have my doubts; there seems to be a great conflagration of material & energy being poured in. Maybe we can eke out some orders of magnitude of efficiency & cost, but those gains will be mostly realized & used by the existing winners, and the scope & scale will only increase proportionately. Humanity will never catch up to the hyper-ai-ists.
> The personal computer & the internet clicked for me because I saw them as personal enablers, as endlessly flexible systems that we could gain mastery over & shape as we might.
I feel the same way. Developments in computing have evolved and improved incrementally until now. Networks and processors have gotten faster, languages more expressive and safer, etc., but it’s all been built on what preceded it. Gen AI is new-new in general-purpose computing: the first truly novel concept to arrive in my nearly 30 years in the field.
When I’m working in Python, I can “peer down the well” past the runtime, OS and machine code down to the transistors. I may not understand everything about each layer but I know that each is understandable. I have stable and useful abstractions for each layer that I use to benefit my work at the top level.
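To make that concrete, even the standard library hands you the first rung of the ladder down (a toy sketch of my own, nothing specific to any real project):

    import dis

    def add(a, b):
        return a + b

    # One layer down: the bytecode that the CPython runtime
    # actually executes for this function.
    dis.dis(add)

    # A peek further down: in CPython, id() happens to be the object's
    # memory address, a crack in the abstraction that points at the machine.
    print(hex(id(add)))

Each layer below that (the interpreter loop, the OS, the silicon) has its own equivalent tools.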
With Gen AI you can’t peer down the well. Just a couple of feet down there’s nothing but pitch black.
> and if you were playing computer games in the 1980s in a slow DOS box you could not have even imagined modern GPUs and modern games.
I think this bit is really not the case, FWIW.
If you look at what computer magazines were like in the 1980s, it's very clear that people were already imagining what photorealism might look like, starting from the very earliest first-person-perspective 3D games (which date back to the early 1980s, if not earlier).
Not really a fair comparison, but children learn to speak with barely any training data compared to LLMs. I’m hopeful a large training corpus will not be so necessary in the future.
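Some rough numbers to put a scale on that (every figure below is a back-of-envelope assumption on my part, not measured data):

    # Back-of-envelope only; all figures are rough assumptions.
    words_per_day = 15_000      # commonly cited ballpark for words a child hears daily
    years = 20                  # birth through roughly college age
    child_words = words_per_day * 365 * years   # ~1.1e8 words

    llm_tokens = 10 ** 13       # order-of-magnitude guess for a frontier training corpus

    print(f"child: ~{child_words:.1e} words heard")
    print(f"LLM:   ~{llm_tokens:.0e} tokens trained on")
    print(f"gap:   ~{llm_tokens / child_words:.0e}x")   # roughly five orders of magnitude

Even if those guesses are off by 10x in either direction, the gap stays enormous.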
I suspect that we won't have the computing power or neurological understanding to create such an AI anytime soon. Even if human thought can be reduced to networks of chemical-filled membranes, the timescale and population involved in natural selection, and the resources consumed to live and reproduce are immense. I think we would need to find a far more efficient scheme to produce emergent intelligence.
I would argue that by the time a child learns to speak at the level of an LLM (college), they have been exposed to an enormous amount of training data through all of their sensory inputs, just as a result of daily living and interactions.