I have; nothing in it had any substance whatsoever, reinforcing my point that it’s just a cover piece for whatever boardroom argument is to be made for or against AI (or, to quote the article, the “digital future”).
The dominant shift in the internet of the last ten years was monopoly consolidation and a dramatic reduction in access to information, since almost all of it is now behind their walls.
Mobile has extended these walls with mafia-style racketeering app stores, atrociously designed gambling-addiction games, information stealing, and outright fraud.
The dream of the internet died completely about five years ago.
This overpaid Google shill isn't worth listening to. Google had about a decade of plausible deniability on "do no evil". They stopped the charade, and we know what they are. Like sociopaths always do, they tell you who they are, and you should listen.
AI is an even more intrusive and dangerous penetration into, and control of, our lives, and a massive power grab by these tools of the very, very elite few.
The guy is right: the internet WAS about access to the evolving body of human knowledge, WAS about sci-fi level capabilities and conveniences, and WAS a miracle.
WAS.
AI is the rocket fuel that accelerates the vector from that momentary point of near-idealism to the dystopian corporate control we are currently mostly in, and that, as a DOUBLE BONUS, serves up fine-grained total information awareness to all the state actors of the world, likely bringing a worldwide destruction of the last semblances of democracy.
Just drop out. The old dream is still alive, and the earlier you quit caring what SV hype-cycle dorks and TikTok brainslop fiends think and/or do, the earlier you get to live that dream. In absolute terms, there are probably more people who want what you do now than at any point in the past. You're just butting up against how little the average person cares, but that's their cross to bear. Don't let it ruin what you have.
As described here, the 5th epoch seems to be an answering machine for users, with all compute owned by cloud vendors. All efficiency gains will accrue to the cloud companies, and users will pay less (due to competition) or more (due to rising costs) depending on which way the wind blows.
It's amazing how every few years we get another article claiming Von Neumann architecture can't cut it, we're not measuring computers right, and we're going to have to give up general-purpose computing any day now.
Sometimes they change the order, but it's always the same claims, and it's always the 'unique' circumstances of (insert date here) that make everything different this time. I wonder why they bother?
I wrote that before the AI revolution, and I honestly don't know what's going to happen. The future could go one of two ways:
1. AI just becomes a smarter compiler. We programmers still "write" code, though maybe at a higher level. AI helps us deal with complexity, but we're still the ones designing UX, architecture, etc.
2. AI becomes the front-end layer for everything, so that users only interact with an AI and that AI carries out tasks as needed. All other software is just an implementation detail (the way we software people think about hardware).
Most likely we will start with #1 and evolve into #2.
I'm not so sure about either of these, but especially #2.
Engineers need to control the behavior of their software very precisely. The fantasy of firing all your engineers and just letting product folks write something extremely high-level like "Write me an app for reviewing movies" and letting a language model fill in all the details seems like a nonstarter.
First, the details it imagines likely won't be what you actually want, so the prompts will have to get more and more precise. So precise, in fact, that it's back to "programming" instead of something like actual natural language.
Second, this seems unlikely to provide stability over time. Trying to keep your FE, BE, and storage compatible between one prompt and the next, or one language model and the next, will likely involve specifying those details fully as well.
Natural language just isn't precise enough for this. There is way too much inference involved. This situation is fundamentally different from abstractions that allow one group of engineers to encapsulate implementation details within a system.
Today, a customer tells a programmer what to build using natural language. The customer doesn't write a precise spec; instead, they iterate with the programmer: "no, add a button here"; "yes, but make sure you can decline an order."
Imagine you're a non-technical person and you communicate with a programmer only via email. You could still get your product built.
Is it really hard to imagine that an LLM could (someday) be on the other side of the email?
As for stability, I think that's something that LLMs have an advantage in. Imagine the user and the LLM create a set of regression tests. The LLM can patiently run the tests on every change.
The goal isn't perfection (LLM reads my mind and instantly creates a program). The goal is better/faster/cheaper than dealing with a human programmer.
Believe me, I know LLMs today are not there yet. The question is how long will they take to get there? My guess is 10-20 years, and I'm probably a pessimist.
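To make that loop concrete, here is a minimal sketch (Python, assuming a pytest suite under ./tests and a hypothetical propose_patch function standing in for the model call; none of these names come from any real product):

    import subprocess

    def run_regression_suite():
        """Run the agreed-upon regression tests; assumes a pytest suite under ./tests."""
        result = subprocess.run(["pytest", "tests", "-q"], capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def propose_patch(request, failing_output=None):
        """Hypothetical stand-in for the LLM call that edits the codebase.

        A real version would send the request (plus any failing test output)
        to a model and apply the diff it returns.
        """
        raise NotImplementedError("wire this up to a model of your choice")

    def iterate(request, max_attempts=5):
        """Ask for a change, rerun the regression suite, retry until it passes."""
        failing_output = None
        for _ in range(max_attempts):
            propose_patch(request, failing_output)
            passed, output = run_regression_suite()
            if passed:
                return True               # behaviour preserved, change accepted
            failing_output = output       # feed the failures back on the next attempt
        return False

The point is only that the checking side of the loop is cheap and mechanical; the human stays in the loop to decide whether the tests themselves still describe what they want.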
i see what you mean. but I wonder if this is because we have difficulty imagining a totally possible thing. i mean, surely at one point it must have seemed like a nonstarter to have fewer machine-code engineers and more script monkeys writing something extremely high level. or perhaps firing system software engineers and just letting FE devs write something extremely high level must have seemed like a nonstarter at one point.
but here we are.
> natural language just isn't precise enough for this
i often make the mistake of thinking business needs something more precise than what it is actually asking for (or what it truly needs). of course precise dollars and cents truly matter in some situations, but in other situations a crud app or report has a great deal of flexibility in its requirements. not willing to die on this hill, but i wonder if precise software and precise computer modelling exist mostly because the tools demand precision. and if so, a less precise interface tool may well lead to less precise inputs that still meet the underlying business requirements.
A waypoint, not an end goal, because surely someone will derive sellable business value from combining things in a world where AI is the near-universal front layer.
> Complexity will only increase in the years ahead, essentially requiring new declarative programming models focused on intent, the user, and business logic.
This line of thought had never occurred to me before: the idea that complexity will become so unmanageable that we’ll need to rewrite the computing stack in a more easily debuggable, functional way. Modern computing seems simple, just the kernel and your application on top of it, but the existence of solutions like Antithesis suggests something different.
It's not distributed if it's all owned by the same corporation. If anything, distributed computing has gone backwards over the last decade. This switch to centralization was mostly forced by people using low functionality dumb terminals (smart phones) that users don't control and that literally can't hold a TCP connection open.
> This switch to centralization was mostly forced by people using low functionality dumb terminals (smart phones) that users don't control and that literally can't hold a TCP connection open.
Not being able to hold a TCP connection open is an implementation choice of the OS developer (and some choices at the network level too). At WhatsApp, we would find Nokia Symbian devices with 45-day-old TCP connections to our servers anytime we looked. Nokia S40 isn't a smart phone platform, but it could hold long connections too. Old versions of Android could do it, although old versions of Android had trouble switching between wifi and cellular. You can probably do it on current Android if you turn off all the Doze or whatever stuff (which isn't always easy to turn off).
There's an efficiency argument for having a single long TCP connection for platform push, rather than one per app that needs it, of course. But in that case, platform push better be reliable.
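To illustrate that it really is a choice at the software level: keeping a connection up is mostly a matter of socket options plus the platform agreeing not to kill your process. A minimal sketch (Python, assuming Linux-style keepalive knobs; the timing values are made up for illustration):

    import socket

    def open_push_connection(host, port):
        """Open a TCP connection tuned to stay up for a long time."""
        sock = socket.create_connection((host, port))
        # Ask the kernel to probe the idle connection so it isn't silently dropped.
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        # Linux-specific knobs: start probing after 10 idle minutes,
        # probe every 60 seconds, give up after 5 failed probes.
        if hasattr(socket, "TCP_KEEPIDLE"):
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 600)
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
        return sock

Whether the connection actually survives then comes down to the OS's background policies and the radio, not to TCP itself.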
I didn't mean it was infeasible. I meant that holding TCP connections open is avoided by choice, to save battery power, because transmitting wirelessly is a power drain. Also, on any wireless link there's random round-trip time and therefore exponential backoff; the longer you hold a connection open, the more likely you are to hit it. Almost all smart phones and their software are gimped in this way; the outliers you mention are interesting but not significant.
I think that the engineers who began their careers in the first epoch are going to turn out to be some of the most productive and technically skilled, having gone through every epoch, each of which, frankly, grew easier to deal with.
They call that point out, basically, but I had no idea it'd be a 100x improvement.
> While we cannot predict the breakthroughs that will be delivered in this fifth epoch of computing, we do know that each previous epoch has been characterized by a factor of 100x improvement in scale, efficiency, and cost-performance, all while improving security and reliability.
I'm worried about 5th-epoch engineers. On one side there is a massive number of smart, very young Rust programmers in all the Rust communities, just as an example, and on the other a lot of people who don't know anything beyond a GUI operating system, have never owned a computer, and do everything by phone.
I've been happily teaching linux quite a bit lately.
I've also recently convinced myself that the 5th Epoch is a coordinated attempt by marketing professionals inside Silicon Valley companies to encourage a generation of programmers not to learn anything about computers or how to program them.
In the future the only thing you, the engineer, will do with a computer is type into a text area, hit the submit button, and be returned variations on strings of text that seem to say something important but upon careful consideration do not mean anything at all.
We can only hope the next epoch will bring about some kind of enlightenment that will herald a new era of blog posts that convey information across the Internet.
Is it really necessary to learn things in depth when AI can do it for you? e.g. the recent Gemini 1.5 result of learning a language just from a grammar book. If AI can do this, can't it be expected to excel trivially at lower-level stuff, which is more deterministic than a human language? After all, we don't learn / teach assembly as our first programming language.
Not as the first language, but as a mandatory part of any respectable CS curriculum. Not unlike a software engineer who doesn't know (and won't learn) assembly, writing code with an AI that produces output the human doesn't understand means that whole classes of bugs are un-debuggable. I have a little faith that AIs will become as reliable as modern compilers, but who knows what the future holds.
I've noticed the same. I'm convinced that the generation that is under 16 today will, on average, know a lot less about computers than kids did 20 years ago.
There have always been and will always be outliers who are super into anything. My point was that the kids who didn't really care about computers when I was in school still knew a lot more about computers than the kids today who don't really care about computers.
20 years from now your son is founder/CEO of a large gaming company and my son is a union worker raising a slogan outside its offices: "Free gaming credits are human rights!"
Unfortunately your son is uncommon, and for every kid who "counts in binary" there are dozens of kids in juvenile hall, on food stamps, with no job, on government handouts, with sub-100 IQs. Your son may generate untold wealth, and a portion of that will go to feeding the masses.
Eugh, this article is full of MBA-speak, not something I'd expect from a highly technical Google Fellow.