I think that's a good part of the issue. You have the computer that is doing stuff. And you have the software engineer that was hired to make it do the stuff. And the connection between them is the code. That's pretty much the simplistic picture that everyone has.
But the truth is that the way the computer works is alien, and anything useful becomes very complex. So we've come up with all those abstractions, embedded them in programming languages, and used those to create more abstractions trying to satisfy real-world constraints. It's an imaginary world which is very hard to depict to other people. It's not purely abstract like mathematics, nor is it fully physical like mechanics.
The issue with LLMs is that whatever they produce has a good chance of being distorted. At first glance it looks correct, but the more you add to it, the more visible the flaws become, until you're left with a Frankenstein monster.
But to your last point, this is why I think the worst fears I see from programmers (here and in real life) are unlikely to be a lasting problem. If you're right - and I think you are - then the direction things are headed as-is, with increasingly less sophisticated people relying increasingly more on AIs to build an increasingly large portion of software, is going to result in big messes of unworkable software. But if so, people are going to get wise to that and stop doing it. It won't be tenable for companies to go to market with "Frankenstein monsters" in the long term.
The key is to look through the tumultuous phase and figure out what it's gonna look like after that. Of course this is a very hard thing to predict! But here are the outcomes I personally put the most weight on:
1. AIs might really get good enough that none of us write code anymore, in the same way that it's quite rare to write assembly code now.
In this case, I think entrepreneurship or research will be the way to go. We'll be able to do so much more if software is truly easy to create!
2. We're still writing, editing, and debugging code artifacts, but with much better tools.
In this case, I think actually understanding how software works will be a very valuable skill: people who can knock out subtly broken software will be a dime a dozen, while getting things working well will be a differentiator.
Honestly I don't put much weight on the version of this where nobody is doing anything because AI is running everything. I recognize that lots of smart people disagree with me about this, but I remain skeptical.
> AIs might really get good enough that none of us write code anymore, in the same way that it's quite rare to write assembly code now.
I don't have much hope for that, because the move from assembly to higher-level programming languages came from finding patterns that are highly deterministic. It's the same with metaprogramming today. It's not so much about writing the code to solve a problem as about finding the hidden mechanism behind a common class of problems and solving that instead. Then it becomes easier to solve each problem inside the class. LLMs are not reliable for that.
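To make that concrete, here's a rough sketch in Go of what "solve the class instead of the problem" looks like (the Retry helper and the flaky call are illustrations I'm making up, not any particular real tooling): the repetitive pattern is captured once, deterministically, and each individual problem inside the class shrinks to a one-liner.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // Retry is a sketch of capturing the common pattern "call something
    // flaky and try again". The class of problems is solved once,
    // deterministically; each concrete problem then becomes a trivial call.
    func Retry[T any](attempts int, wait time.Duration, op func() (T, error)) (T, error) {
        var zero T
        var err error
        for i := 0; i < attempts; i++ {
            var v T
            if v, err = op(); err == nil {
                return v, nil
            }
            time.Sleep(wait)
        }
        return zero, fmt.Errorf("all %d attempts failed: %w", attempts, err)
    }

    func main() {
        calls := 0
        // A made-up flaky operation that succeeds on the third call.
        result, err := Retry(5, 10*time.Millisecond, func() (string, error) {
            calls++
            if calls < 3 {
                return "", errors.New("transient failure")
            }
            return "ok", nil
        })
        fmt.Println(result, err)
    }

The leverage comes from the abstraction being deterministic and reusable, which is exactly the property LLM output doesn't give you for free.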
> 2. We're still writing, editing, and debugging code artifacts, but with much better tools.
I'd put a lot more weight on that, but we already have a lot of tooling that we don't even use (or replicate across software ecosystems). I'd care much more about a nice debugger for Go than LLM tooling. Or a modern Smalltalk.
But as you point out, the issue is not tooling. It's understanding. And LLMs can't help with anything if you're not improving that.
I probably should have specified: I didn't list those in order of how much weight I put on them. I agree with you that I more heavily weight the one I wrote as #2.
I think you and I probably mostly agree on where things are heading, except that just inferring from your comment, I might be more bullish than you on how much AIs will help us develop those "much better tools".
> It's an imaginary world which is very hard to depict to other people. It's not purely abstract like mathematics, nor is it fully physical like mechanics.
This is one of the reasons I like the movie Hackers - the visualizations are terrible if you take it at face value, but if you think of it as a representation of what's going on inside their minds it works a whole lot better, especially compared to the lines-of-code-scrolling-past version usually shown in other movies/tv.