The safety part will probably be solved, turn out to be a non-issue, or simply be ignored, much like how GPT-3 was often seen as dangerous before ChatGPT was released. Some people who have only ever vibe coded are finding jobs today, ignoring safety entirely and lacking any notion of what it even means; they just copy-paste output from ChatGPT or an agentic IDE. To me that's already JIT with extra steps. Or companies have pivoted their software engineers to vibe coding most of the time, so they barely touch code anymore: JIT with extra steps again.
In a way he's making sense. If the "code" is the prompt, then the LLM's output is an intermediate artifact, like gcc's intermediate representations.
So why should we still need gcc?
The answer, of course, is that we need it because the LLM's output is shit 90% of the time, and debugging assembly or a binary directly is even harder. So, setting aside the difficulties of training such a model, the output would be unusable.
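To make the analogy concrete, here's a minimal sketch of that pipeline in Python, assuming a hypothetical "llm" CLI that takes a prompt and prints generated C to stdout (the CLI name, prompt, and file names are all illustrative). The generated main.c plays the role of the intermediate artifact; gcc stays in the loop because it's the only trustworthy part of the pipeline:

    import subprocess

    # The prompt is the "source code"; the generated C is an intermediate
    # artifact, like the .s file gcc emits on the way to a binary.
    prompt = "Write a complete C program that prints the first 10 primes."

    # Hypothetical "llm" CLI assumed here: takes a prompt, prints code to stdout.
    generated = subprocess.run(
        ["llm", prompt], capture_output=True, text=True, check=True
    ).stdout

    with open("main.c", "w") as f:
        f.write(generated)

    # gcc is still needed: it turns the artifact into a runnable binary,
    # and its errors are where the model's mistakes actually surface.
    subprocess.run(["gcc", "main.c", "-o", "main"], check=True)

Skipping the gcc step would mean asking the model to emit machine code directly, which nobody could realistically debug.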
Probably too much snark from me. But the gulf between an interpreter and a compiler can be decades of work, often involving the discovery of new mathematical principles along the way.
The idea that you're fine risking everything in the way agentic tools allow [0], and that you want that messing around with raw memory, is... a return to DOS-era crashes, but with HAL along for the ride.