But it can work well even if you don't know what you are doing (or don't look at the impl).
For example, build a TUI or GUI with Claude Code while only giving it feedback on the UX/QA side. I've done it many times despite 20 years of software experience; some stuff just doesn't justify spending my time building expertise in the impl.
Hallucinations that lead to code that doesn't work just get fixed. Most code I write isn't like "now write an accurate technical essay about hamsters," where hallucinations can sneak through unless I scrutinize it; rather, the code would simply fail to work and trigger the LLM's feedback loop to fix it when it tries to run/lint/compile/typecheck it.
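A minimal sketch of the loop I mean, in Python. `generate(error)` is a hypothetical stand-in for the LLM call, not any real API:

```python
import subprocess

def run_fix_loop(generate, path, max_iters=5):
    """Hypothetical sketch: ask the model for code, try to run it, and feed
    any traceback back into the next prompt until the program exits cleanly.
    generate(error) stands in for an LLM call; it is not a real API."""
    error = None
    for _ in range(max_iters):
        with open(path, "w") as f:
            f.write(generate(error))  # model writes (or rewrites) the file
        result = subprocess.run(
            ["python", path], capture_output=True, text=True
        )
        if result.returncode == 0:
            return True  # a hallucinated API would have crashed before here
        error = result.stderr  # traceback becomes the next round's feedback
    return False
```

Agent tools wire the same signal through lint/typecheck steps too, which is why a hallucinated function name rarely survives to the final output.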
But the idea that you can only build with LLMs if you have a software engineer as copilot isn't true, and it inches further from true every month. So it kinda sounds like a convenient lie we tell ourselves as engineers (and understandably so: it's scary).
> Hallucinations that lead to code that doesn't work just get fixed
How about hallucinations that lead to code that doesn't work outside of the specific conditions that happen to be true in your dev environment? Or, even more subtly, hallucinations that lead to code which works but has critical security vulnerabilities?
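To make that second case concrete, here's a hypothetical example (Python, sqlite3). Both functions pass every happy-path test in dev; the first one "works" while being injectable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice'), ('bob')")

def find_user(username):
    # "Works" in dev: every normal test passes. But string interpolation
    # makes it injectable: find_user("x' OR '1'='1") dumps every row.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(username):
    # Parameterized query: the driver handles escaping, injection fails.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()

print(find_user("x' OR '1'='1"))       # [('alice',), ('bob',)]: full leak
print(find_user_safe("x' OR '1'='1"))  # []: safe
```

Nothing in the run/lint/compile/typecheck loop flags the first version; it only fails against an adversary.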
Replace "hallucination" with "oversight" or "ignorance" and you have the same issue when a human writes the code.
A lot of that will come down to the prompter's own foresight, much like the vigilance of a beginner developer who knows they're working on a part of the system that is particularly sensitive to get right.
That said, only a subset of software needs an authentication solution or has zero tolerance for a bug in some codepath. Those concerns don't apply to the vast majority of the apps/TUIs/GUIs I've built over the last few months.
If you have to restrict the domain to those cases for LLMs to be "disastrous", then I'll grant that for this convo.
> A lot of that will come to the prompter's own foresight
And, on the current trend, how on earth are prompters supposed to develop this foresight, this expertise, this knowledge?
Sure, fine, we have them now, in the form of experienced devs, but these people will eventually be lost via attrition, and lost even faster if companies actually make good on their threat to replace a team of 10 devs with a team of three prompters (former senior devs).
The short-sightedness of this, the ironic lack of foresight, is troubling. You're talking about shutting off the pipeline that will produce these future prompters.
The only way through, I think, will be if (very big if) the LLMs get so much better at coding (not code-gen) that you won't need a skilled prompter.