It seems clear to me that AIs are going to write more source code going forward. At the moment I review a lot of it; it seems necessary. But it occurs to me the same may have been true of early compilers. Did early engineers review and change the resulting assembly? That certainly isn't how most of us operate today, so I'm wondering if the same is bound to happen at this new abstraction level. Will we stop reviewing the output and just write... English? Why or why not?
Update based on comments: Perhaps by using 'LLM' I was too specific to today's non-deterministic LLM systems. I suppose I meant AI systems in general.
You cannot file a bug report against an LLM for producing unexpected output, because there is no expected output; the core feature of an LLM is that neither you nor the LLM's developer knows what it will output for a wide range of inputs. I think there are many applications for which an LLM's core value proposition of "no one knows a priori what this tool will emit" is disqualifying. The sketch below makes the contrast concrete.
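To illustrate the point, here is a toy sketch; `compile_source` and `toy_llm` are invented stand-ins for this example, not any real compiler or LLM API. It shows why repeated runs are reproducible for one and not the other:

```python
import hashlib
import random

def compile_source(src: str) -> str:
    # A compiler is (effectively) a pure function: identical input,
    # identical output. An unexpected result is a reproducible bug
    # you can file against the tool.
    return hashlib.sha256(src.encode()).hexdigest()

def toy_llm(prompt: str, temperature: float = 0.8) -> str:
    # Toy stand-in for an LLM decoder. At temperature 0 we take the
    # "most likely" completion; above 0 the next token is sampled,
    # so two calls with the same prompt can legitimately differ --
    # by design, not by defect.
    completions = ["return x + y", "return y + x", "return sum([x, y])"]
    if temperature == 0:
        return completions[0]
    return random.choice(completions)

src = "def add(x, y):"
assert compile_source(src) == compile_source(src)  # always holds
print(toy_llm(src) == toy_llm(src))                # may print False
```

Even at temperature 0, real systems only approximate determinism (batching and floating-point effects can still vary results), so "expected output" remains ill-defined in a way it never is for a compiler.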