But there is an unanswered question of how far this technology can go based on its fundamentals. Coding is much like driving: you can't do 80% and let the human do the final 20%, because that final 20% requires reasoning about a well-understood design that was implemented throughout the first 80%.
If your fancy AI coder thingy can't really reason about the end task that the code is solving - and there is little to indicate that it can, or that technology is about to advance to the point where it will - then the 80% will be crap, and there exists no human who can finish the last 20%, not even by putting in 200% of the effort required. We still don't have a working AI solution for driving, a well-understood and very limited problem domain, never mind the infinite domain of all problems that can be explained in natural language and solved with software.
What you end up with is a fancier autocomplete, not an AI coder. Boilerplate and coder output might simply expand to absorb the new, more productive way of generating source code, just as they have for decades whenever a "revolutionary" new tech came along: high-level languages, source control, IDEs and debuggers, component distribution, etc.
You’re already limiting your imagination to “coding.”
These are data transformers that can transform raw data without coding at all. At what point does a model itself replace code?
It's sort of like a CPU, right? You can have hardware that's specialized, or general-purpose hardware that can do anything once instructed. LLMs can be general-purpose data manipulators without first having to be designed (or coded) to perform a task.
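To make the "general-purpose data manipulator" point concrete, here's a minimal sketch of transforming a raw record by instruction rather than by writing parsing code. The complete() function is a hypothetical stand-in for whatever LLM completion API you happen to use, not any specific product's interface:

    # Sketch: an LLM as a general-purpose data transformer.
    # complete(prompt) is a hypothetical placeholder for an LLM completion API.
    def complete(prompt: str) -> str:
        raise NotImplementedError("wire this up to your LLM provider of choice")

    raw_record = "Smith, Jane | j.smith AT example.org | joined 3rd of March '21"

    prompt = (
        "Convert the following record to JSON with keys "
        "last_name, first_name, email, joined (ISO 8601 date):\n" + raw_record
    )

    # No parser was written for this record format; the natural-language
    # instruction *is* the transformation.
    print(complete(prompt))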
> data transformers that can transform raw data without coding at all
How do you know this is 100% reliable, per upthread discussion?
We've already had this problem with Excel in various sciences: while deterministic, it has all sorts of surprising behaviors. Genes had to be renamed to stop Excel from mangling them: https://www.progress.org.uk/human-genes-renamed-as-microsoft...
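For anyone who hasn't run into the Excel issue: its default type inference silently coerces gene symbols like SEPT2, MARCH1 and DEC1 into dates. A rough Python imitation of that failure mode (the real Excel behavior is locale- and version-dependent; this only illustrates the kind of silent coercion involved):

    import datetime

    # Month prefixes that collide with human gene symbols (SEPT2, MARCH1, DEC1, ...)
    MONTHS = {"MARCH": 3, "SEPT": 9, "SEP": 9, "OCT": 10, "DEC": 12}

    def excel_like_coerce(cell: str):
        """Crude imitation of Excel's 'helpful' date inference on text cells."""
        for prefix, month in MONTHS.items():
            if cell.upper().startswith(prefix):
                rest = cell[len(prefix):]
                if rest.isdigit():
                    # "SEPT2" silently becomes 2 September of the current year.
                    return datetime.date(datetime.date.today().year, month, int(rest))
        return cell  # anything else is left untouched

    for gene in ["SEPT2", "MARCH1", "DEC1", "TP53"]:
        print(gene, "->", excel_like_coerce(gene))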
AI promises "easier than Excel, but not deterministic". So more people are going to use it to get less reliable results.
Weird argument. Excel is one of the most popular and profitable programs of all time. If your argument is that LLMs are like Excel, the logical conclusion would be that they would be wildly successful.
There are two contexts in my experience where it's been important to get the numbers exactly right:
1. Cherrypicking sports statistics for newly set records and the like (NB: this is not lucrative)
2. Financial transaction processing
In most other contexts, especially analytics and reporting, nobody cares and nobody is going to check your math, because the consumers are just trying to put a veneer of numeracy on their instincts.
Ok, but then you completely give up the ability for human actors to understand and fine-tune the process. It would necessarily be a stochastic product: we don't know exactly how it works, it seems to output correct results in our testing but we can't guarantee it won't cook your dog in the microwave.