It depends on what you mean by "this technology". The ability to generate code from an incomplete specification (in the form of input/output examples, program traces, natural language specs, etc.) has been available for quite a while.
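To make "incomplete specification" concrete: in the simplest case the spec is just a handful of input/output pairs, and any program that agrees with all of them counts as a solution. A minimal sketch (my own toy example in Python, not taken from any particular system):

    # The "specification": a few input/output examples. The intended target
    # here happens to be "reverse the string, then uppercase it".
    examples = [("abc", "CBA"), ("hi", "IH"), ("", "")]

    # A candidate program is acceptable iff it agrees with every example.
    def consistent(program, examples):
        return all(program(x) == y for x, y in examples)

    candidate = lambda s: s[::-1].upper()
    print(consistent(candidate, examples))  # True: this candidate satisfies the spec

Note that the spec is incomplete by design: many programs satisfy those three examples, and a synthesizer is free to return any of them.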
For an example of the (more recent) capabilities of program synthesis systems, see this paper on the system ALPS:

https://pages.cs.wisc.edu/~aws/papers/fse18b.pdf

The paper starts with a motivating example of learning a Datalog program that performs static analysis to detect API misuse, then evaluates the system on its ability to learn programs for knowledge discovery, program analysis, and SQL queries.
I think you'll agree that the programs learned automatically in that paper are every bit as complex as anything we've seen from Large Language Model code generators today. On top of that, systems like ALPS only generate correct code, correct with respect to their examples: either they return a program that correctly relates the inputs and outputs in the examples, or they report failure. That is unlike LLMs, which will happily generate garbage code that doesn't compile and never know the difference.
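If you've never seen how that guarantee falls out, here's a minimal enumerative-synthesis sketch in Python (a toy of my own, nothing like the Datalog learner in the paper): it searches compositions of a small set of primitives and either returns one that agrees with every example or reports failure. By construction it cannot hand back a plausible-looking program that is wrong on the examples.

    from itertools import product

    # Toy grammar: unary string functions, composed up to a fixed depth.
    PRIMS = {
        "reverse": lambda s: s[::-1],
        "upper":   lambda s: s.upper(),
        "lower":   lambda s: s.lower(),
        "strip":   lambda s: s.strip(),
    }

    def synthesize(examples, max_depth=3):
        # Return the first composition consistent with ALL examples,
        # or None -- never a program that violates an example.
        for depth in range(1, max_depth + 1):
            for names in product(PRIMS, repeat=depth):
                def fn(s, names=names):
                    for n in names:
                        s = PRIMS[n](s)
                    return s
                if all(fn(x) == y for x, y in examples):
                    return names
        return None  # report failure instead of guessing

    print(synthesize([("abc", "CBA"), ("hi", "IH")]))  # ('reverse', 'upper')
    print(synthesize([("abc", "xyz")]))                # None: nothing in the grammar fits

Real synthesizers search vastly larger program spaces with pruning and clever search strategies, but the contract is the same: consistent with the examples, or no answer at all.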
What sets LLMs apart as code generators is that you can talk to them in natural language and they will respond with ... something. That capability is not new either; there have been systems generating code from natural language specifications for a while. The new LLMs are much better at it, though. The usability has gone through the roof, no doubt about that. My mother can write a REST API now, even if she has no more idea what that is than ChatGPT does. But the ability to produce correct code has gone through the floor at the same time. Just as if you asked my mother to code you a REST API.
But I'm guessing that the fun of talking to an LLM will trump everything else, and program synthesis, which works very well but usually doesn't respond to natural language prompts (though some systems do), will keep flying under the radar of most programmers, who will continue to think that all this is brand new and that we've made a huge leap ahead in capabilities, when we've really taken a big step back.