With OpenAI (and other comparable technologies) able to semantically understand "write fast and efficient C code for a prime sieve" and generate fairly impressive C code, I'm no longer sure I agree that we will never have enough software developers (watching this made me a little nervous, as well as excited, about the future of our industry):
I'm very curious about how this "AI writing code" thing is going to turn out.
I have not tested AI on code myself, but I toy a little (ok, more than a little) with various GPT instances to write prose. Sometimes it's great, sometimes it's poor, but:
1/ It never gets anywhere: there is never any resolution
2/ Sometimes it just loops: it picks up a cue from itself and produces the same set of words indefinitely.
What I do is generate, survey, and then edit. It's a great tool to get new ideas. But how could this work for code that's supposed to accomplish something?
Code is famously much harder to read than to write; that's why people always prefer rewriting to refactoring. With code-generating AI, all that's left for humans to do is the reading, understanding, and monitoring parts.
It's a difficult job, and if done by incompetent youngsters, I think pretty dangerous too.
What OpenAI does is regurgitate stuff it's seen on the internet.
These things are basically a glorified hashtable (with compression).
Much like what a google query does when it leads you to a chunk of code in stack overflow.
Luckily, there's much more to what a SWE does. It's high time people stopped believing that AI is at the level where it can do the job of a SWE; it's ridiculous.
Or to put it in a way that's perhaps more clear: go ask OpenAI to rewrite the GUI of FreeCAD to be usable, and see what it comes up with.
My initial reaction was: did the computer write that code, or is it cribbing code from Stack Overflow? The latter is a problem for developers who do the same, but not for developers who have to write original code, or whose code has to be verifiable. If it is a problem, that would also say a lot about what the industry has evolved into, and would probably explain why so many people are leaving it (even if AI can't take over entirely, less experienced and less expensive developers could).
Or, for that matter, how often in your career has the problem been one that people use to showcase basic features of their programming language?
Prime sieve is almost definitely in the top 10 of most written and published programs ever.
Interestingly enough, at the end of the video he asks the AI to write some code that makes money, and it responds with "maybe try investing or not spending money". That is much more in line with the kind of question I get asked in my career, and somehow I doubt I would still be employed if I had seriously answered the same way.
This is a super trivial example of a very simple algorithm that doesn't have to interact with any other systems. It's almost certainly copied directly from the training data as well, which is fine for an educational toy example, but generally a violation of copyright.
I've been quite happy with Github Copilot but it is not remotely useful to a non-programmer.
https://youtu.be/k_EF42H2ZC0?t=187