That hypothesis is easily disproven by spending an afternoon on a side project with Copilot.
No matter how interesting your problem is, translating it into code involves a lot of grunt work. That isn’t just boilerplate, but also the large portion of your code that just glues things together.
The time you spend working through those menial parts of your code is time when the context of the interesting part of the problem fades. Once you get the mechanical stuff out of the way, you have to load the interesting stuff back into your brain.
This is where AI coding tools really shine. By letting you get the boring mechanics out of the way quickly, they dramatically shorten the stretches where you can’t think about the actual problem you’re solving.
I'm very curious to see some examples where Copilot autocompleted something truly useful and saved you time - ones that would also disprove my hypothesis that you're doing something boring or using the wrong tools/languages/frameworks. Things a non-ML autocomplete could do don't count.
I can give you an example of an entire (well, I still consider it alpha) library I wrote several months ago, using Copilot: https://github.com/osuushi/triangulate
It’s a Go implementation of a 1991 paper on polygon triangulation. So the deepest thinking about how to solve the problem was obviously already done for me, but there were a number of edge cases I had to invent my own solutions to, and the translation itself involved keeping a lot of context in my head.
I can’t tell you in precise detail what Copilot did and what I wrote by hand; I wasn’t taking notes or recording my screen. But there’s a reason you don’t see many blocks in there where I forgot to comment anything: my entire process was “type what I want to do in English, and see if Copilot will generate the next snippet, or something close.” I didn’t do this out of bloody-minded dedication to the AI cause, but because it kept being an extremely effective way to get the code written quickly.
I can give a few specifics:
- My linear algebra is rusty, and Copilot was extremely helpful here. I would often just type the basic thing I was trying to do in pretty vague linear algebra terms, and it would generate the formula.
- I wrote a lot of tests like this https://github.com/osuushi/triangulate/blob/main/internal/sp.... This is a minor thing, but those tests aren’t copy-pasted. Instead, I wrote the first one, and after that I could mostly just type something like `func TestConvertToMonotones_SquareWithHole`, and it would figure out how to adapt the previous test automatically.
- It generates exactly the error strings I want based on context an enormous percentage of the time.
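To make the first point concrete, here’s the sort of thing a vague linear-algebra comment would get filled in with. This is an illustrative sketch, not code from the library; the function names and the choice of formulas here are my own:

```go
package main

import "fmt"

// Point is a 2D point.
type Point struct{ X, Y float64 }

// cross returns the z-component of the cross product of (b-a) and (c-a).
// It is positive when c lies to the left of the directed line a->b.
func cross(a, b, c Point) float64 {
	return (b.X-a.X)*(c.Y-a.Y) - (b.Y-a.Y)*(c.X-a.X)
}

// signedArea computes the signed area of a polygon with the shoelace
// formula; counterclockwise polygons have positive area.
func signedArea(poly []Point) float64 {
	var sum float64
	for i, p := range poly {
		q := poly[(i+1)%len(poly)]
		sum += p.X*q.Y - q.X*p.Y
	}
	return sum / 2
}

func main() {
	square := []Point{{0, 0}, {1, 0}, {1, 1}, {0, 1}}
	fmt.Println(signedArea(square)) // 1 for the CCW unit square
}
```

When your linear algebra is rusty, having the formula appear from a one-line English comment is exactly the kind of friction Copilot removes; you still have to check the sign conventions yourself.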
I want to stress that I’m just giving a few examples of things that I specifically remember because I talked about them at the time, not characterizing the majority of the experience of using Copilot. The majority of the experience of using Copilot is that you write comments, and then the things you were about to type appear on the screen before you have to type them.
When I find myself writing comments of the style I see here, I usually ask myself whether the code would be better extracted into a function. These comments are primarily stating the obvious.
If I find myself writing a 200-line function with nested or repetitive loops, I expect to hear from colleagues about how I should refactor it.
I feel that the solution to boring, repetitive boilerplate shouldn’t be to automate writing more of it, but to reduce or remove it entirely. Seeing things like this just reinforces my preconception that Copilot thrives in low-quality codebases, producing fittingly low-quality code, or in languages like Java where the language is married to boilerplate.