If you worry about "staying ahead of generative AI" in its current state, then I think you are not a good coder and you should learn more instead of worrying about that.
LLMs are only good at writing new code without surrounding context. They are pretty useless in legacy codebases and in codebases with a lot of internal solutions. I've used Copilot for two months at work, and maybe 10% of suggestions were useful; of that 10%, maybe 10% did not contain bugs. So roughly 1% of suggestions were both useful and correct.
> then I think you are not a good coder and you should learn more instead of worrying about that
I am not sure it's that simple or that black and white. Everyone is bad when they start, and many stay merely okay for a while. So the fear is rational: fearing replacement by someone or something better is a humble fear. No matter how good or bad you are, there's always someone better.
I think for most people it's smart to adapt to using AI in their workflows to make them better and more efficient, so everyone benefits from learning it, no matter what skill level they're at.
> I think for most people it's smart to adapt to using AI in their workflows to make them better and more efficient
It probably is smart to try out and test everything for a while to see if it is an actual improvement or not.
What I have a serious problem with is the proposal that this now needs to be part of a workflow when it doesn't actually improve anything.
Generative AI in its current form may be helpful in some cases and unhelpful in others. Plenty of examples are mentioned in the other comments.
I agree that the parent statement "then I think you are not a good coder" is a somewhat dangerous overgeneralization.
> What I have a serious problem with is the proposal that this now needs to be part of a workflow when it doesn't actually improve anything.
Yes, forcing it into the workflow might be bad for personal growth and overall culture.
I think using it alongside your workflow is helpful in any form: it saves a lot of time and also reduces cognitive load, since you can forget about commonly used code snippets and boilerplate and focus on the important aspects of the code.
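For example, this is the kind of boilerplate I mean (a made-up illustration; the class and field names are hypothetical):

    from dataclasses import dataclass, asdict

    @dataclass
    class User:
        # Hypothetical example type; the point is the mechanical shape.
        id: int
        name: str
        email: str

        def to_dict(self) -> dict:
            # The kind of serialization glue a tool like Copilot tends
            # to autocomplete correctly from the field definitions alone.
            return asdict(self)

Nothing clever, but it adds up over a day, and reviewing it takes seconds.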
> I think using it alongside your workflow is helpful in any form
No, it is not. It can confuse and mislead you, which wastes a lot of time. I lost multiple hours on different occasions figuring out subtle mistakes that it had made. It's sometimes harder (and slower) to understand someone else's code than to write the code yourself from scratch.
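To make it concrete, here is a made-up Python example of the kind of subtle mistake I mean (not literal GPT output):

    def chunk(items, size):
        # Looks fine at a glance, and works when len(items) is a
        # multiple of size -- but it silently drops the final partial
        # chunk otherwise, because the range stops too early.
        return [items[i:i + size]
                for i in range(0, len(items) - size + 1, size)]

    def chunk_fixed(items, size):
        # Every chunk start, including the one for the partial tail.
        return [items[i:i + size]
                for i in range(0, len(items), size)]

chunk([1, 2, 3, 4], 2) works; chunk([1, 2, 3, 4, 5], 2) silently loses the 5. A quick happy-path test passes, and the bug surfaces hours later on real data.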
Also, if you aren't working alone, be prepared to answer code review questions about code that you haven't written. GPT is not going to take any responsibility for what it outputs. It often begins its answers to review questions with "Apologies for the oversight", followed by a revised version of the previous output.
The people I work with are used to me providing PRs that don't contain stupid mistakes. So, to guarantee that, I usually have to do full-blown quality control on every GPT output that I use. It can still be a time saver, but usually not a significant one. I am still learning how to distinguish the cases in which it is not even a good idea to involve it from those in which it can be somewhat trusted. It seems to be highly dependent on the amount of training data in the particular problem domain and programming language.
I didn't mean that using it alongside your workflow means blindly trusting it.
I think it's generally far quicker to read the code than to write those 15 lines yourself, especially for that type of snippet. It's also less stressful and takes very little mental energy (if you're already familiar with the language and codebase).
> I am still learning how to distinguish the cases in which it is not even a good idea to involve it from those in which it can be somewhat trusted.
Interesting. Could comments marking a code block as generated by an AI tool be helpful in your case? Sure, it generally isn't that nuanced, and a label alone doesn't say much in isolation, but marking the major generated parts, like generated data structures and generated functions, might make them easier to deal with.
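Something like this, say (a convention I'm imagining, not the output of any existing tool; the names are made up):

    # --- BEGIN AI-GENERATED: parser state data structure ---
    from dataclasses import dataclass, field

    @dataclass
    class ParserState:
        position: int = 0
        errors: list[str] = field(default_factory=list)
    # --- END AI-GENERATED ---

Then a reviewer, or future you, knows which blocks deserve the extra scrutiny, and grepping for AI-GENERATED finds them all.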