How is it the wrong tool for the job? In this particular case it's excellent: it can help you find proper abstractions you otherwise wouldn't realize you were missing.
I kind of view this use case as an enhanced code linter.
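To make that concrete, here's a rough sketch of what I mean: a pre-commit-style check that hands the staged diff to a model and surfaces its review. The OpenAI client and model name are only stand-ins for whatever provider you actually use.

    # Sketch of an "LLM as extra linter" pre-commit step. The OpenAI SDK and model
    # name here are illustrative placeholders; swap in your own provider/client.
    import subprocess
    import sys

    from openai import OpenAI

    def llm_lint() -> int:
        # Same input a normal linter hook sees: the staged diff.
        diff = subprocess.run(
            ["git", "diff", "--cached"], capture_output=True, text=True, check=True
        ).stdout
        if not diff.strip():
            return 0
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": "Review this diff like a strict linter. Flag missing "
                           "abstractions, duplicated logic, and naming problems. "
                           "Reply with just OK if nothing stands out.\n\n" + diff,
            }],
        )
        review = (resp.choices[0].message.content or "").strip()
        print(review)
        # Advisory: show the feedback, only fail the hook when something was flagged.
        return 0 if review == "OK" else 1

    if __name__ == "__main__":
        sys.exit(llm_lint())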
Do you expect an imminent collapse of modern society?
That's the only case where LLMs would be "not there anymore." Even if this current hype train dies completely, there will still be businesses providing LLM inference, just far fewer new models. Thinking LLMs would be "not there anymore" is even more delusional than thinking that programming as a job would cease to exist because of LLMs.
> It only looks effective if you remove learning from the equation.
It's effective on things that would take months/years to learn, if someone could reasonably learn it on their own at all. I tried vibe coding a Java program as if I was pair programming with an AI, and I encountered some very "Java" issues that I would not have even had the opportunity to get experience in unless I was lucky enough to work on a Fortune 500 Java codebase.
AI doesn't work in a waterfall environment. You have to be able to rapidly iterate, sometimes in a matter of hours, without bias and/or emotional attachment.
> AI doesn't work in a waterfall environment. You have to be able to rapidly iterate, sometimes in a matter of hours, without bias and/or emotional attachment.
What do you mean? There is no difference between waterfall and agile in what you do during a few hours.
Not at all true. You've just adopted the wrong mental model for partnering with it. Think of yourself as an old-school analyst rather than a programmer.
Throw a big-context-window model like Gemini at it to document the architecture, unless good documentation already exists. Then modify that document to drive development of new or modified code (rough sketch below).
Many big waterfall projects already have a process for this - use the AI instead of marginally capable offshore developers.
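A rough sketch of that documentation step, with the OpenAI-style client purely as a placeholder for whatever big-context model (e.g. Gemini) you would actually point at the repo:

    # Sketch: dump the source tree into one prompt and keep the model's answer as
    # ARCHITECTURE.md, which then becomes the document you edit to drive changes.
    # Client and model name are placeholders; use a genuinely long-context model.
    from pathlib import Path

    from openai import OpenAI

    def collect_sources(root: str, exts=(".java", ".py", ".ts")) -> str:
        # Concatenate source files, tagging each with its path for the model.
        parts = []
        for path in sorted(Path(root).rglob("*")):
            if path.is_file() and path.suffix in exts:
                parts.append(f"// FILE: {path}\n{path.read_text(errors='ignore')}")
        return "\n\n".join(parts)

    def document_architecture(root: str) -> str:
        client = OpenAI()  # stand-in for your actual LLM client
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; big repos need a long-context model
            messages=[{
                "role": "user",
                "content": "Write an architecture overview of this codebase: modules, "
                           "responsibilities, data flow, and external dependencies.\n\n"
                           + collect_sources(root),
            }],
        )
        return resp.choices[0].message.content or ""

    if __name__ == "__main__":
        Path("ARCHITECTURE.md").write_text(document_architecture("."))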
Might not be the case for the senior devs on HN, but for most people in this industry, it's copy/pasting a Jira ticket into an LLM, which generates some code that seems to work plus a ton of useless comments, then pushing it to GitHub without even looking at it once, then moving on to the next ticket.
A form of coding by proxy, where the developer instructs (in English prose) an LLM software development agent (e.g. cursor IDE, aider) to write the code, with the defining attribute that the developer never reviews the code that was written.
I review my vibe code, even if it's just skimming it for linter errors. But yeah, the meme is that people are apparently force-pushing whatever gets spat out by an LLM without checking it.
Vibe coding is instructing an AI to create or modify an application without looking at or understanding the code yourself. You just go by the "vibe" of how the running application behaves.
It only looks effective if you remove learning from the equation.
It's the wrong tool for the job, that's what it is.