This resonates with me, because personally I don't see a lot of use for it in my (dev) workflows. I was talking with my SO recently on this topic; they're close to finishing a PhD and do scientific research. For their workflows, which involve consuming and producing a lot of written information, a case could be made that it's more useful.
yeah, as a developer I've yet to see any development tasks that an LLM would be useful for, and it's a waste of resources that could be going into abstract reasoning about software, or really into doing anything correctly/accurately...
Conversely, I routinely use LLMs for development tasks now. For my last bit of greenfield work, I had an LLM stub out an entire API. When I needed to marshal a fairly complex JSON object into a well-defined object in my code, with several validations required by security controls, the LLM taught me about a library in that language I wasn't familiar with.
Or, a few days ago, I needed to run the same CLI commands a few dozen times with slightly differing parameters. Unfortunately, the CLI only exists on Windows, and needed to be called on multiple hosts in a Windows environment. I could probably have done this on Linux using bash and/or Python, but on Windows it was way easier to just have the LLM write a PowerShell script for me.
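To give a rough idea of the shape of script I mean (the CLI name, hosts, and parameters here are made-up placeholders, not the real ones):

    # Run the same Windows-only CLI on several hosts with slightly
    # different parameters. 'sometool.exe' stands in for the real tool.
    $computers = @('winhost01', 'winhost02', 'winhost03')
    $paramSets = @(
        @{ Site = 'alpha'; Port = 8080 },
        @{ Site = 'beta';  Port = 8081 },
        @{ Site = 'gamma'; Port = 8082 }
    )

    foreach ($computer in $computers) {
        foreach ($p in $paramSets) {
            Invoke-Command -ComputerName $computer -ScriptBlock {
                param($site, $port)
                & sometool.exe --site $site --port $port
            } -ArgumentList $p.Site, $p.Port
        }
    }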
LLMs aren't the best at dealing with proprietary codebases (yet), but I'm sure that will come. In the meantime, they're really useful for abstracting away mundane work in a way that's much more user-friendly than what your IDE probably offers, and they often help me spot issues with my assumptions as well.
hmm, is the PowerShell example something you don't expect to have to do again, so it's not worth really understanding the details? (And did you feel you needed to verify the output, or was just running it enough? Not trying to judge here! Just curious, because nngroup just published a user study showing that only a tiny percentage of people actually cross-check LLM-assistant output. Not that the output was wrong - they weren't checking that either - just that most of the users in their study didn't feel it was necessary.)
It's something that probably needs to be done again -- I included a few parameters in the script that allow it to be used for a limited set of similar workflows in the future. I read over the script, didn't spend loads of time understanding it, but the logic looked roughly correct. I generally read the code the LLM generates, then test it. I also try to include dry-run modes whenever possible, so I can validate that the mutating behaviour is correct before actually running any mutating commands -- I've found I'm far more likely to do this with LLM-generated scripts than with my own code, though perhaps that's just laziness. :)
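The dry-run bit is just a switch parameter that prints what would happen instead of doing it; something like this sketch (the CLI name and parameters are hypothetical stand-ins):

    # Script with a -DryRun switch: report the mutating action
    # instead of executing it, so the behaviour can be eyeballed first.
    param(
        [Parameter(Mandatory)] [string[]] $Computers,
        [switch] $DryRun
    )

    foreach ($computer in $Computers) {
        if ($DryRun) {
            Write-Host "DRY RUN: would run 'sometool.exe --reset' on $computer"
        } else {
            Invoke-Command -ComputerName $computer -ScriptBlock {
                & sometool.exe --reset
            }
        }
    }

PowerShell also has a built-in flavour of this pattern ([CmdletBinding(SupportsShouldProcess)] plus -WhatIf), but a plain switch like the above is easy to read at a glance.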
There are a ton of development tasks that I do infrequently enough that I forget how to do them and have to google them every time.
For those things, I can just ask ChatGPT to write the first draft, and it saves me about 80% of the time. I always end up having to do a few edits, but it works out.
Also, dropping in an indecipherable page of logs and immediately getting the source of an error, with at least a suggestion of a direction, is really useful.
Nothing wrong with that. I don't, though -- so LLMs have made me more productive with less annoyance.
(My personal struggle is figuring out when to stop trying to use the hammer that is LLMs. I've definitely fallen victim to the sunk cost fallacy here.)
As a developer, I can see a lot of tasks that an LLM would be useful for. In a large organization, new people get onboarded all the time, and they generally have the same exact questions, like 'how do I make a web request out through the proxy?', 'how can I do multithreading?', or 'how do I use library X for our use case Y?'. An LLM can look at the code used throughout the business and offer suggestions like "hey, I see you're trying to make an HTTPS connection to an external site without using a proxy, try this instead". Getting enterprise code up to the enterprise standard, whatever that is, can be very useful.

Personally, I'd love to go in depth with an AI on crazy ideas I have for code, and work through a lot of them before I implement something. For example, I might want to try different ways to accomplish the same task: I could write my version, and instead of rewriting it to try a different approach, ask an LLM to rewrite it that other way, so I can check which is better.
Those all sound like great things to ask of an assistant tool - and things that ChatGPT isn't actually capable of answering. (I have lots of those too, things like "review this code" or, even better and more specific, "given the description in this CVE, do we have any code that does this sort of thing too, that we should examine"? Fortunately for the perceived level of honesty in the field, no one appears to be claiming an LLM can do either of those.)