
I don't know why I can't seem to get on the LLM hype train everyone is so ready to jump on. Either I lack vision, or I see it for what it is: a next-gen search engine. And don't get me wrong, that's amazing, but not as world-shattering as people want to believe.



It seems like it could be transformative organizationally at some point, but for individuals I agree, I don't see what people are getting. In terms of sheer information discovery and generation, I can already pretty much do and achieve the things I want. The things that might actually improve my life are better relationships with my family, finally having a child and ensuring it's healthy and happy, my wife somehow having her alcoholism cured, and possibly overcoming certain physical limitations related to aging and injuries. I don't feel like any meaningful life outcomes are currently being impeded by an inability to generate code fast enough, by an inability to find information I need on the Internet, or by the fact that my queries and commands to computing systems largely have to follow a pre-defined structure.

Legitimate advances in cheap robotics, on the other hand, could maybe make a dent. It feels to me like the things that might make a difference in my life are help with physical labor, not intellectual labor: a nanny, a maid, a driver that I don't need to pay a salary to.


It's world-shattering for people who have never had a good way to manage their personal knowledge base. Personally, I just let my mind organize my thoughts. If something's important I'll remember it; if it's not, I won't. Obviously I write stuff down where needed, but that's more to help commit it to memory, not because I'm ever going to read those notes again.


This resonates with me because personally I don't see a lot of use for it in my (dev) workflows. I was talking with my SO recently on this topic; they're close to finishing a PhD and do scientific research. For their workflows, which involve consuming and producing a lot of written material, a case could be made that it's more useful.


yeah, as a developer I've yet to see any development tasks that an LLM would be useful for, and it's wasting resources that could be going into abstract reasoning about software, or really doing anything correctly/accurately...


Conversely, I routinely use LLMs for development tasks now. For my last bit of greenfield work, I had an LLM stub out an entire API. When I needed to marshal a fairly complex JSON object into a well-defined object in my code, including several validations required by some security controls, the LLM taught me about a library in that language I wasn't familiar with.
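
To give a rough idea of the shape of that marshalling work, here's an illustrative sketch in Python with hand-rolled validation and made-up field names (the actual language and library were different):

    import json
    from dataclasses import dataclass

    # Hypothetical payload shape; field names and checks are made up for illustration.
    @dataclass
    class UploadRequest:
        user_id: int
        filename: str
        checksum: str

        def __post_init__(self):
            # The kind of validations a security control might require.
            if self.user_id <= 0:
                raise ValueError("user_id must be positive")
            if "/" in self.filename or "\\" in self.filename:
                raise ValueError("filename must not contain path separators")
            if len(self.checksum) != 64:
                raise ValueError("checksum must be a 64-char hex digest")

    def parse_request(raw: str) -> UploadRequest:
        # json.loads gives an untyped dict; the dataclass turns it into a
        # well-defined object and runs the checks above.
        data = json.loads(raw)
        return UploadRequest(
            user_id=data["user_id"],
            filename=data["filename"],
            checksum=data["checksum"],
        )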

Or, a few days ago, I needed to run the same CLI commands a few dozen times with slightly differing parameters. Unfortunately, the CLI only exists on Windows, and it needed to be called on multiple hosts in a Windows environment. I could probably have done this on Linux using bash and/or Python, but on Windows it was way easier to just have the LLM write a PowerShell script for me.
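
The shape of it was roughly this - sketched here in Python with a made-up CLI and host list, since the real thing was a PowerShell script against an internal tool:

    import subprocess

    # Hypothetical hosts, settings, and CLI name, purely to show the sweep pattern.
    HOSTS = ["app01", "app02", "app03"]
    SETTINGS = ["cache", "queue", "metrics"]

    for host in HOSTS:
        for setting in SETTINGS:
            cmd = ["sometool", "--host", host, "--reset", setting]
            print("running:", " ".join(cmd))
            subprocess.run(cmd, check=True)  # stop on the first failure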

LLMs aren't the best at dealing with proprietary codebases (yet), but I'm sure that will come. In the meantime, they're really useful for abstracting away mundane work in a way that's much more user-friendly than your IDE probably offers, and they often help me spot issues with my assumptions, as well.


hmm, is the PowerShell example something you don't expect to have to do again, so it's not worth really understanding the details? (And did you feel you needed to verify the output, or was just running it enough? Not trying to judge here! Just curious, because nngroup just published a user study showing that only a tiny percentage of people actually cross-check LLM-assistant output - not that the output was wrong, nobody was checking that either, just that most of the users in the study didn't feel it was necessary.)


It's something which probably needs to be done again -- I included a few parameters in the script that allow it to be used for a limited set of similar workflows in the future. I read over the script and didn't spend loads of time understanding it, but the logic looked roughly correct. I generally read the code the LLM generates, then test it. I also try to include dry-run modes whenever possible, so I can validate that the mutating behaviour is correct before actually running any mutating commands -- I'm far more likely to do this with LLM-generated scripts than with my own code, I've found, though perhaps that's just laziness. :)
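
The dry-run pattern is nothing fancy - roughly this, with a made-up file-moving task standing in for whatever the real mutating commands are:

    import argparse
    import shutil

    # Illustrative only: print what would change under --dry-run, do it otherwise.
    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument("src")
        parser.add_argument("dst")
        parser.add_argument("--dry-run", action="store_true",
                            help="print the mutating action instead of performing it")
        args = parser.parse_args()

        if args.dry_run:
            print(f"would move {args.src} -> {args.dst}")
        else:
            shutil.move(args.src, args.dst)

    if __name__ == "__main__":
        main()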


There are a ton of development tasks I do infrequently enough that I forget how to do them and have to google them every time.

For those things, I can just ask ChatGPT to write the first draft, and it saves me about 80% of the time. I always end up having to do a few edits, but it works out.

Also, dropping in an indecipherable page of logs and immediately getting back the source of an error, with at least a suggested direction, is really useful.


But what if you enjoy doing those things by hand?


Nothing wrong with that. I don't, though -- so LLMs have made me more productive with less annoyance.

(My personal struggle is figuring out when to stop trying to use the hammer that is LLMs. I've definitely fallen victim to the sunk cost fallacy here.)


as a developer, I can see a lot of tasks that an LLM would be useful for. In a large organization, new people get onboarded all the time, and they generally have the same exact questions, like 'how do I make a web request out through the proxy?', 'how can I do multithreading?', or 'how do I use library x for our use case y?'. An LLM can look at the code used throughout the business and offer suggestions like "hey, I see you're trying to make an HTTPS connection to an external site without using a proxy, try this instead". Getting enterprise code up to the enterprise standard, whatever that is, can be very useful.

Personally, I'd love to go in depth with an AI on crazy ideas I have for code, and work through a lot of ideas before I implement something. For example, I might want to try different ways to accomplish the same task. I could write my version of it, and instead of rewriting it to try a different approach, I might ask an LLM to rewrite it that other way, so I can check which way is better.
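
To make the onboarding example concrete, the proxy question usually boils down to something like this (the proxy address and endpoint here are made up):

    import requests

    # Hypothetical corporate proxy; the point is that outbound requests go
    # through it rather than straight to the internet.
    PROXIES = {
        "http": "http://proxy.internal.example:3128",
        "https": "http://proxy.internal.example:3128",
    }

    response = requests.get("https://api.example.com/data",
                            proxies=PROXIES, timeout=30)
    response.raise_for_status()
    print(response.json())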


Those all sound like great things to ask of an assistant tool - things ChatGPT isn't actually capable of answering. (I have lots of those too - things like "review this code", or, even better and more specific, "given the description in this CVE, do we have any code that does this sort of thing too, that we should examine?" Fortunately for the perceived level of honesty in the field, no one appears to be claiming an LLM can do either of them.)


So, basically what the OP said: a next-gen search engine - trained on your code (rarely) or on SO (usually).


Given some of the applications I've seen it used for (writing and responding to emails; writing code for you, and not just boilerplate), I'm more concerned about LLMs dumbing us down over time.


Same - we've had chatbots and voice search for years. Even recently there was the case of the lawyer where OpenAI's model made up complete nonsense, and that doesn't appear to be unique to that one instance. We're still required to validate the information.

I have used it to help write some documents and get ideas, but I feel like there should be more to it than helping write some text.


It is to a search engine what a search engine is to physical libraries.



