Boy, it sure is useful that you can trust code you don't understand to have clear variable names and comments for the LLM to key off of.

Clear variable names and comments aren't a requirement at all.

It sounds to me like you have a philosophical problem with LLMs, which is something I don't think we can debate in good faith. I can just share my experience, which is that they are excellent tools for this kind of thing. Obvious caveats apply - only a fool would rely entirely on an LLM without applying any thought of their own.

I don’t have a philosophical problem with LLMs. I have a problem with treating LLMs as something other than what they are: predictive text generators. There’s no understanding beneath them informing the generation, just compression techniques that emerge during training. Thus I wouldn’t trust them for anything except churning out plausibly structured text.
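
Concretely, this is all "predictive text generation" means mechanically: an autoregressive loop that repeatedly picks a likely next token. A minimal sketch with Hugging Face transformers - gpt2 and the prompt are arbitrary stand-ins for illustration, and any causal LM works the same way:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # "gpt2" is an arbitrary small causal LM chosen for illustration.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The code review found", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(20):
            logits = model(ids).logits        # scores over the whole vocabulary
            next_id = logits[0, -1].argmax()  # greedy: take the most probable token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))

Nothing in that loop models understanding; it only ranks continuations of the text so far.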