
This is just fundamentally not the case most of the time. LLMs guess where you're going, but what they produce is so often a "similar-looking" non sequitur relative to the lines above it. Sometimes the guess is good, but at least as often it's not.

The suggestion to "think in interfaces" is fine: if you spell out enough context in comments, the LLM may guess more accurately. But by the time you've spelled out that much context, you've likely already done the mental exercise of the implementation (see the sketch below).
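As a concrete sketch of that point (a hypothetical example in TypeScript; the function and names are my own, not from the article): once the doc comment and signature are this precise, the body is nearly mechanical.

    /**
     * Returns a wrapper that delays invoking `fn` until `waitMs` ms have
     * elapsed since the most recent call. Every new call resets the timer,
     * and only the latest arguments are used.
     */
    function debounce<A extends unknown[]>(
      fn: (...args: A) => void,
      waitMs: number
    ): (...args: A) => void {
      let timer: ReturnType<typeof setTimeout> | undefined;
      return (...args: A) => {
        // each call cancels the pending invocation and restarts the clock
        if (timer !== undefined) clearTimeout(timer);
        timer = setTimeout(() => fn(...args), waitMs);
      };
    }

The doc comment alone pins down the algorithm and the edge cases; a completion engine that "guesses right" here is mostly transcribing a spec you already worked out.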

I'm also baffled by "wrong or suboptimal": I don't think I've ever seen an LLM come up with a better solution.
