
I have the same issue, plus unnecessary refactorings (that break functionality). It doesn't matter if I write a whole paragraph in the chat or the prompt explaining that I don't want it to change anything apart from what is required to fulfill my very specific request. It will just go rogue and massacre the entirety of the file.



This has also been my biggest gripe with Gemini 2.5 Pro. While it is fantastic at one-shotting major new features, when I want to make smaller iterative changes it always does big refactors at the same time. I haven't found a way to change that behavior through changes in my prompts.

Claude 3.7 Sonnet is much more restrained and does smaller changes.


This exact problem is something I’m hoping to fix with a tool that parses the source into an AST and then has the LLM write code to modify the AST (which you then run to get your changes) rather than output code directly.

I’ve started in a narrow niche of Python/Flask webapps and am constrained to that stack for now, but if you’re interested I’ve just opened it for signups: https://codeplusequalsai.com

Would love feedback! Especially if you see promising results in not getting huge refactors out of small change requests!

(Edit: I also blogged about how the AST idea works in case you're just that curious: https://codeplusequalsai.com/static/blog/prompting_llms_to_m...)
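A minimal sketch of the shape of that idea, using Python's built-in ast module (the file and function names here are hypothetical, and this is not necessarily how codeplusequalsai implements it):

    import ast

    source = open("app.py").read()         # hypothetical target file
    tree = ast.parse(source)

    # Instead of emitting the whole file, the LLM emits a transform like this:
    class RenameHandler(ast.NodeTransformer):
        def visit_FunctionDef(self, node):
            if == "old_handler":   # hypothetical function name
       = "new_handler"
            return node                    # every other node passes through untouched

    tree = RenameHandler().visit(tree)
    print(ast.unparse(tree))               # Python 3.9+: turn the AST back into source

Because untouched nodes flow through the transformer unchanged, the blast radius of an edit is bounded by construction.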


Interesting idea. But LLMs are trained on a vast amount of "code as text" and a tiny fraction of "code as AST"; wouldn't that significantly hurt the result quality?


Thanks, and yeah, that is a concern; however, I have been getting quite good results from this AST approach, at least for building medium-complexity webapps. On the other hand, this wasn't always true: the only OpenAI models that really work well are the o3 series. Older models do write AST code but fail to do a good job, because of the exact issue you mention, I suspect!


Having the LLM modify the AST seems like a great idea. Constraining an LLM to only generate valid code would be super interesting too. Hope this works out!


Interesting, I started playing with ts-morph and neo4j to parse TypeScript codebases.

simonw has symbex, which could be useful for you for Python.
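If I remember its CLI right, the basic invocation is just the symbol name:

    symbex my_function

which searches the Python files under the current directory and prints just that function's source (my_function being a hypothetical name here) — handy for handing an LLM one symbol at a time instead of a whole file.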


Asking it explicitly once (not necessarily in every new prompt in context) to keep output minimal and to do nothing more than it is told works for me.


Can't you just commit the relevant parts? The git index is made for this sort of thing.
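For example, interactive staging lets you keep only the hunks you actually asked for:

    git add -p       # step through each hunk, staging only the ones you wanted
    git restore .    # then discard the unstaged leftovers (destructive!)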


It's not always trivial to find the relevant 5-line change in a diff of 200 lines...


Really? I haven't tried Gemini 2.5 yet, but my main complaint with Claude 3.7 is this exact behavior - creating 200+ line diffs when I asked it to fix one function.


This is generally controllable with prompting. I usually include something like “be excessively cautious and conservative in refactoring, only implementing the desired changes” to avoid it.


Where/how do you use it? I've only tried this model through GitHub Copilot in VS Code and I haven't experienced much changing of random things.


I've used it via Google's own AI studio and via my own library/program using the API and finally via Aider. All of them lead to the same outcome: large chunks of changes to a lot of unrelated things ("helpful" refactors that I didn't ask for) and tons of unnecessary comments everywhere (like those comments you ask junior devs to stop making). No amount of prompting seems to address either problem.


I have exactly the same issue using it with Aider.



