
> you can have LLMs create reasonable code changes, with automatic review / iteration etc.

Nobody who takes code health and sustainability seriously wants to hear this. You absolutely do not want to be in a position where something breaks, but your last 50 commits were all written and reviewed by an LLM. Now you have to go back and review them all with human eyes just to get a handle on how things broke, while customers suffer. At this scale, it's an effort multiplier, not an effort reducer.

It's still good for generating little bits of boilerplate, though.




If the last 50 commits were reviewed by an AI and it took that long for an issue to happen, I'd immediately mandate that all PRs be reviewed by an AI.


There's a difference between an issue being introduced and being noticed.


Yeah, but if our current incident rate is 1 in 5 commits and it suddenly drops to 1 in 50, that's a major improvement.



