
> It’s an approach similar to how I’ve dealt with junior devs in the past.

Your experience with and approach to juniors is different from my own. I don’t ask juniors to write a single class file; I give them a design document, an API spec, and documentation for standard practices, then I work as closely with them as needed to get the results I need and the experience they need. This approach works well for me and the vast majority of juniors with whom I’ve worked, because we can pretty quickly identify gaps in their knowledge and then provide experience and education that benefits both parties. The same approach has failed miserably for me when pairing with an LLM on anything other than trivial code-generation tasks. The models don’t learn (sure, some providers offer “memory,” but the limits of those features are pretty obvious once you try to use ‘em in practice for anything other than “don’t forget that I like tacos, the color blue, and sci-fi”).

> For sanity’s sake, I keep these AI generated modules in single files just so it’s an easy copy and paste into ChatGPT.

That’s not acceptable for production-quality code—at least not in my environment.




I ask juniors to do just as you do: there's a contract specified in the API spec and relevant documents. In the same way, I build up an "AI" package one module at a time by specifying a contract for each module. The principle remains the same, just at a smaller scale, because, as you say, LLMs don't learn.
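
For concreteness, a per-module "contract" can be as small as a typed interface the generated code has to satisfy. The sketch below is purely illustrative (a hypothetical RateLimiter module, not anything from my actual setup):

    # Illustrative only: a hypothetical contract for a RateLimiter module.
    from typing import Protocol, runtime_checkable

    @runtime_checkable
    class RateLimiter(Protocol):
        """The contract handed to the LLM: whatever it generates must
        expose a class with exactly this interface."""

        def allow(self, key: str) -> bool:
            """Return True if the request identified by `key` is within limits."""
            ...

        def reset(self, key: str) -> None:
            """Clear any stored state for `key`."""
            ...

    def satisfies_contract(candidate: object) -> bool:
        # isinstance with a runtime_checkable Protocol only verifies that the
        # method names exist; signatures and behavior are still covered by
        # the module's own tests.
        return isinstance(candidate, RateLimiter)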

I know the organizational style doesn't fit a typical "production" setup, but the reality is that the code produced is very good. I only organize it this way to guarantee I can keep iterating on a module without too much pain.

Also, who cares if I have way more files if I'm still building features for my customers?



