Hacker News

One AI workflow I rather like seems to have largely vanished from many modern tools: use a simple, dumb model with syntax knowledge for autocomplete. It fills in what I'm about to type, taking local variables and passing them to the functions I want to call.

It feels like just writing my own code, but at 50% higher WPM. Especially if I can limit it to suggesting a single line; that keeps it from affecting my thought process or approach.

This is how the original GitHub Copilot worked until it switched to chat-based, more agentic behavior. I set this up locally with an old LLaMA on my laptop, and it's plenty useful for Bash and C, and amazing for Python. Ideally I want a model trained only on code and not conversational at all, closer to the raw model trained to next-token predict on code.
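A local setup like this usually works by fill-in-the-middle (FIM) prompting: the editor sends the text before and after the cursor, and the model predicts the middle. As a minimal sketch, assuming a Code Llama-style model whose infill markers are `<PRE>`/`<SUF>`/`<MID>` (other code models use different tokens), plus the single-line truncation mentioned above:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    # Code Llama-style infill prompt; the model is asked to generate
    # the text that belongs between prefix and suffix.
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

def first_line_only(completion: str) -> str:
    # Cap the suggestion at one line so it finishes the current
    # statement instead of steering the whole approach.
    return completion.split("\n", 1)[0]

# Editor state: the cursor sits between prefix and suffix.
prefix = "def area(radius):\n    return "
suffix = "\n\nprint(area(2.0))\n"
prompt = build_fim_prompt(prefix, suffix)

# A hypothetical multi-line model output, trimmed to one line:
print(first_line_only("3.14159 * radius ** 2\nextra junk"))
```

The prompt string would then go to whatever local inference server you run; the exact endpoint and marker tokens depend on the model and runtime, so treat the markers here as an assumption.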

I think this style just doesn't chew enough tokens to make tech CEOs happy. It doesn't benefit from a massive model, and run in the cloud it costs almost as much in networking as in compute.

Most editors and LSPs offer variable, method, keyword, and a bunch of other completions that are 100% predictable and accurate; you don't need an LLM for this.


