Well, the context is the problem. LLMs will really become useful if they 1.) understand the WHOLE codebase AND all its context, 2.) understand the changes to it over time (local history and git history), and 3.) also use context from Slack - with all of that updating basically in real time.

That will be scary. Until then, it's basically just a better autocomplete for any competent developer.
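
Fetching those three sources is the easy half; the hard half is keeping them fresh and fitting them into a context window. A minimal sketch of just the fetching, assuming local git access and the slack_sdk package (the function name, channel, and limits here are illustrative, not any real product's API):

    import subprocess
    from pathlib import Path
    from slack_sdk import WebClient

    def gather_context(repo: Path, slack_token: str, channel_id: str) -> dict:
        """Collect the codebase, its git history, and recent Slack messages."""
        # whole codebase (toy version: just the Python files)
        code = {str(p): p.read_text() for p in repo.rglob("*.py") if p.is_file()}
        # change history over time
        log = subprocess.run(
            ["git", "-C", str(repo), "log", "--oneline", "-n", "200"],
            capture_output=True, text=True,
        ).stdout
        # recent team discussion
        slack = WebClient(token=slack_token).conversations_history(
            channel=channel_id, limit=100,
        )["messages"]
        return {"code": code, "history": log, "slack": slack}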




What you describe would be needed for a fully autonomous system. But for a copilot sort of situation, the LLM doesn't need to know and understand _everything_. When I implement a feature in a codebase, my mental model doesn't include everything that has ever been done to that codebase, but a somewhat narrow window, just wide enough to solve the issue at hand (unless it's some massive codebase-wide refactor or component integration, but even then it's usually broken down into smaller chunks with clear interfaces and abstractions).
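
That narrow window can be approximated mechanically. A toy sketch where keyword overlap stands in for real retrieval (the function and scoring are illustrative, not how any actual copilot selects context):

    from pathlib import Path

    def narrow_context(repo: Path, task: str, k: int = 5) -> list[Path]:
        """Return the k files most lexically similar to the task description."""
        words = set(task.lower().split())
        def score(p: Path) -> int:
            return len(words & set(p.read_text().lower().split()))
        files = [p for p in repo.rglob("*.py") if p.is_file()]
        return sorted(files, key=score, reverse=True)[:k]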


I use Copilot daily, and because it lacks context it's mostly useless except for generating boilerplate and occasionally converting small things from A to B. Oh, and copying functions from Stack Overflow and naming them right.

That's about it. But I spend maybe 5% of my day on those tasks.


I dislike Copilot's context management, personally, and much prefer populating the context of, say, Claude deliberately and manually (using Zed, see https://zed.dev/blog/zed-ai). This fits my workflow much better.
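
The manual approach is straightforward to sketch: concatenate exactly the files you chose, and nothing else, into the prompt. This assumes the anthropic Python SDK with an API key in the environment; the helper name and model alias are illustrative:

    import anthropic

    def ask_with_files(question: str, paths: list[str]) -> str:
        """Send a question plus a hand-picked set of files as context."""
        context = "\n\n".join(f"# {p}\n{open(p).read()}" for p in paths)
        reply = anthropic.Anthropic().messages.create(
            model="claude-3-5-sonnet-latest",
            max_tokens=1024,
            messages=[{"role": "user",
                       "content": f"{context}\n\n{question}"}],
        )
        return reply.content[0].text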


Imagine you are coding in your IDE and it suggests a feature because someone mentioned it yesterday in the #app-eng channel. That needs deeper context, though: about the order of events, and about how authoritative a given person is.
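
That weighting could be as simple as exponential recency decay times an author-authority factor. A toy sketch; the roles, weights, and half-life are invented:

    import time

    AUTHORITY = {"tech_lead": 3.0, "senior": 2.0, "engineer": 1.0}

    def relevance(msg_ts: float, author_role: str,
                  half_life_days: float = 7.0) -> float:
        """Score a message by recency and author authority.
        msg_ts is a Unix timestamp (Slack ts fields parse to this)."""
        age_days = (time.time() - msg_ts) / 86400
        recency = 0.5 ** (age_days / half_life_days)  # exponential decay
        return recency * AUTHORITY.get(author_role, 1.0)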



