
I believe LLMs are, in some meaningful if vague sense, able to understand the code they're working with. If they can produce meaningful output from the input, especially on abstract problems, they've grasped the gist of what's being coded, not just the tokens. That's what their inner representations store: a higher-level view of the text they're processing.
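One way to make this claim testable is a probing classifier: take the model's hidden states for code snippets and check whether a simple linear model can read off a higher-level property (say, "does this snippet loop?") that isn't visible from surface tokens alone. Below is a minimal sketch of that idea; the model name, snippets, and the chosen property are all illustrative assumptions, not anything from the comment above.

    # Hypothetical probing sketch: is a high-level property of code
    # (here, "contains a loop") linearly readable from hidden states?
    import torch
    from transformers import AutoModel, AutoTokenizer
    from sklearn.linear_model import LogisticRegression

    model_name = "gpt2"  # stand-in; any causal LM exposing hidden states works
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)

    # Pairs of snippets with the same meaning but different surface form.
    snippets = [
        ("for i in range(10): total += i", 1),  # has a loop
        ("total = sum(range(10))", 0),          # same result, no loop
        ("while n > 1: n = n // 2", 1),
        ("n = n.bit_length() - 1", 0),
    ]

    feats, labels = [], []
    for code, label in snippets:
        ids = tok(code, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        # Mean-pool the last hidden layer into one vector per snippet.
        feats.append(out.hidden_states[-1].mean(dim=1).squeeze(0).numpy())
        labels.append(label)

    # If a linear probe separates the classes, the property is (linearly)
    # decodable from the representation -- weak but real evidence of "gist".
    probe = LogisticRegression(max_iter=1000).fit(feats, labels)
    print(probe.score(feats, labels))

A tiny toy like this proves nothing on its own, but the same recipe scaled to real datasets is how the interpretability literature asks whether representations carry semantics beyond token statistics.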


