
This is not just for LLM code. This is for any code that is written by anyone except yourself. A new engineer at Google, for example, cannot hit the ground running and make significant changes to the Google algorithm without months of "comprehension debt" to pay off.

However, code that is well-designed by humans tends to be easier to understand than LLM spaghetti.



>However, code that is well-designed by humans tends to be easier to understand than LLM spaghetti.

Additionally, you may have institutional knowledge accessible. I can ask a human and they can explain what they did. I can ask an LLM too, and it will give me a plausible-sounding explanation of what it did.


I can't speak for others, but if you ask me about code I wrote >6 months ago, you'll also be stuck with a plausible-sounding explanation. I'll have a better answer than the LLM, but it will be because I am better at generating plausible-sounding explanations for my behavior, not because I can remember my thought processes for months.


This is where stuff like git history often comes in handy. I cannot always reliably explain why some code is the way it is when looking at a single diff of my own from years ago, but give me the history of that file and the issue tracker where I can look up references from commits and see the comments etc., and I can reconstruct it with a very high degree of certainty.
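
For instance, a rough sketch of that kind of reconstruction (the PROJ-123 style ticket pattern and the file path are just placeholders for whatever your tracker and repo actually use):

    import re
    import subprocess

    def issue_refs_for(path):
        # Full history of the file, following renames; one "sha subject" line each.
        log = subprocess.run(
            ["git", "log", "--follow", "--format=%h %s", "--", path],
            capture_output=True, text=True, check=True,
        ).stdout
        refs = []
        for line in log.splitlines():
            sha, _, subject = line.partition(" ")
            # Assumed ticket-ID pattern, e.g. PROJ-123; adjust for your tracker.
            for ticket in re.findall(r"\b[A-Z]+-\d+\b", subject):
                refs.append((sha, ticket))
        return refs

    # issue_refs_for("src/billing/invoice.py")
    # -> [("a1b2c3d", "BILL-482"), ("9f8e7d6", "BILL-317"), ...]

From there it's a short hop to the tickets and review comments that explain the "why".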


Your sphere of plausibility is smaller than that of an LLM though, at least. You'll have some context and experience to go on.


You also might say “I don’t remember”, which ranks below remembering, but above making something up.


There might also be a high-level design page about the feature, or Jira tickets you can find through git commit messages, or an architectural decision record that this new engineer could look over even if you forgot. The LLM doesn't have that.


> The LLM doesn't have that

The weights won't have that by default, true, that's not how they were built.

But if you're a developer and can program things, there is nothing stopping you from letting LLMs have access to those details, if you feel like that's missing.

I guess that's why they call LLMs "programmable weights": you can definitely add a bunch of extra information to the context so they can use it when needed.
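
For example, a minimal sketch of that idea (docs/adr/ and call_llm() are placeholders for wherever your design docs live and whatever client you actually use):

    import pathlib
    import subprocess

    def build_context(path):
        # Pull in every ADR plus the file's recent commit history as plain text.
        adrs = "\n\n".join(
            p.read_text() for p in sorted(pathlib.Path("docs/adr").glob("*.md"))
        )
        history = subprocess.run(
            ["git", "log", "-n", "20", "--format=%h %ad %s", "--", path],
            capture_output=True, text=True, check=True,
        ).stdout
        return (
            "Architecture decision records:\n" + adrs
            + "\n\nRecent history of " + path + ":\n" + history
        )

    prompt = build_context("src/billing/invoice.py") + "\nWhy is the retry logic structured this way?"
    # answer = call_llm(prompt)  # call_llm is a stand-in for whatever client you use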


>But for asking a clarifying question during a training class?

LLMs can barely do 2+2, and humans don't even understand the weights when they see them. LLMs can have all the access they want to their own weights, and they still won't be able to explain their thinking.



