
You know, the point of AI in coding seems to be so that you can code in English, instead of a formal language. (And for some reason we're pretending like these formalisms are hard and the people that understand them evil gatekeepers, to which I'd say: nope to both propositions. It has never been easier to learn how to code.)

The thing I don't understand is why anyone thinks this is an improvement. I think anyone who's written code knows that writing code is a lot more fun than reading code, but for some reason we're delegating the actually enjoyable task to the AI and turning ourselves into glorified code reviewers. Except we don't actually review the code, it seems; we just press on with bug-ridden monstrosities.

I fail to see why anyone would want this to be the future of code. I don't want my job to be reviewing LLM slop to find hallucinations and security vulnerabilities, only to try to tweak the 20,000 word salad prompt.




> I fail to see why anyone would want this to be the future of code.

IME there's an inverse relationship between how excited the person is about AI coding and their seniority as an engineer.

Juniors (and non-coders) love it because now they can suddenly do things they didn't know how to do before.

Seniors don't love it because it gets in their way, causes coworkers to put up low-quality AI-generated code for peer review, and struggles with any level of complexity or nuance.

My fear is that AI code assistants will inadvertently stop people from progressing from Junior --> Senior since it's easier to put out work without really understanding what you're doing. Although I guess I could have said the same thing about Stack Overflow 10 years ago.


It's a very different thing from SO, in my opinion, because SO involves human discussion. One of the most valuable, educational things about using SO is the comments underneath some answers saying that it's a bad idea, and explaining why. This critical reflection is completely missing from LLM coding. Good suggestion, horrible suggestion – there's no distinction that would be noticed by someone not already experienced in the topic.


I think the key to understanding why people want this is that those people care about results more than the act of coding. The easy example for this is a corporation. If the software does what was said on the product pitch, it doesn’t matter if the developer had fun writing it. All that matters is that it was done in an efficient enough (either by money or time) manner.

A slightly less bleak example is data analysis. When I am analyzing some dataset for work or home, being able to skip over the “rote” parts of the work is invaluable. Examples off the top of my head being: when the data isn’t in quite the right structure, or I want to add a new element to a plot that’s not trivial. It still has to be done with discipline and in a way that you can be confident in the results. I’ll generally lock down each code generation to only doing small subproblems with clearly defined boundaries. That generally helps reduce hallucinations, makes it easier to write tests if applicable and makes it easier to audit the code myself.
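As a sketch of what "small subproblems with clearly defined boundaries" might look like in practice (the function and data here are hypothetical, not from any real project), here's the kind of narrowly scoped reshaping task I'd hand off and then pin down with a test:

```python
from collections import defaultdict

def pivot_long_to_wide(rows, key_field, col_field, val_field):
    """Reshape long-format records into one dict per key.

    rows: list of dicts, e.g. [{"city": "Oslo", "month": "Jan", "temp": -3}, ...]
    Returns {key: {col: val, ...}, ...}
    """
    out = defaultdict(dict)
    for row in rows:
        out[row[key_field]][row[col_field]] = row[val_field]
    return dict(out)

# A small, explicit test makes generated code like this easy to audit:
rows = [
    {"city": "Oslo", "month": "Jan", "temp": -3},
    {"city": "Oslo", "month": "Feb", "temp": -1},
    {"city": "Lima", "month": "Jan", "temp": 23},
]
assert pivot_long_to_wide(rows, "city", "month", "temp") == {
    "Oslo": {"Jan": -3, "Feb": -1},
    "Lima": {"Jan": 23},
}
```

Because the boundary is "one list of records in, one nested dict out", the test above is the whole audit: there's no surrounding state for a hallucinated detail to hide in.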

All of that said, I want to make clear that I agree that your vision of software engineering becoming LLM code-review hell sounds like… well, hell. I'm in no way advocating that the software engineering industry should become that. Just wanted to throw in my two cents.


If you care about the results you have to care about the craft, full stop.


Probably the most unfortunate thing is that the whole AI garbage trend exposes how little people care about the craft, leading to garbage results.

As a comparison point, I've gone through over 12,000 games on Steam. I've seen endless games where large portions are LLM-generated: images, code, models, banner artwork, writing. None of it is worth engaging with, because every single one is a bunch of disjointed pieces shoved together.

Codebases are going to be exactly the same: a bunch of different components and services put together with zero design principle or cohesion in mind.


> (And for some reason we're pretending like these formalisms are hard and the people that understand them evil gatekeepers, to which I'd say: nope to both propositions. It has never been easier to learn how to code.)

I am not a professional programmer, and I tend to bounce between lots of different languages depending on the problem I'm trying to solve.

If I had all of the syntax for a given language memorized, I can imagine how an LLM might not save me that much time. (It would still be helpful for e.g. tracking down bugs, because I can describe a problem and e.g. ask the AI to take a first pass through the codebase and give me an idea of where to look.)

However, I don't have the syntax memorized! Give me some Python code and I can probably read it, but ask me to write some code from scratch and, before LLMs, I would have needed to dive into the language documentation or search Stack Overflow. LLMs, and Claude Code in particular, have probably 10x'd what I am capable of, because I can describe the function I want and have the machine figure out the minutiae of syntax. Afterwards, I can read what it produced and either (A) ask it to change something specific or (B) edit the code by hand.

I also do find writing code to be less enjoyable than reading/editing code, for the reason described above.


No one has a language's syntax memorized unless they're working with the language daily. Instead we store patterns, and there aren't many (check out the formal grammar for any programming language). C-like languages overlap a lot; the differences are mostly in syntax minutiae (which you can refresh in an afternoon with a reference) and the higher abstractions (which you learn once, like OOP or pattern matching).

Generally you spend 80% of the time wrangling abstractions, especially in a mature project. The coding part is often a quick mental break where you're just doing translation. Checking the syntax is a quick action that no one minds.


> Instead we store patterns, and there aren't many

That's kind of what I mean by "syntax". For example, "how do I find a value that matches X in this specific type of data structure?" AI is very good at this and it's a huge time saver for me. But I can imagine how it might be less helpful if I did this full time.
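To make that concrete, a question like "find a value that matches X in this specific type of data structure" might come back as something like this for nested dicts and lists (a hypothetical helper for illustration, not code from the thread):

```python
def find_first(data, predicate):
    """Depth-first search a nested dict/list structure for the first
    leaf value satisfying predicate; returns None if nothing matches."""
    if isinstance(data, dict):
        children = data.values()
    elif isinstance(data, list):
        children = data
    else:
        # Leaf value: test it directly.
        return data if predicate(data) else None
    for child in children:
        found = find_first(child, predicate)
        if found is not None:
            return found
    return None

config = {"servers": [{"host": "db1", "port": 5432},
                      {"host": "web1", "port": 80}]}
assert find_first(config, lambda v: v == 5432) == 5432
assert find_first(config, lambda v: v == 9999) is None
```

This is exactly the pattern-recall task described above: an experienced developer might type the recursion from muscle memory, while for an occasional coder having it generated and then reading it is the faster path.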


That's a good workflow. But in this case I just keep a fairly complete book about the language and a web search open, because I often want more complete information without the need for prompting and then checking whether the answer is correct.


I mean, I'm not a mathematician but I don't expect that I should be able to write a proof.

You talk of memorizing syntax like it's a challenge, but excluding a small number of esoteric languages, no programmer thinks syntax is hard. And if you don't understand the basics, how can you expect to be able to tell whether the solution an LLM presents is decent and not rife with bugs (security and otherwise)?

I guess my issue is that people are confusing a shortcut with actually being able to do the thing. If you can't remember syntax, I don't really want your code anywhere I care about.


I personally find debugging and fixing code to be the most rewarding.

That way you know you're (usually) strictly making an improvement.


I know I'm making an improvement by features I've shipped to customers. Bugs prevent me from shipping more features until they're fixed. I do not enjoy bugs.




