
Software has been automating itself for 75 years. Automation cannibalizing itself forever. And yet there are more programmers than ever. AI will just expand the amount of software or create whole new fields on top.



I guess there is more complexity than ever too?

I feel like the early days of programming might actually have been easier to automate. There were often well-designed specs, cleaner designs and architectural patterns to follow, etc. Not in all cases, of course, but now I'm seeing people hoping to God that ChatGPT can write a client for the world's shittiest XML API because no one else sure as hell wants to do it.

So what happens now? We just generate more shit and then use moar AI (tm) to work with moar shit (tm), and evolve that pattern faster and faster?

Yesterday I was using Copilot and it was suggesting a bunch of obvious auto-completes etc., which is fine, that's why I use it. Then it hit me: most people are probably going to stop bothering with libraries, clean APIs and specs, because soon you will just be brute-forcing your way to a solution using text-to-code, and on we go. Maybe the answer really will be to keep evolving generative coding AIs to keep up with it; let's see how it goes.


People using AI to generate crap faster is definitely a (short-term) risk. And, BTW, not only with code. I read an article stating that in a few years (2-3, can't remember) 80% of content will be generated by AI, which would be a disaster. We're already swimming in low-quality information and this will only make it worse.

OTOH, so much for the idea that people will guide the hand of AI to create better code. As this unfolds, there will be ever more incentives to remove (most) humans from the loop. And if the past can be used to predict the future, what we have seen so far is that when AI gets reasonably competent at something, it gains superhuman capabilities very quickly. I still keep bringing up how people thought that after AlphaGo beat Fan Hui (the European Go champion) it would take a huge leap for it to beat Lee Sedol, because Lee Sedol was so much better than Fan Hui, at least in human terms. It only took DeepMind a few months of training and tinkering.

Speaking of loops: since AIs are trained on the internet (whether general language models or coding specialists), we're creating an unwanted feedback loop here. More AI-generated content will likely make training future AIs harder, at least with information harvested from the internet after ~2022-ish.


There is another way: not all AI training data must be sourced from humans. You can loop an LLM with a compiler and a set of tests to run, and it will happily search for solutions on its own. Or it can be connected to a simulator, a game or any real-life process and left to optimise a goal that can be measured without human work.
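
A minimal sketch of that compile-and-test loop, in Python. ask_llm is just a placeholder for whatever model API you'd call; the "verifier" is nothing more than cc plus a single test case, so treat this as an illustration of the idea rather than a real harness:

    import os
    import subprocess
    import tempfile

    def ask_llm(prompt: str) -> str:
        """Placeholder: call your LLM of choice and return C source code."""
        raise NotImplementedError

    def check(source: str, test_input: str, expected: str) -> tuple[bool, str]:
        """Compile the candidate and run it against one test case."""
        with tempfile.TemporaryDirectory() as tmp:
            src = os.path.join(tmp, "candidate.c")
            exe = os.path.join(tmp, "candidate")
            with open(src, "w") as f:
                f.write(source)
            build = subprocess.run(["cc", src, "-o", exe],
                                   capture_output=True, text=True)
            if build.returncode != 0:
                # Compiler errors become the feedback for the next attempt.
                return False, build.stderr
            run = subprocess.run([exe], input=test_input,
                                 capture_output=True, text=True, timeout=5)
            if run.stdout.strip() != expected.strip():
                return False, f"expected {expected!r}, got {run.stdout!r}"
            return True, ""

    def search(task: str, test_input: str, expected: str,
               max_iters: int = 20) -> str | None:
        """Generate, check, and retry with the failure appended to the prompt."""
        prompt = task
        for _ in range(max_iters):
            candidate = ask_llm(prompt)
            ok, feedback = check(candidate, test_input, expected)
            if ok:
                return candidate  # found a program that passes the test
            prompt = f"{task}\n\nPrevious attempt failed:\n{feedback}\nTry again."
        return None

No human labels anywhere: the compiler and the tests are the only signal, which is the point.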


Sure, most of us software developers prefer to automate as much of our work as possible. Starting with compilers, build systems, reusable software (libraries, frameworks), etc.

But these are fundamentally different from AI. As I mentioned above, Fred Brooks in his essay (The Mythical Man-Month) would argue that all of these just decrease the accidental complexity. The essential complexity still remains: taking a vague description, an idea, something embedded in the heads of the stakeholders, and turning it into actual computer code, finding and removing inconsistencies, with a minimal number of bugs. AI will be able to do that one day, and it seems that day is not far away.

The hard move is going from a vague set of incomplete requirements (which is always the case) to specific, executable code. We have never had a tool that could do that transformation. Until now(ish).



