At Amazon, some coders say their jobs have begun to resemble warehouse work (nytimes.com)
580 points by milkshakes 17 days ago | 867 comments




> [Harper Reed] cautioned against being overly precious about the value of deeply understanding one’s code, which is no longer necessary to ensure that it works.

That just strikes me as an odd thing to say. I’m convinced that this is the dividing line between today’s software engineers and tomorrow’s AI engineers (in whatever form that takes - prompt, vibe, etc.) Reed’s statement feels very much like a justification of “if it compiles, ship it!”

> “It would be crazy if in an auto factory people were measuring to make sure every angle is correct,” he said, since machines now do the work. “It’s not as important as when it was a group of ten people pounding out the metal.”

Except that the machines doing that work aren’t regularly hallucinating angles, spurious welding joints, etc.


Also, you know who did measure every angle to make sure it was correct? The engineers who put together the initial design. They sure as hell took their time getting every detail of the design right before it ever made it to the assembly line.

Who's filling that role in this brave new world?


They took their time precisely specifying the robot movements so they're always repeatable. Almost like...some sort of carefully crafted code.


You misconstrue the analogy. The robot isn’t equivalent to the code in this analogy. It’s the thing that generates the code.

The robot operates deterministically, it has a fixed input and a fixed output. This is what makes it reliable.

Your “AI coder” is nothing like that. It’s non-deterministic on its best day, and it gets everything thrown at it, so it’s even more of a coin toss. This seriously undermines any expectation of reliability.

The guy’s comparison shows a lack of understanding of both systems.


> The robot isn’t equivalent to the code in this analogy. It’s the thing that generates the code.

I think this inversion is what a lot of people are missing, or just don't understand (because they don't understand what code is or how it works).


I totally understand that inversion but I think it's a bad analogy.

Industrial automation works by taking a rigorously specified design developed by engineers and combining it with rigorous quality control processes to ensure the inputs and outputs remain within tolerances. You first have to have a rigorous spec, then you can design a process for manufacturing a lot of widgets while checking 1 out of every 100 of them for their tolerances.

You can only get away with not measuring a given angle on widget #13525 because you're producing many copies of exactly the same thing and you measured that angle on widget #13500 and widget #13400 and so on and the variance in your sampled widgets is within the tolerances specified by the engineer who designed the widget.
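
To make that concrete, here is a minimal sketch of the sampling logic (the nominal angle, tolerance, and sampling rate are made up for illustration, not anyone's real QC spec):

    # Acceptance sampling in miniature: measure only 1 widget in 100 and trust
    # the rest, because the process is repeatable and the tolerance is known.
    NOMINAL_ANGLE = 90.0   # degrees, fixed by the design engineer
    TOLERANCE = 0.5        # allowed deviation from nominal
    SAMPLE_EVERY = 100     # widgets #13400, #13500, ... get measured; #13525 does not

    def line_within_spec(measured_angles: list[float]) -> bool:
        """True if every sampled widget's angle stays within tolerance."""
        sampled = measured_angles[::SAMPLE_EVERY]
        return all(abs(angle - NOMINAL_ANGLE) <= TOLERANCE for angle in sampled)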

There's no equivalent to the design stage or to the QC stage in the vibe-coding process advocated for by the person quoted above.


Yea I think we're saying the same thing


That was exactly their point.


> The robot isn’t equivalent to the code in this analogy

I never said it is. The code is the code that controls the robot and makes it behave deterministically.


Except the code it creates is deterministic.


I don't know what you mean with "the code it creates is deterministic" but the process an LLM uses to generate code based on an input is definitely not entirely deterministic.

To put it simply, the chances that an LLM will output the same result every time given the same input are low. The LLM does not operate deterministically, unlike the manufacturing robot, which will output the same door panel every single time. Or as ChatGPT put it:

> The likelihood of an LLM like ChatGPT generating the exact same code for the same prompt multiple times is generally low.


For any given seed value, the output of an LLM will be identical - it is deterministic. You can try this at home with Llama.cpp by specifying a seed value when you load an LLM, and then seeing that for a given input the output will always be the same. Of course there may be some exceptions (cosmic ray bit flips). Also, if you are only using online models, you can't set the seed value, plus there are multiple models, so multiple seeds. In summary, LLMs are deterministic.
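
A minimal sketch of that experiment, assuming the llama-cpp-python bindings and a local GGUF model at ./model.gguf (both stand-ins for whatever you actually have), with everything else held fixed:

    from llama_cpp import Llama

    def complete(seed: int) -> str:
        # Load the model with a fixed seed; sampling settings left at defaults.
        llm = Llama(model_path="./model.gguf", seed=seed, verbose=False)
        return llm("Say hi.", max_tokens=16)["choices"][0]["text"]

    # Same seed + same input + same local setup -> identical output tokens.
    assert complete(80085) == complete(80085)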


> the process an LLM uses to generate code based on an input is definitely not entirely deterministic

Technically correct is the least useful kind of correct when it's wrong in practice. And in practice the process AI coding tools use to generate code is not deterministic, which is what matters. To make matters worse in the comparison with a manufacturing robot, even the input is never the same. While a robot gets the exact command for a specific motion and the exact same piece of sheet metal, in the same position, a coding AI is asked to work with varied inputs and on varied pieces of code.

Even stamping metal could be called "non-deterministic" since there are guaranteed variations, just within determined tolerances. Does anyone define tolerances for generated code?

That's why the comparison shows a lack of understanding of both systems.


I don't really understand your point. An LLM is loaded with a seed value, which is a number. The number may be chosen through some pseudo- or random process, or specified manually. For any given seed value, say 80085, the LLM will always and exactly generate the same tokens. It is not like stamped sheet metal, because it is digital information not matter. Say you load up R1, and give it a seed value of 80085, then say "hi" to the model. The model will output the exact same response, to the bit, same letters, same words, same punctuation, same order. Deterministic. There is no way you can say that an LLM is non-deterministic, because that would be WRONG.


WRONG lol.

First you're assuming a brand new conversation: no context. Second you're assuming a local-first LLM because a remote one could change behavior at any time. Third, the way the input is expressed is inexact, so minor differences in input can have an effect. Fourth, if the data to be operated on has changed you will be using new parts of the model that were never previously used.

But I understand how nuance is not as exciting as using the word WRONG in all caps.


Arguing with "people" on the internet... Nuance is definitely a word of the year, and if you look at many models you can actually see its high probability.

Addressing your comment, there was no assumption or indication on my part that determinism only applies to a new "conversation". Any interactions with any LLM are deterministic, same conversation, for any seed value. Yes, I'm talking about local systems, because how are you going to know what is going on on a remote system? On a local system, a local LLM, if the input is expressed in the same way, the output will be generated in the same way, for all of the token context and so on. That means, for a seed value, after "hi", the model may say "hello", and then the human's response may be "how ya doin'", and then the model would say "so so , how ya doin?", and every single time, if the human or agent inputs the same tokens, the model will output the same tokens, for a given seed value.

This is not really up for question, or in doubt, or really anything to disagree about. Am I not being clear? You can ask your local LLM or remote LLM and they will certainly confirm that the process by which a language model generates is deterministic, by definition. Same input means same output; again I must mention that the exception is hardware bit flips, such as those caused by cosmic rays, and that's just to emphasize how very deterministic LLMs are.

Of course, as you may know, online providers stage and mix LLMs, so for sure you are not going to be able to know that you are wrong by playing with chatgpt, grok/q, gemini, or whatever other online LLMs you are familiar with. If you have a system capable of offline or non-remote inference, you can see for yourself that you are wrong when you say that LLMs are non-deterministic.


I feel this is technically correct but intentionally cheating. No one - including the model creators - expects that to be the interface; it undermines the entire value proposition of using an LLM in the first place if I need to engineer the inputs to ensure reproducibility. I'd love to hear some real world scenarios that do this where it wouldn't be simpler to NOT use AI.


When should a model's output be deterministic? When should a model's output be non-deterministic?

When many humans interact with the same model, then maybe the model should try different seed values, and make measurements. When model interaction is limited to a single human, then maybe the model should try different seed values, and make measurements.


It's simple. If you run the code generated by the LLM, you will get a deterministic result from said code.


LLMs almost always operate at least partly stochastically.


Of course but the code they generate only operates in one way.


Who determines if that code operates in the intended way?


This also seems, to me, like composer/npm issues.

An entire generation of devs grew up using unaudited, unverified, unknown-license code, which at a moment's notice can be sold to a threat actor.

And I've seen devs try to add packages to the project without even considering the source. Using forks of forks of forks, without considering the root project. Or examining if it's just a private fork, or what is most active and updated.

If you don't care about that code, why care about AI code? Or even your own?


After putting off learning JS for a decade, I finally bit the bullet since I can talk to an LLM about it while going through the slog of getting a mental model up and running.

After a month, I can say that the inmates run that whole ecosystem, from the language spec, to the interpreter, to packaging. And worse, the tools for everyone else have to cater to them.

I can see why someone who has never had a stable foundation to build a project on would view vibe coding as a good idea. When you're working in an ecosystem where any project can break at any time because some dependency pushed a breaking minor version bundled with a security fix for a catastrophic exploit, rolling the LLM gacha to see if it can get it working isn't the worst idea.


since you mention JS specifically, I think it's important to separate that from the framework ecosystem. I'd suspect that most LLMs don't, which is part of the problem. I had a similar experience with Python lately, where the LLM-generated code (once I could get it to run) was what I would generously evaluate as "Excel VBA Macro quality". It does the task - for now - but I didn't learn much about what production-grade Python would look like.


Interview with a Senior JS Developer (satire)

https://www.youtube.com/watch?v=Uo3cL4nrGOk

(you've probably already seen it--everyone else has. But if not, you're in for a treat)


Cory Doctorow described this cart-before-the-horse arrangement as Reverse Centaurs[0][1]

[0] https://pluralistic.net/2022/04/17/revenge-of-the-chickenize...

[1] https://pluralistic.net/2024/08/02/despotism-on-demand/


This is an underrated comment. Whose job is it to do the thinking? I suppose it's still the software engineer, which means the job comes down to "code prompt engineer" and "test prompt engineer".


Wild times where a task that used to be described as "good at using google" now gets the title of "Engineer". It was bonkers enough when software devs co-opted the title.


Just wait till you hear what some traditional capital “E” engineers (meaning licensed) think about programmers usurping the title “engineer”.

It’s been a joke for decades and decades that “engineer” is used to church up any job, including “domestic engineering” (housekeeping/homemaking).


Sanitation engineer and custodial engineer are older than I am, as well.

I mean, building applications that are maintainable, will fail gracefully, and keep costs low has all the same needs as any classic engineering discipline. You could spend just as much time designing a well thought out CLI as it could take to design a bridge or a sewer system.

Whether people do, or not, is a different question.


Not true anymore, lots of places want to reduce engineering hours and let production eat the cost of defects.


it would be the SA guy, but looks like he will also be replaced by a bunch of "vibe coders"


> Who's filling that role in this brave new world?

Us?

(Yeah, we’re fucked)


In a world of vibe coders, those who can still debug on their own, will have quite a valuable skill.


At least for a few more generations of models.

I just finished creating a multiplayer online party game using only Claude Code. I didn't edit a single line. However, there is no way someone who doesn't know how to code could get where I am with it.

You have to have an intuition on the sources of a problem. You need to be able to at least glance at the code and understand when and where the AI is flailing, so you know to backtrack or reframe.

Without that you are as likely to totally mess up your app. Which also means you need to understand source control and when to save and how to test methodically.


I was thinking of that, but asking the right questions and learning the problem domain just a little bit ("getting the gist of things") will help a complete newbie generate code for complex software.

For example in your case there is the concept of message routing where a message that gets sent to the room is copied to all the participants.

You have timers, animation sheets, events, triggers, etc. A question that extracts such architectural decisions and relevant pieces of code will help the user understand what they are actually doing and also help debug the problems that arise.

It will of course take them longer, but it is possible to get there.


So I agree, but we aren't at that level of capability yet. Because at some point currently it inevitably hits a wall and you need to dig deeper to push it out of the rut.


no code has been the hot new thing for the past 40 years.


Surely there is a scale limit to how big the application can be using this approach.


Why do you say that? I would argue that as long as your tests and interfaces are clearly defined, there's no reason it couldn't scale indefinitely.


Practically there is with Claude Code at least.

Hypothetically, if you codified the architecture as a form of durable meta tests, you might be able to significantly raise the ceiling.

Decomposing to interfaces seems to actually increase architectural entropy instead of decrease it when Claude Code is acting on a code base over a certain size/complexity.


Curious, do you supervise the code itself or at least understand what the code is trying to do?

By "I didn't edit a single line", do you still prompt the agent to fix any issues you found? If so, is that consided an edit?


So yes and no. I often just let it work by itself. Towards the very end when I had more of a deadline I would watch and interrupt it when it was putting implementations in places that broke its architecture.

I think only once did I ever give it an instruction that was related to a handful of lines (There certainly were plenty of opportunities, don't get me wrong).

When troubleshooting occasionally I did read the code. There was an issue with player to player matching where it was just kind of stuck, and I gave it a simpler solution (conceptually, not actual code) that worked for the design constraints.

I did find myself hinting/telling it to do things like centralize the CSS.

It was a really useful exercise in learning. I'm going to write an article about it. My biggest insight is that "good" architecture for a current-generation AI is probably different than for humans because of how attention and context work in the models/tools (at least for the current Claude Code). Essentially "out of sight out of mind" creates a dynamic where decomposing code leads to an increase in entropy when a model is working on it.

I need to experiment with other agentic tools to see how their context handling impacts possible scope of work. I extensively use GitHub Copilot, but I control scope, context, and instructions much tighter there.

I hadn't really used hands off automation much in the past because I didn't think the models were at a level that they could handle a significantly sized unit of work. Now they can with large caveats. There also is a clear upper bound with the Claude Code, but that can probably be significantly improved by better context handling.


so if you're an experienced, trained developer you can now add AI as a tool to your skill set? This seems reasonable, but is also a fundamentally different statement than what every. single. executive. is parroting to the echo chamber.


I already imagine future devs wide-eyed saying things like: "He/She can _debug_!!"


I have a strong memory from the start of my career, when I had a job setting up Solaris systems and there was a whispered rumour that one of the senior admins could read core files. To the rest of us, they were just junk that the system created when a process crashed and that we had to find and delete to save disk space. In my mind I thought she could somehow open the files in an editor and "read" them, like something out of the Matrix. We had no idea that you could load them into a debugger which could parse them into something understandable.


I once showed a reasonably experienced infrastructure engineer how to use strace to diagnose some random hangs in an application, and it was like he had seen the face of God.


This reminded me of Asimov's short story "The Feeling of Power" https://hex.ooo/library/power.html


Thank you for that, what a fun read :D


I mean I already heard comments about myself when I went and RTFM'd

"You read manuals?!?"

"... Yeah? (pause) Wait, you don't?!?!?"


(Anecdote) Best job I ever had. I walked in and they were like "yeah, we don't have any training or anything like that, but we've got a fully set up lab and a rotating library of literature." <My Boss>: "Yeah, I'm not going to be around, but here are the office keys" - don't blow up the company, pretty much.


I don't really see the connection here, but it was a nice anecdote of a trusting environment.


To be honest, I do find most manuals (man pages) horrible for quickly getting information on how to do something, and here LLMs do shine for me (as long as they don't mix up version numbers).


For man pages, you have to already know what you want to do and just want information on how exactly to do it. They're not for learning about the domain. You don't read the find manual to learn the basics of filesystems.


I love manual pages, at least on/from OpenBSD.


Imagine reading in 2025, when you can just watch tiktok about it!

/s


Pretty much. The hesitancy to read documentation was there long before TikTok and LLMs.

"Teach me how to use Linux [but I hate reading documentation]".

It infuriated me.


Yesterday I tried to vibe install (TM) marker-api docker image and failed miserably. Still, I was able to try. :)


I mean the process either works, or it doesn’t. Meaning it either brings in the expected value with acceptable level of defects or it doesn’t.

From a higher up’s perspective what they do is not that different from vibe coding anyway. They pick a direction, provide a high level plan and then see as things take shape, or don’t. If they are unhappy with the progress they shake things up (reorg, firings, hirings, adjusting the terminology about the end goal, making rousing speeches, etc)

They might realise that they bet on the wrong horse when the whole site goes down and nobody inside the company can explain why. Or when the hackers eat their face and there are too many holes to even say which one they did come through. But these things regularly happen already with the current processes too. So it is more of a difference in degree, not kind.


I agree with this completely. I get the impression that a lot of people here think of software development as a craft, which is great for your own learning and development but not relevant from the company's perspective. It just has to work good enough.

Your point about management being vibe coding is spot on. I have hired people to build something and just had to hope that they built it the way I wanted. I honestly feel like AI is better than most of the outsourced code work I do.

One last piece, if anyone does have trouble getting value out of AI tools, I would encourage you to talk to/guide them like you would a junior team member. Actually "discuss" what you're trying to accomplish, lay out a plan, build your tests, and only then start working on the output. Most examples I see of people trying to get AI to do things fail because of poor communication.


> I get the impression that a lot of people here think of software development as a craft, which is great for your own learning and development but not relevant from the company's perspective. It just has to work good enough.

Building the thing may be the primary objective, but you will eventually have to rework what you've built (dependency changes, requirement changes,...). All the craft is for that day, and whatever goes against that is called technical debt.

You just need to make some tradeoffs between getting the thing out as fast as possible and being able to alter it later. It's a spectrum, but instead of discussing it with the engineers, most executive suites (and their managers) want to hand down edicts from on high.


> Building the thing may be the primary objective, but you will eventually have to rework what you've built (dependency changes, requirement changes,...). All the craft is for that day, and whatever goes against that is called technical debt.

This is so good I just wanted to quote it so it showed up in this thread twice. Very well said.


The whole auto factory thing sounds completely misinformed to me. Just because a machine made it does not mean the output isn't checked in a multitude of ways.

Any manufacturing process is subject to quality controls. Machines are maintained. Machine parts are swapped out long before they lead to out-of-tolerance work. Process outputs are statistically characterised, measured and monitored. Measurement equipment is recalibrated on a schedule. 3d printed parts are routinely X-rayed to check for internal residue. If something can go wrong, it sure as hell is checked.

Maybe things that can't possibly fail are not checked, but the class of software that can't possibly fail is currently very small, no matter who or what generates it.


Additionally production lines are all about doing the same thing over and over again, with fairly minimal variations.

Software isn't like that. Because code is relatively easy to reuse, novelty tends to dominate new code written. Software developers are acting like integrators in at least partly novel contexts, not stamping out part number 100k of 200k that are identical.

I do think modern ML has a place as a coding tool, but these factory like conceptions are very off the mark imo.


But they are processing data item 100k out of 200k in identical ways.


On the auto factory side, the Toyota stuck gas pedal comes to mind, even if it can happen only under worst-case circumstances. But that's the (1 - 0.[lots of nines]) case.

On the software side, the THERAC story is absolutely terrifying - you replace a physical interlock with a software-based one that _can't possibly go wrong_ and you get a killing machine that would probably count as unethical for executions of convicted terrorists.


THERAC was terrible. And intermittent too, for extra horror.

I am a strong proponent of hardware level interlocks for way more mundane things than that. It helps a lot in debugging to narrow down the possible states of things.


A buddy of mine was a director in a metrology integration firm that did nothing but install lidar, structured light and other optical measurement solutions for auto assembly plants. He had a couple dozen people working full time on new model line build outs (every new model requires substantial refurb and redesign to the assembly line) and ongoing QA of vehicles as they were being manufactured at two local Honda plants. The precision they were looking for is pretty remarkable.


> Any manufacturing process is subject to quality controls.

A few things on this illusion:

* Any manufacturer will do everything in their power to avoid meeting anything but the barest minimums of standards due to budget concerns

* QA workers are often pressured to let small things fly and cave easily because they simply do not get paid enough to care and know they won't win that fight unless their employer's product causes some major catastrophe that costs lives

* Most common goods and infrastructure are built by the lowest bidder with the cheapest materials using underpaid labor, so as for "quality" we're already starting at the bottom.

There is this notion that because things like ISO and QC standards exist, people follow them. The enforcement of quality is weak and the reach of any enforcing bodies is extremely short when pushed up against the wall by the teams of lawyers afforded to companies like Boeing or Stellantis.

I see it too regularly at my job to not call out this idea that quality control is anything but smoke and mirrors, deployed with minimal effort and maximum reluctance. Hell, it's arguably the reason why I have a job since about 75% of the machines I walk in their doors to fix broke because they were improperly maintained, poorly implemented or sabotaged by an inept operator. It leaves me embittered, to be honest, because it doesn't have to be this way and the only reason why it is boils down to greed and mismanagement.


> Any manufacturer will do everything in their power to avoid meeting anything but the barest minimums of standards due to budget concerns

Perhaps this is industry dependent?

In my country’s automotive industry, quality control standards have risen a lot in the past few decades. These days consumers expect the doors and sunroof not to leak, no rust even after 15 years being kept outdoors, and the engine to start first time even after two weeks in an airport carpark.

How is this achieved? Lots of careful quality checking.


That's great! What country are you in?

For context, I am in the US and in a position to see what goes on behind the scenes in most of the major auto-maker factories and some aerospace, but that's about as far as I can talk about it, since some of them are DoD contractors.

Quality Control is a valuable tool when deployed correctly and, itself, monitored for consistency and areas where improvement can happen. There is what I consider a limp-wristed effort to improve QC in the US, but in the end, it's really about checking some bureaucratic box as opposed to actually making better product, although sometimes we get lucky and the two align.


You don't work in a job with real QC. Do some pharma work and then get back to me.


Your comment is adversarial for no reason. "Real" QC? I work for people who make vehicles on the ground and in the air that hold passengers expecting to be delivered safely. Some of them even make parts for very large structures that are expected to remain standing when the wind blows too hard or the ground shakes too much. Let's talk about "real" QC and these other imagined types that must exist to you.

Can you define the differences between "real" QC and other versions? Does this imply a "fake" QC? Does that mean that our auto and aerospace manufacturers can't hold themselves to the same quality standards as Big Pharma, since both are ultimately trying to achieve the same goal in avoiding the litigation that comes with putting your customers at risk?

Let's not pretend that pharma co's have never side-stepped regulation or made decisions that put swaths of the population in a position to harm themselves.

My argument was dispelling the general idea that just because rules are in place, they are being followed. Believe me, I'd love to live in that world, but have seen little evidence that we do.


I had to scroll back to see if the same poster calling OP adversarial was GP. And it was.

Your

> Any manufacturer will do everything in their power to avoid meeting anything but the barest minimums of standards due to budget concerns

set the adversarial bar, and OP was just countering in kind.


Just a reader of this thread, but that wasn't my take on it. The text you quoted was, I think, an overgeneralisation (there are certainly manufacturers who perform above the baseline standards), but I don't think it was worded adversarially? It then provided some more information (some of which I have heard from others in the industry, especially around QA being pressured to pass defective items).

The post they are complaining about was a driveby dismissive statement that didn't add anything to the discussion whatsoever.


Huge difference between me saying manufacturers will cut corners in any way they can (maybe you're taking this as consumer vs manufacturer?) and the person who replied to me saying I don't work a job that encounters "real" (read: intentionally vague and diminutive) QC standards. One is a blanket statement that is backed by easily accessed and very public evidence, the other is a personal attack.

Please do not conflate the two.


I'm not sure how it's a personal attack, it would be like someone who bakes bread for a living saying that the tolerance of products doesn't really matter, compared with a machinist who knows exactly how much it can matter. It's quantifiably true that serious QA is a thing, your industry just doesn't have it. If you choose to turn that into a personal attack I think that says more about your internal state than it does about the actual post I made.

> Any manufacturer will do everything in their power to avoid meeting anything but the barest minimums of standards due to budget concerns

in capitalistic countries, yes


Communism doesn't exactly have a great reputation for quality either. This is a human problem.


It’s been said that every jetliner in the air right now has a 1-inch hairline fracture in it somewhere, but that the plane is designed to tolerate the failure of any one or two parts.

Software doesn’t exactly work the same way. You can make “AI” that operates more like [0,1] (a continuous range), but at the end of the day the computer is still going to {0,1} (discrete).


You can build redundancies into software similar to what you’re alluding to in airplanes. Some safety critical applications require it.


Something I've been thinking about is that most claims of AI productivity apply just as well (and more concretely and reliably) to just... better tooling and abstractions

Code already lets us automate work away! I can stamp out ten instances of a component or call a function ten times and cut my manual labor by 90%

I'm not saying AI has nothing to add, but the "assembly line" analogies - where we precisely factor out mundane parts of the process to be automated - describe what we've been doing this whole time

AI demands a whole other analogy. The intuitions from automating factories really don't apply, imo.

Here's one candidate: AI is like gaining access to a huge pool of cheap labor, doing tasks that don't lend themselves to normal automation. Something like when manufacturing got offshored to China in the late 20th century

If you're chronically doing something mundane in software development, you're doing something wrong. That was true even before AI.


100%. I keep thinking this, and sometimes saying it.

Sure, if you're stuck in a horrible legacy code base, it's harder. But you can _still_ automate tedious work, given you can manage to put in the proverbial stop for gas. I've seen loads of developers just happily copy paste things together, not stopping to wonder if it was perhaps time to refactor.


Exactly that. Software development isn't about writing code, never was, it's about what code to write. Doesn't matter if I type in the code or tell an AI what code it should type.

I'll admit that assuming it's correct, an AI can type faster than me. But time spent typing represents only a fraction of the software development cycle.

But, it'll take another year or two on the hype cycle for the gullible managers being sold AI to realise this fully.


Worse: If typing in code takes more time (i.e. costs more), there's a larger incentive to refactor.

I spent quite a bit of time as a CTO, and at some point there's a conversation about the business value of refactoring. That's a great conversation to have I think, it should ultimately be about business value, but good code vs bad code is a bit hard to quantify. What I usually reached for is that refactoring brings down lead time of changes, i.e. makes them faster. Tougher story these days I guess :D


I've been telling friends and family - and kids interested in entering this field - this for years (decades actually, at this point), that I don't spend much of my time typing out code.

I've found that it's very hard for people to conceptualize what else it would be that we're spending our time doing.


I think that's a good part of the issue. You have the computer that is doing stuff. And you have the software engineer that was hired to make it do the stuff. And the connection between them is the code. That's pretty much the simplistic picture that everyone has.

But the truth is that the way the computer works is alien, and anything useful becomes very complex. So we've come up with all those abstractions, embedded them in programming languages with which we create more abstractions trying to satisfy real world constraints. It's an imaginary world which is very hard to depict to other people. It's not purely abstract like mathematics, nor is it fully physical like mechanics.

The issue with LLMs is that whatever they produce has a great chance of being distorted. At first glance, it looks correct, but the more you add to it, the more visible the flaws are, until you're left with a Frankenstein monster.


Yep, well put.

But to your last part, this is why I think the worst fears I see from programmers (here and in real life) are unlikely to be a lasting problem. If you're right - and I think you are - that the direction things may be headed as-is, with increasingly less sophisticated people relying increasingly more on AIs to build an increasingly large portion of software, is going to result in big messes of unworkable software. But if so, people are going to get wise to that, and stop doing it. It won't be tenable for companies to go to market with "Frankenstein monsters", in the long term.

The key is to look through the tumultuous phase and figure out what it's gonna look like after that. Of course this is a very hard thing to predict! But here are the outcomes I personally put the most weight on:

1. AIs might really get good enough that none of us write code anymore, in the same way that it's quite rare to write assembly code now.

In this case, I think entrepreneurship or research will be the way to go. We'll be able to do so much more if software is truly easy to create!

2. We're still writing, editing, and debugging code artifacts, but with much better tools.

In this case, I think actually understanding how software works will be a very valuable skill, as knocking out subtly broken software will be a dime a dozen, while getting things working well will be a differentiator.

Honestly I don't put much weight on the version of this where nobody is doing anything because AI is running everything. I recognize that lots of smart people disagree with me about this, but I remain skeptical.


> AIs might really get good enough that none of us write code anymore, in the same way that it's quite rare to write assembly code now.

I don't have much hope for that, because the move from assembly to higher level programming languages is a result of finding patterns that are highly deterministic. It's the same as metaprogramming currently. It's not much about writing the code to solve a problem, but to find the hidden mechanism behind a common class of problems and then solve that instead. Then it becomes easier to solve each problem inside the class. LLMs are not reliable for that.

> 2. We're still writing, editing, and debugging code artifacts, but with much better tools.

I'd put a lot more weight on that, but we already have a lot of tooling that we don't even use (or replicate across software ecosystems). I'd care much more about a nice debugger for Go than LLM tooling. Or a modern Smalltalk.

But as you point out, the issue is not tooling. It's understanding. And LLMs can't help with anything if you're not improving that.



I probably should have specified: I didn't list those in the order of what I put most weight on. I agree with you that I more heavily weight the one I wrote as #2.

I think you and I probably mostly agree on where things are heading, except that just inferring from your comment, I might be more bullish than you on how much AIs will help us develop those "much better tools".


> It's an imaginary world which is very hard to depict to other people. It's not purely abstract like mathematics, nor it's fully physical like mechanics.

This is one of the reasons I like the movie Hackers - the visualizations are terrible if you take it at face value, but if you think of it as a representation of what's going on inside their minds it works a whole lot better, especially compared to the lines-of-code-scrolling-past version usually shown in other movies/tv.

For anyone who doesn't know what I'm talking about, just the hacking scenes: https://youtu.be/IESEcsjDcmM?t=135


The correct analogy is that software engineers design and build the _factory_. The software performs the repeatable process as defined by code, and no person sits and watches if each machine instruction is executed correctly.

Do you really want your auto tool makers to not ensure the angles of the tools are correct _before_ you go and build 10,000 (misshapen) cars?

I’m not saying we don’t embrace tooling and automation as appropriate at the next level up, but sheesh that is a misguided analogy.


Yes and this is just garden-variety abstraction and toolmaking, which is what programmers have done since the very beginning


> The correct analogy is that software engineers design and build the _factory_.

This is, I think very important especially for non-technical managers to grasp (lol, good luck with that).


works for tesla


Depends on your definition of "works".

Do they YOLO the angles of tools and then produce 10,000 misshapen cars? Yes. But do they also sell those cars? Impressively, also yes, at least up until a couple months ago. Prior to Elon's political shenanigans of the last few months consumers were remarkably tolerant of Tesla's QC issues.


> It would be crazy if in an auto factory people were measuring to make sure every angle is correct

They are.

Mechanical engineers measure more angles and measurements than a consultant might guess - it's a standard part of quality control, although machines often do the measuring, with the occasional human sampling as a back-up. You'd be surprised just how much effort goes into getting things correct, such as _packs of kitkats_ or _cans of coke_.

If getting your angles wrong risks human lives, the threat of prosecution usually makes the angles turn out right, but if all else fails, recalls can happen because the gas pedal can get stuck in the driver-side floor carpet.

Assembly-line engineering has in your favour that (A) CNC machines don't randomly hallucinate; they can fail or go out of tolerance, but usually in predictable ways and (B) you can measure a lot of things on an assembly line with lasers as the parts roll through.

It was thankfully a crazy one-off that someone didn't check that _the plugs were put back into the door_, but that could be a sign of bad engineering culture.


>It would be crazy if in an auto factory people were measuring to make sure every angle is correct

To someone who used to automate assembly plants, this sounds like a rationalization from someone who has never worked in manufacturing. Quality people rightly obsess over whether or not the machine is making “every angle” correct. Imagine trying to make a car where parts don’t fit together well. Software tends to have even more interfaces, and more failure modes.

I’ve also worked in software quality and people are great at rationalizing reasons for not doing the hard stuff, especially if that means confronting an undesired aspect of their identity (like maybe they aren’t as great of a programmer as they envision). We should strive to build processes that protect us from our own shortcomings.


> Imagine trying to make a car where parts don’t fit together well.

Don't have to imagine, just walk over to your local Tesla dealership.


What strikes me the most is not even that people are willing to do that, to just fudge their work until everything is green and call it a day.

The thing that gets me is how everyone is attaching subsidized GPU farms to their workflows, organizations and code bases like this is just some regulated utility.

Sooner or later this whole LLM thing will get monetized or die. I know that people are willing to push below-par work. I didn't know people were ready to put on the leash of some untested new sort of vendor lock-in so willingly, even arguing that this is the way. Some may even get the worst of both worlds: end up on the hook for a new class of sticker shock, pay down, and later have these products fail out from under them, leaving them out to dry.

Someone will pay for these models, the investors or the users so dependent they'll pay whatever price is charged.


Harper talks a lot about using defensive coding (tests, linters, formal verification, etc) so that it's not strictly required to craft and understand everything.

This article (and several that follow) explain his ideas better than this out of context quote.

https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/
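
For what it's worth, here's a minimal sketch of what that scaffolding can look like: pin the behaviour you care about in tests and treat the generated implementation as a black box (slugify and myapp.text are hypothetical stand-ins, not from Harper's post):

    # Tests carry the understanding; the generated code just has to satisfy them.
    from myapp.text import slugify  # hypothetical LLM-generated module

    def test_known_examples():
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  already-slugged  ") == "already-slugged"

    def test_output_is_url_safe():
        for raw in ["foo/bar?baz", "UPPER case", "tabs\tand spaces"]:
            slug = slugify(raw)
            assert slug == slug.lower()
            assert all(c.isalnum() or c == "-" for c in slug)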


The issue is that (the way I see it happening more and more in the real world):

- tests are run by machines

- linters are (being) replaced by machines/SaaS services (so.. machines)

- formal verification: yes - sure, 5 people will review the thousands of lines of code written every day, in a variety of languages/systems/stacks/scripts/RPAs/etc., or they will simply check that the machines return "green-a-OK" and ask the release team to push it to production.

The other thing that I have noticed is that '(misplaced) trust erodes controls'. "Hey, the code hasn't broken for 6 months, so let's remove ABC and DEF controls", and then boom goes the app (because we used to test integration, but 'come on - no need for that').

Now.. this is probably the paranoid (audit/sec) in me, but stuff happens, and history repeats itself.

Also.. Devs are a cost center, not a profit center. They are "value enablers" not "value adders". Like everything and everyone else, if something can be replaced with something 'equally effective' and cheaper, it is simply a matter of time.

I feel that companies want to both run toward this new gold rush, and at the same time do it slowly and see if this monster bites (someone else first).


> Devs are cost center, not a profit center. They are "value enablers" not "value adders".

I don’t understand this and I think it would require breaking my brain in order to.

A person pays the company to provide a service.

The developers create and maintain (part of) that service.


It’s pretty simple. Software quality isn’t on the spreadsheet. The cost to build it is.

The value of the products coming from research and development are not on the spreadsheet everyone is looking at. The cost to develop them is.

If it’s not on the spreadsheet, it doesn’t exist to the people who make the decisions about money. They have orders to cut spending and that’s what they’ll do.

This may sound utterly insane, but business management is a degree and job. Knowledge about what you are managing is secondary to knowing how to manage.

That’s why there is an entire professional class of people who manage the details of a project. They also don’t need to know what they are getting details for. Their job is to make numbers for the managers.

At no point does anyone actually care what these numbers mean for the company as a whole. That’s for executives.

Many executives just look at spreadsheets to make decisions.


A business tends to break its components down into different units or departments, and then (from a financial perspective) largely boils down those units into "how much money do you spend" and "how much money did you bring in as income." Software being a cost center means that the expected income of the unit is $0, and thus it shouldn't be judged on its operating profit. It doesn't mean that software doesn't have value, that investment in software doesn't bring greater rewards.

But it does mean that the value that software brings isn't directly attributable to investment in software (as far as the business can see). And being more invisible means that it tends to get the shaft somewhat on the business side of things, because the costs are still fully visible.


Yes. You either MAKE money or SPEND money (sorry for the caps).

Audit, Security, IT (internal infra people), cleaning personnel, SPEND money.

Sales, Product Development, MAKE money.

Once the "developers" can charge per-hour to the clients, then we love them because they BRING money. But those 'losers' that slow down the 'sales of new features' with their 'stupid' checks and controls and code-security-this and xss-that, are slowing down the sales, so they SPEND money and slow down the MAKING of money.

Now, in our minds, it is clear that those 'losers' who do the code check 'stuff' are making sure that whoever buys today, will come and buy again tomorrow. But as it has been discussed here, the CEOs need to show results THIS quarter, so fire 20% of the security 'losers' to reduce HR costs, hire 5 prompt engineers to pump out new features, and pray to your favourite god that things don't go boom :)

Meanwhile most CEOs have a golden parachute, so it is definitely worth the risk!


The point is devs aren't sales/client facing. So from the customers perspective, it's just a delivery detail.


More to the point, there are people who carefully ensure that those angles are correct, and that all of the angles result in the car’s occupants arriving at their destination instead of turning into a ball of fire. It’s just that this process takes place at design time, not assembly time.

Software works the same way. It’s even more automated than auto factories. Assembly is 100% automated. Design is what we get paid to do, and that requires understanding, just like the engineers at Ford need to understand how their cars work.


You're just not visionary enough to see the potential in vibe-CADing an automobile!


You joke, but there are plenty of companies working on something similar to this, for example: https://www.sw.siemens.com/en-US/technology/generative-desig...

I haven't touched CAD for a couple of years, but I get the impression that (inevitably) the generative design hype significantly exceeds the current capability.


I see you're familiar with zoo. [0]

[0] https://zoo.dev/design-studio


You just need an MBA to see that


Seems like the more distanced we get from actual engineering methods, the more fucked up our software and systems become. Not really surprising to be honest. Just look at the web as an example. Almost everyone is just throwing massive frameworks and a hundred libraries as dependencies together and calls it a day. No wait, must apply the uglifier! OK now call it a day.


There's no incentive for engineering methods because there's no liability for software defects. "The software is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability." It's long overdue but most people in the software industry don't want liability because it would derail their gravy train.


I'd love to blame shitty web apps on outsourcing of computing power to your users... But you'll hear countless stories of devs blowing up their AWS or GCP accounts with bad code or architecture decisions (who cares if this takes a lot of CPU to run, throw another instance at it!) , so maybe it's just a lazy/bad dev thing.


> That just strikes me as an odd thing to say.

Is it though? It could be interpreted as an acknowledgement. Five years from now, testing will be further improved, yet the same people will be able to take over your iPhone by sending you a text message that you don't have to read. It's like expecting AI to solve the spam email problem, only to learn that it does not.

It's possible to say "we take the security and privacy of our customers seriously" without knowing how the code works. That's the beauty of AI. It legitimizes and normalizes stupid humans without measurably changing the level of human stupidity or the quality or efficiency of the product.

Sold! Sold! Sold!


And except that the factory analogy of software delivery has always been utterly terrible.

If you want to draw parallels between software delivery and automotive delivery then most of what software engineers do would fall into the design and development phases. The bit that doesn’t: the manufacturing phase - I.e., creating lots of copies of the car - is most closely modelled by deployment, or distribution of deliverables (e.g., downloading a piece of software - like an app on your phone - creates a copy of it).

The “manufacturing phase” of software is super thin, even for most basic crud apps, because every application is different, and creating copies is practically free.

The idea that because software goes through a standardised workflow and pipeline over and over and over again as it’s built it’s somehow like a factory is also bullshit. You don’t think engineers and designers follow a standardised process when they develop a new car?

It would be crazy for auto factory workers to check every angle. It is absolutely not crazy for designers and engineers to have a deep understanding of the new car they’re developing.

The difference between auto engineering and software engineering is that in one your final prototype forms the basis for building out manufacturing to create copies of it, whereas in the other your final prototype is the only copy you need and becomes the thing you ship.

(Shipping cadence is irrelevant: it still doesn’t make software delivery a factory.)

This entire line of reasoning is… not really reasoning. It’s utterly vacuous.


It’s not only vacuous. It’s incompetent by failing to recognize the main value-add mechanisms, in software delivery, manufacturing, or both. I’m not sure if the Dunning-Kruger model is scientifically valid or not, but this would be a very Dunning-Kruger thing to say (high confidence, total incompetence).


> The “manufacturing phase” of software is super thin, even for most basic crud apps, because every application is different, and creating copies is practically free.

This is not true from a manager's perspective (indoctrinated by Taylorism). From a manager's perspective, development is manufacturing, and underlying business process is the blueprint.


I don't know about that: I'm a manager, I'm aware of Taylorism (and isn't that guy discredited by sensible people anyway?), and I don't think the factory view holds up. Manufacturing is about making the same thing (or very similar things) over and over and over again at scale. That almost couldn't be further from software development: every project is different, every requirement is different, the effort in every new release goes into a different area of the software. The fact that, after it's gone through the CD pipeline, the output is 98% the same is irrelevant, because all the software development effort for that release went into the 2%.


> The idea that because software goes through a standardised workflow and pipeline over and over and over again as it’s built it’s somehow like a factory is also bullshit.

I don't think it's bs. The pipeline system is almost exactly like a factory. In fact, the entire system we've created is probably what you get when cost of creating a factory approaches instantaneous and free.

The compilation step really does correspond to the "build" phase in the project lifecycle. We've just completely automated it by this point.

What's hard for people to understand is that the bit right before the build phase that takes all the man-hours isn't part of the build phase. This is an understandable mistake, as the build phase in physical projects takes most of the man-hours, but it doesn't make it any more correct.


You're misunderstanding my meaning with pipeline: you're thinking it's just the CD part of the equation. I'm thinking about it as the whole software development and delivery process (planning, design, UX, dev, test, PR reviews, CD, etc), which can be standardised (and indeed some certifications require it to be standardised). In that context, even when the development pipeline follows a standardised process, most of it is nothing like a factory: just the CD part, as you've correctly identified. Because the output of CD will be, for mature software, 99+% similar to the output of the previous build, it is definitely somewhat analogous to manufacturing, although if you think about adding tests, etc., the process evolves a lot more often and rapidly than many production lines.


> You don’t think engineers and designers follow a standardised process when they develop a new car?

Apparently the Cybertruck did not. And that sort of speaks for itself.


It's a reckless stance that should never ever come from a software professional. "Let's develop modern spy^H^H^Hsoftware in the same way as the 737 Max, what could possibly go wrong?"


One of the reasons outsourcing for software fizzled out some compared to manufacturing is because in a factory you don't have "measuring to make sure every angle is correct" because (a) the manufacturing tools are tested and repeatable already and (b) the heavy lifting of figuring out all the angles was done ahead of time. So it was easy to mechanize and send to wherever the labor was cheapest since the value was in the tools and in the plans.

The vast majority of software, especially since waterfall methods were largely abandoned, has the planning being done at the same time as the "execution". Many edge cases aren't discovered until the programmer says "oh, huh, what about this other case that the specs didn't consider?" And outsourcing then became costly because that feedback loop for the spec-refinement ran really slowly, or not at all. Spend lots of money, find out you got the wrong thing later. So good luck with complex, long-running projects without deeply understanding the system.

Alternately, compare to something more bespoke and manual like building a house, where the tools are less precise and more of the work is done in the field. If you don't make sure all those angles are correct, you're gonna get crappy results.

(The most common answer here seems to be "just tell the agent what was wrong and let it iterate until it fixes it." I think it remains to be seen how well "find out everything that is wrong after all the code is written, and then tell the coding agent(s) to fix all of those" will work in practice. If nothing else, it will require a HUGE shift in manual testing appetite. Maybe all the software engineers turn into QA engineers + deployment engineers.)


> outsourcing for software fizzled out

Any data on that? I see everyone trying to outsource as much as they can. Sure, now it is moving toward AI, but every company I walk into has 10-1000s of FTEs in outsource countries.

I see most Fortune 1000 companies here doing some type of agile planning/execution which is in fact more waterfall. The people here in the west are more management and client facing, the rest is 'thrown over the fence'.


If they're FTEs, that's not outsourcing, it's moving your employees to a cheaper location.

Outsourcing means laying off your FTEs and shoving the entire project over to a WITCH consulting shop.


"FTE" for lack of a better word: they are not employees of the company, but they work for it full-time; they are employed by some outsourcing firm. I guess "FTE" implies an employee of the Western company, which they are not, but what would be the term? Full-time remote worker?


Maybe you mean "contractor" ?


A lot of those deals between western companies and companies in developing countries are not fixed-time contracts but ongoing collaborations.


And that is not outsourcing? Definitely called that here...


The point being made here is that the biggest software companies still employ lots of programmers, but the biggest manufacturing companies don't employ lots of factory workers. I don't think you need data; just think about Microsoft vs. General Motors, etc.


Sure, but my claim is that these things are still outsourced en masse and nothing fizzled out from that perspective.


It did fizzle out the first time round. It won't on this occasion.


It's the lead poisoning of our time: companies getting high on hype-tech and getting killed off because the "freedom from programmers" no-code tools create Gordian knots of projects.

And all because the MBAs yearn for freedom from dependencies and thus reality.


That's wild. You can't say this statistical, hyper-generalised system of "AI" is in any way comparable to the outputs of highly specific, highly deterministic machinery. It's like comparing a dice roll to a clock. If anything reviewing engineers now need to "measure the angles" more closely than ever.


> It would be crazy if in an auto factory people were measuring to make sure every angle is correct

That could not be further from the truth.

Take a decent enterprise CNC machine (look on YouTube, lots of videos), one based on servos, not the amateur stepper-motor machines. That servo-based machine is measuring distances and angles hundreds of times per second, because that is how it works. Your average factory has a bunch of those.

Whoever said that should try getting their head out of their ass at least every other year.


> Reed’s statement feels very much like a justification of "if it compiles, ship it!"

Not really. More like: if fopen works fine, don't bother looking at how it does so.

SWE is going to look more like QA. I mean, as a SWE, if I use the WebRTC library to implement chat and it works almost always but just this once it didn't, my manager is likely going to ask me to file a bug and move on.


>It would be crazy if in an auto factory people were measuring to make sure every angle is correct

Yeah, but there's still something checking the angles. When an LLM writes code, if it's not the human checking the angles, then nothing is, and you just hope that the angles are correct, and you'll get your answer when you're driving 200 km/h on the Autobahn.


>It would be crazy if in an auto factory people were measuring to make sure every angle is correct

They need to read “the code is the design”. When you are cutting the door for the thousandth car, you are past design and into building.

For us building is automatic - take code turn into binary.

The reason we measure the door is that we are at the design stage and you need to make sure everything fits.


The ultimate irony being that they actually do measure all kinds of things against datum points as the cars move down the line, since the earlier you junk a faulty build, the less it has cost you.


To me the biggest difference between a software engineer and an assembly worker is the worker makes cars, the software engineer makes the assembly line.


It’s almost like he doesn’t know what a programming language is or what a compiler is for.

Agree, this comment makes no sense.


Let them do it. It will naturally blow up in their faces. Let Amazon be the early adopters of this trend.


The fact that they used the auto industry as an example is funny, because the Toyota way and six sigma came out of that industry.

So you are telling me, your AI code passes a six sigma grade of quality control?

I have a bridge to sell you. No, Bridges!


"The fact that they used the auto industry as an example is funny, because the Toyota way and six sigma came out of that industry."

It's even funnier when you consider that Toyota has learned, thanks to the pandemic, how bad an idea lean manufacturing/6-Sigma/5S can be; they're moving away from it to some degree now.


Technically, Toyota doesn’t use 6sig, and when you say they are moving away from it, what do you mean? Because I would be deeply amused (and shocked) if the Muda, Muri people would be giving up on quality control.


Idk about Toyota moving away from Kaizen, but they certainly have moved away from JIT. Toyota pioneered Just In Time (JIT) part inventory which dramatically lowers inventory costs and makes balance sheets look far more attractive.

What Toyota realized in 2011 due to the Fukushima disaster however is that this completely fails for computer chips because the pipeline is too long. So they kept JIT for steel, plastic parts etc but for microcontrollers, power supply chips, etc they stockpile large quantities.


JIT is totally independent from 6-sig.


They all come together to form a cohesive quality and production program.


> six sigma came out of that industry

Six Sigma came out of Motorola, who still practice it today.

It was then adopted by the likes of GE, before finding its way into the automotive and many other manufacturing industries.


It's wild to think that someone who purports to be an expert would compare an assembly line to AI, where the first is the ultra-optimization of systems management with human-centric processes thoughtfully layered on top and the latter a non-deterministic black box to everybody. It's almost like they are willfully lying...


I keep feeling like there's a huge disconnect between the owners/CTOs/managers with how useful they think LLMs _are supposed to be_, vs the people working on the product and how useful they think LLMs _actually are_. The article describes Harper Reed as a "longtime programmer", so maybe he falsifies my theory? From Wikipedia:

>Harper Reed is an American entrepreneur

Ah, that's a more realistic indicator of his biases. Either there's some misunderstanding, or he's incorrect, or he's being dishonest; it's my job to make sure the code that I ship is correct.


His comment is a reflection of the software operating model they expect to have in the future.


That's a weird / wrong analogy. Software engineers design and program the robots, they don't build the cars.


That analogy wouldn't give a person much confidence in the rest of his ability/judgement.


>>That just strikes me as an odd thing to say.

This is along the same lines as why I don't expect language features to break. I assume they work. You have to accept that some abstractions work, and build on top of them.

We will reach some point where we will have to assume AI is generating the correct code.


Car factory is a funny example because that's also one of the least likely places you will see AI assisted coding. Safety critical domains are a whole different animal with SysML everywhere and thousand page requirements docs.


It was not a strong metaphor! Can't win them all.

You are correct tho. I do think that we are approaching the point of "If it compiles, ship it"


Agreed. The real world is governed by physics, which tends to work very well with interpolation, predictability and statistics.


Also, software is a factory and the LLM is a workshop building factories. And I strongly believe that people building factories still do a lot of (a) ten people pounding out the metal AND (b) measuring to check.


And yet people here keep arguing proper cs education is not important.

Without proper understanding of CS, this is what we get. Lack of rigour.


Companies that lack engineering rigor are basically pre-filtered customers, packaged for a buyout by a company or VC that can afford a more rigorous company structure.


Amazing. What has Harper Reed written? Does he have any significant output?

This is frankly the most idiotic statement I have heard about programming yet.


This made the rounds recently: https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/

That quote really misrepresents his writing.


Then one wonders what the agenda of the NYT is here. Does this also misrepresent Willison's writing?

And just as the proliferation of factories abroad has made it cheap and easy for entrepreneurs to manufacture physical products, the rise of A.I. is likely to democratize software-making, lowering the cost of building new apps. “If you’re a prototyper, this is a gift from heaven,” Mr. Willison said. “You can knock something out that illustrates the idea.”

Why do they cite bloggers who relentlessly push this technology rather than interviewing a representative selection of programmers?


Probably because the author's main focus was about potential change in working conditions and not the veracity of the AI hype. But disappointing nonetheless.

To be clear, I'm sure some critical software engineering jobs will be replaced by AI though. Just not in the way that zealots want us to think. From the looks of it right now, AI is far from replacing software engineers in terms of competence. The utter incompetence was in full public display just last week [1]. But none of that will matter to greedy corporate executives, who will prioritize short-term cost savings. They will hop from company to company, personally reaping the benefits while undermining essential systems that users and society rely on with AI slop. That's part of the reason why the C-suites are overhyping the technology. After all, no rich executive has faced consequences for behaving this way.

[1]: https://news.ycombinator.com/item?id=44050152


What does your link prove? That article is rambling nonsense that is pushing AI. It confirms the NYT quote.


In the beginning there was a saying, "nobody cares what your code looks like, as long as it works". We went full circle.


To me it feels like part of the hype train, like crypto & VR.

I recently had the (dis)pleasure of fixing a bug in a codebase that was vibe coded.

It ends up being a collection of disorganized business problems converted into code, without any kind of structure.

Refinements are implemented as super-narrow patches, resulting in complex and unorganized code, whereas a human developer might take a step back to try and extract more common patterns.

And once you reach the limit of the context window you're essentially stuck, as the LLM can no longer keep track of its patches.

English (or all spoken human language) is not precise enough to articulate what you want your code to do, and more importantly, a lot of time and experience precedes code that a senior developer writes.

If you want to have this senior developer 'vibe' code, then you'll need to have a way to be more precise in your prompts, and be able to articulate all learnings from your past mistakes and experience.

And that is incredibly heavy. Remember, this is opposite from answering 'why did you write it like this'. This is an endless list of items that say 'don't do this, but this, in this highly specific context'.


Counterpoint: AI has helped me refactor things where I normally couldn't. Things like extracting some common structure that's present in a slightly different way in 30 places, where Cursor detects it, or suggesting potential for a certain pattern.

The problem with vibe coding is more behavioral I think: the person more likely to jump in the bandwagon to avoid writing some code themselves is probably not the one thinking about long term architecture and craftsmanship. It’s a laziness enhancer.
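To make the kind of refactor concrete, here is a minimal, hypothetical sketch of the "same structure, slightly different in 30 places" situation; the functions and fields are invented for illustration, not taken from any real codebase:

    # Before: near-duplicates scattered across the codebase (hypothetical).
    def fetch_user(client, user_id):
        for attempt in range(3):
            resp = client.get(f"/users/{user_id}")
            if resp.ok:
                return resp.json()
        raise RuntimeError("failed to fetch user")

    def fetch_order(client, order_id):
        for attempt in range(5):  # same shape, slightly different retry count
            resp = client.get(f"/orders/{order_id}")
            if resp.ok:
                return resp.json()
        raise RuntimeError("failed to fetch order")

    # After: the common structure extracted once; the variations become parameters.
    def fetch_json(client, path, retries=3):
        for attempt in range(retries):
            resp = client.get(path)
            if resp.ok:
                return resp.json()
        raise RuntimeError(f"failed to fetch {path}")

The point of the comment above is that a tool can spot this shared shape across many files faster than a human grepping for it.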


> AI has helped me refactor things where I normally couldn’t.

Reading "couldn't" as, you would technically not be able to do it because of the complexity or intricacy of the problem, how did you guarantee that the change offered by the AI made proper sense and didn't leave out critical patterns that were too complex for you to detect ?

Your comment makes it sound like you're now dependent on AI to refactor again if dire consequences are detected way down the line (in a few months for instance), and the problem space is already just not graspable by a mere human. Which sounds really bad if that's the case.


Before I started using advanced IDEs that could navigate project structures very quickly, it was normal to have a relatively poor visibility -- call it "fog of war/code". In a 500,000 line C++ project (I have seen a few in my career), as a junior dev, I might only understand a few thousand lines from a few files I have studied. And, I had very little idea of the overall architecture. I see LLMs here as a big opportunity. I assume that most huge software projects developed by non-tech companies look pretty similar -- organic, and poorly documented and tested.


That's what documentation is for. If you don't have that, AI won't figure it out either.


I'm not sure that's true?


Such a project is way too large for AI to process as a whole. So yes it's true.


I have a question: Many people have spoken about their experience of using LLMs to summarise long, complex PDFs. I am so ignorant on this matter. What is so different about reading a long PDF vs reading a large source base? Or can a modern LLM handle, say, 100 pages, but 10,000 pages is way too much? What happens to an LLM that tries to read 10,000 pages and summarise it? Is the summary rubbish?


Get the LLM to read and summarise N pages at a time, and store the outputs. Then, you concatenate those outputs into one "super summary" and use _that_ as context.

There's some fidelity loss, but it works for text because there's quite often so much redundancy.

However, I'm not sure this technique could work on code.
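A minimal sketch of that chunk-then-combine approach, assuming a hypothetical summarise() helper that wraps whatever LLM API you use (the helper and the pages-per-chunk number are assumptions for illustration):

    # Map-reduce summarisation sketch: summarise fixed-size chunks, then
    # summarise the concatenation of those summaries.
    def summarise(text: str) -> str:
        # Hypothetical wrapper around an LLM call; not a real library function.
        raise NotImplementedError("call your LLM of choice here")

    def summarise_document(pages: list[str], pages_per_chunk: int = 50) -> str:
        chunk_summaries = []
        for i in range(0, len(pages), pages_per_chunk):
            chunk = "\n".join(pages[i:i + pages_per_chunk])
            chunk_summaries.append(summarise(chunk))
        # Second pass: compress the per-chunk summaries into one "super summary".
        return summarise("\n\n".join(chunk_summaries))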


It can't handle large contexts, so the way they often do it is file by file, which loses the overall context.


Lots of models CAN handle large contexts; Gemini 2.5 Pro, their latest model, can take 1 million tokens of context.

What do you think about software such as Source Insight that gives developers an eagle-eye view of the project?


You raise a good point. I had a former teammate who swore by Source Insight. To repeat myself, I wrote: <<Before I started using advanced IDEs that could navigate project structures very quickly>>. So, I was really talking about my life before I started using advanced IDEs. It was so hard to get a good grasp of a project and navigate quickly.


This makes sense.


Sometimes a problem is a weird combination of hairy/obscure/tedious where I simply don’t have the activation energy to get started. Like, I could do it with a gun to my head.

But if someone else were to do it for me I would gratefully review the merge request.


Reviewing a merge request should require at least the same activation energy as writing the solution yourself, as in order to adequately evaluate a solution you first need to acquire a reference point in mind as to what the right solution should be in the first place.

For me personally, the activation energy is higher when reviewing: it’s fun to come up with the solution that ends up being used, not so fun to come up with a solution that just serves as a reference point for evaluation and then gets immediately thrown away. Plus, I know in advance that a lot of cycles will be wasted on trying to understand how someone else’s vision maps onto my solution, especially when that vision is muddy.


The submitter should also have thoroughly reviewed their own MR/PR. Even before LLMs, not reviewing your own code was completely discourteous and disrespectful to the reviewer. It's an embarrassing faux pas that makes the submitter and the team all look and feel bad when there are obvious problems that need to be called out and fixed.

Submitting LLM barf for review and not reviewing it should be grounds for termination. The only way I can envision LLM barf being sustainable, or plausible, is if you removed code review altogether.


> The submitter should also have thoroughly reviewed their own MR/PR

What does it mean to have to review your own code as a separate activity? Do many people contribute code that they wrote but… never read?

> Submitting LLM barf

Oh right…


Writing/reading code and reviewing code are distinct and separate activities. It's completely common to contribute code which is not production ready.

If you need an example, it's easy to add a debugging/logging statement like `console.log`, but if the coder committed and submitted the log statement, then they clearly didn't review the code at all, and there are probably much bigger code issues at stake. This is a problem even without LLMs.


Just call it "committing bad code". LLM autocomplete aside, I don't see how reviewing your own code can happen without either a split personality, or putting in enough time that you've completely forgotten what exactly you were doing and have fresh eyes and a fresh mind.

If person A committed code that looks bad to person B, it just means person A commits bad code by the standard of person B, not that person A “does not review own code”.

Maybe it’s a subjective difference, same as you could call someone “rude” or you could say the same person “didn’t think before saying”.


Person A can commit atrocious code all day, that's fine, but they still need to proofread their MR/PR and fix the outstanding issues. The only way to see outstanding issues is by reviewing the MR/PR. Good writers proofread their documents.


I just don’t see reading your own stuff as a different activity from writing. Generally, there is the author, and proofreader is a dedicated role.


I always review the local diff before pushing. Can sometimes catch typos, or unclear comments or naming issues.

The concept and design were by that point iterated on, so it doesn’t happen that I need to rewrite a significant amount of code.


My preferred workflow requires me to go through every changed chunk and stage them one by one. It’s very easy with vim-fugitive. To keep commits focused, it requires reading every chunk, which I guess is an implicit review of sorts.


I think, if it’s similar to how I feel about it, that it’s more about always being able to do it, but not wanting to expend the mental effort to correctly adjust all those 30 places. Your boss is not going to care, so while it’s a bit better going forward, justifying the time to do it manually doesn’t make sense even to yourself.

If you can do it using an LLM in a few hours however, suddenly making your life, and the lives of everyone that comes after you, easier becomes a pretty simple decision.


So everyone is talking across each other...

AI is a sharp tool, use it well and it cuts. Use it poorly and it'll cut you.

Helping you overcome the activation barrier to make that refactor is great, if that truly is what it is. That is probably still worth billions in the aggregate, given git is considered billion-dollar software.

But slop piled on top of slop piled on top of slop is only going to compound all the bad things we already knew about bad software. I have always enjoyed the anecdote that in China, Tencent had over 6k mediocre engineers servicing QQ then hired fewer than 30 great ones to build the core of WeChat...

AI isn't exactly free and software maintenance doesn't scale linearly


> But slop piled on top of slop piled on top of slop is only going to compound all the bad things we already knew about bad software

While that is true, AI isn’t going to make the big difference here. Whether the slop is written by AI or 6000 mediocre engineers is of no matter to the end result. One might argue that if it were written by AI at least those engineers could do something useful with their lives.


That's an important distinction!

There's a difference between not intellectually understanding something and not being able to refactor something because if you start pulling on a thread, you are not sure what will unravel!

And often there just isn't time allocated in a budget to begin an unlimited game of bug testing whack-a-mole!


To makeitdouble's point, how is this any different with an LLM provided solution? What confidence do you have that isn't also beginning an unlimited game of bug testing whack-a-mole?

My confidence in LLMs is not that high and I use Claude a lot. The limitations are very apparent very quickly. They're great for simple refactors and doing some busy work, but if you're refactoring something you're too afraid to do by hand then I fear you've simply deferred responsibility to the LLM - assuming it will understand the code better than you do, which seems foolhardy.


Especially with refactoring, it tends to be tedious and repetitive work that is slightly too complicated for a (regex) search replace.

A lot of repetitive slight variations on the same easy to describe change sounds pretty good to ask an LLM to do quickly.


As the OP, for the case I was thinking about, it's "couldn't" as in "I don't have the time to go checking file by file, and the variation is not regular enough that grepping will surface the cases straightforwardly".

I’m very much able to understand the result and test for consequences, I wouldn’t think of putting code I don’t understand in production.


> Your comment makes it sound like you're now dependent on AI to refactor again

Not necessarily. It may have refactored the codebase in a way that is more organized and easier to follow.

> how did you guarantee that the change offered by the AI made proper sense and didn't leave out critical patterns that were too complex for you to detect ?

Perhaps extensive testing? Or a prayer.


And once AI is not there anymore, you're back to square one.

It only looks effective if you remove learning from the equation.

It's the wrong tool for the job, that's what it is.


How is it the wrong tool for the job? In this particular case it's excellent: it can help you find proper abstractions you wouldn't otherwise realize were there.

I kind of view this use case as enhanced code linters


If that's how engineers used it, sure, but instead many pretend they're brilliant by committing 10k lines they don't understand.

I quit my last very good job because I became so fed up with this situation. It was bad enough before the CTO started using LLMs. It was ABSURD after.

(This was a YC company that sold quickly after I quit, at a loss, presumably because they didn't know what else to do)


the classification of a tool as right for the job does not depend on how people use it

someone holding a hammer by the head and failing to get the nail in doesn't mean a hammer is a bad tool for nailing


> If that's how engineers used it, sure

No part of what I said suggested the tool wasn't capable of being a useful tool.


> And once AI is not there anymore

Do you expect an incoming collapse of modern society?

That's the only case where LLMs would be "not there anymore." Even if this current hype train dies completely, there will still be businesses providing LLM inference, just far fewer new models. Thinking LLMs would be "not there anymore" is even more delusional than thinking programming as a job would cease to exist because of LLMs.


> It only looks effective if you remove learning from the equation.

It's effective on things that would take months/years to learn, if someone could reasonably learn it on their own at all. I tried vibe coding a Java program as if I was pair programming with an AI, and I encountered some very "Java" issues that I would not have even had the opportunity to get experience in unless I was lucky enough to work on a Fortune 500 Java codebase.

AI doesn't work in a waterfall environment. You have to be able to rapidly iterate, sometimes in a matter of hours, without bias and/or emotional attachment.


> AI doesn't work in a waterfall environment. You have to be able to rapidly iterate, sometimes in a matter of hours, without bias and/or emotional attachment.

What do you mean? There is no difference between waterfall or agile in what you do during a few hours.


Not at all true. You just adopted the wrong model to partner with it. Think of yourself as an old school analyst vs a programmer.

Throw a big-context-window model like Gemini at it to document the architecture, unless good documentation already exists. Then modify that document and use it to drive development of new or modified code.

Many big waterfall projects already have a process for this - use the AI instead of marginally capable offshore developers.
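A rough sketch of that "document the architecture first" step, with a hypothetical call_llm() stand-in for whichever big-context model is used; the paths, prompt and helper are assumptions, not a real API:

    # Sketch: concatenate the codebase, ask a large-context model for an
    # architecture overview, and keep the result as driving documentation.
    from pathlib import Path

    def call_llm(prompt: str) -> str:
        # Hypothetical wrapper around a large-context model; not a real API.
        raise NotImplementedError

    def document_architecture(repo_root: str, out_file: str = "ARCHITECTURE.md") -> None:
        sources = []
        for path in sorted(Path(repo_root).rglob("*.py")):  # adjust glob per language
            sources.append(f"## {path}\n{path.read_text(errors='ignore')}")
        prompt = ("Describe the architecture of this codebase: main modules, "
                  "data flow, and external dependencies.\n\n" + "\n\n".join(sources))
        Path(out_file).write_text(call_llm(prompt))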


> lucky enough to work on a Fortune 500 Java codebase

Are you being ironic?


I just came out from a cave, what is this vibe coding?


Might not be the case for the senior devs on HN, but for most people in this industry, it's copy/pasting a jira ticket into a LLM, which generates some code that seems to work and a ton of useless comments, then pushing it on github without even looking at it once and then moving to the next ticket.


I have a much less glossy term for this.


Sounds terrifying.


A form of coding by proxy, where the developer instructs (in English prose) an LLM software development agent (e.g. cursor IDE, aider) to write the code, with the defining attribute that the developer never reviews the code that was written.


I review my vibe code, even if it’s just skimming it for linter errors. But yeah, the meme is that people are apparently force pushing what ever gets spat out by an LLM without checking it.

Vibe coding is instructing AI to create/modify an application without yourself looking at or understanding the code. You just go by the "vibe" of how the running application behaves.


I have the same observation. I've been able to improve things I just didn't have the energy to do for a while. But if you're gonna be lazy, it will multiply the bad.


Recent example.

Trying to get some third party hardware working with raspi

The hardware provider provides 2 separate code bases with separate documentation but only supports the latest one.

I literally had to force-feed the newer code base into ChatGPT, and then feed in working example code to get it going, otherwise it constantly referenced the wrong methods.

If I just kept going Code / output / repeat it would maybe have stumbled on the answer but it was way off.


This is one of several shortcomings I've encountered in all major LLMs. The LLM has consumed multiple versions of SDKs from the same manufacturer and cannot tell them apart, mixing up APIs, methods, macros, etc. Worse, for more esoteric code with fewer samples, or with more wrong than right answers in the corpus, you always get broken code. I had this issue working on some NXP embedded code.


Human sites are also really bad about mixing content from old and new versions. SO to this day still does not have a version field you can use to filter results or more precisely target questions.


We have MCP servers to solve that (and Context7)


I see those as carefully applied bandaids. But maybe that’s how we need to use AI for now. I mean we’re burning a lot of tokens to undo mistakes in the weights. That can’t be the right solution because it doesn’t scale. IMO.


Yesterday I searched how to fix some windows issue, google AI told me to create a new registry key as a 32 bit value and write a certain string in there.


Google's search LLM I think is not very good.

    > Counterpoint: AI has help me refactor things where I normally couldn’t. Things like extracting some common structure that’s present in a slightly different way in 30 places, where cursor detects it, or suggesting potential for a certain pattern.
I have a feeling this post is going to get a lot of backlash, but I think this is a very good counterpoint. To be clear: I am not here to shill for LLMs nor vibe coding. This is a good example where an "all seeing" LLM can be helpful. Whether or not you choose to accept the recommendation from the LLM isn't my main point. The LLM making you aware is the key value here.

Recently, I was listening to a podcast about realistic real-world uses for an LLM. One of them was a law firm trying to review details of a case to determine a strategy. One of the podcasters recoiled in horror: "An LLM is writing your briefs?" They replied: "No, no. We use it to generate ideas. Then we select the best." It was experts (lawyers, in this case) using an LLM as a tool.


I'll quote one of the wisest people in tech I know of, Kentaro Toyama:

"Technology acts as an amplifier of human intentions"

...so, if someone is just doing a sloppy job, AI-assisted vibe coding will enable them to do it faster.


From bill cosby’s “Himself” live standup recording from the early 1980s: “Cocaine intensifies your personality … but what if you’re an asshole?”


If you couldn't do the task yourself (a very loaded statement which I honestly don't believe), how could you even validate that what the LLM did was correct, didn't miss anything, didn't introduce a nasty corner-case bug, etc.?

In any case, you mention a very rare and specific corner case; a dev can go a decade or two (or a lifetime or two) without ever encountering a similar requirement. If it's meant to be a convincing argument for the almighty LLM, it certainly isn't.


I've found it can help one get the confidence to get started, but there are limits, and there is a point where a lack of domain knowledge and of a desired target (sane) architecture will bite you hard.

You can have AI generate almost anything, but even AI is limited by its understanding of requirements; if you cannot articulate what you want very precisely, it's difficult to get "AI" to help you with that.


Is Cursor _that_ good on a monorepo? My use of AI so far has been the chat interface. I provide a clear description of what I want and manually copy-paste it. Using Copilot, I couldn't buy into their agentic mode nor adding files to the chat session's context. Gemini's large context has been really good at handling large context, but still doesn't help much with refactoring.


Cursor is good at retrieving the appropriate context and baking it into your request, which significantly improves responses and reduces friction. It sometimes pulls generic config or other stuff that I might miss including in a first attempt.


If you can feed it the right context. Though I’m sure it’s more the model than Cursor doing the work. Cursor just makes it easy to apply and review.


What do you think is going to happen when management is pushing on deadlines?


The same thing as always: either a strong tech voice convinces them to invest the required time or corners are cut and we all cry later. But I don’t see how that is made better or worse by AI.


AI impacts the expectations that drive the deadlines and the size of the corners that have to be cut.


It is being accelerated by definition because people can ship more slop, faster


You're more likely to meet the deadline and refactoring it later is easier, obviously depending on various factors...


100% this. I did this many times too. Often I wouldn’t bother with cleanup or refactor before, but now it’s way easier, faster and cheaper to do it.

And it’s better in the long run.


Any actual evidence of it being better? Because publicly available evidence points to the contrary, where it repeatedly fails to complete the most basic of tasks.

https://news.ycombinator.com/item?id=44050152


This is just not being reported yet in mainstream media, i.e., that the emperor is only wearing undergarments (he is not fully naked).

Part of the reason is that the reporters are themselves not in the trenches coding to be skeptical of the claims they hear.


I meant "better for me on the projects in the long run, since I can do refactoring cheaper"


It's better than him.


> English (or all spoken human language) is not precise enough to articulate what you want your code to do

Exactly. And this is why I feel like we are going to go full circle on this. We've seen this cycle in our industry a couple times now:

"Formal languages are hard, wouldn't it be great if we could just talk in English and the computer would understand what we mean?" -> "Natural languages are ambiguous and not precise, wouldn't it be great if we could use a formal langue so that the computer can understand precisely what we mean?"

The eternal hope is that someday, somehow, we will be able to invent a natural language way of communicating something precise like a program, and it's just not going to happen.

Why do I think this? Because we can't even use natural language to communicate unambiguously between intelligent people. Our most earnest attempt at this, the law, is so fraught with ambiguity there's an entire profession dedicated to arguing in the gray area. So what hope do we have controlling machines precisely in this way? Are future developers destined to be equivalent to lawyers, who have to essentially debate the meaning of a program before it's compiled, just to resolve the ambiguities? If that's where this ends up, I will be very sad indeed.


> The eternal hope is that someday, somehow, we will be able to invent a natural language way of communicating something precise like a program, and it's just not going to happen

My take is more nuanced.

First, there is some evidence [1] that human language is neither necessary nor sufficient to enable what we experience as "thinking".

Second, our intuition, thinking etc are communicated via natural languages and imagery which form the basis for topics in the humanities.

Third, from communication via natural language slowly emerge symbolism and formalism, which codify intuitions in a manner that is operational and useful.

As an example, socratic dialog was a precursor to euclidean geometry which operationally codifies our intuitions of space around us in a manner which becomes useful.

However, formalism is stale, as there are always new worlds we experience that cannot be captured by any formalism. The genius of the human brain, not yet captured in LLMs, is being able to create symbolisms of these worlds almost on demand.

ie, if we were to order in terms of expressive power, it would be something like:

1) perception, cognition, thinking, imagination; 2) human language; 3) formal languages and computers codifying worlds experienced via 1) and 2).

Meanwhile, there is a provocative hypothesis [2] which argues that our "thinking" process lies outside computation as we know it.

[1] https://www.nature.com/articles/s41586-024-07522-w [2] https://www.amazon.com/Emperors-New-Mind-Concerning-Computer... [3] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...


> Are future developers destined to be equivalent to lawyers, who have to essentially debate the meaning of a program before it's compiled, just to resolve the ambiguities?

Future developers? You sound like you've never programmed in C++.


> The eternal hope is that someday, somehow, we will be able to invent a natural language way of communicating something precise like a program, and it's just not going to happen

what we operationally mean by "precise" involves formalism. ie, there is an inherent contradiction between precise, and natural languages.


>> The eternal hope is that someday, somehow, we will be able to invent a natural language way of communicating something precise like a program, and it's just not going to happen.

I think the closest might be the constructed language "Ithkuil". Learning it is....difficult, to put it mildly.

https://ithkuil.net/


I hear formal Sanskrit has precise logical semantics


I would rather look into the direction of Lojban.


it doesn't have to be precise enough anymore though because a perfect AI can get all the details from the existing code and understand your intent from your prompt.


Every time someone points out the fundamental limitation of natural languages vs. formal languages, there's another zealot quick to reply with a logic defying concept of a plan.


  "Forty-two!" yelled Loonquawl. "Is that all you've got to show for seven and a half million years' work?"

  "I checked it very thoroughly," said the computer, "and that quite definitely is the answer. I think the problem, to be quite honest with you, is that you've never actually known what the question is."

  "But it was the Great Question! The Ultimate Question of Life, the Universe and Everything!" howled Loonquawl.

  "Yes," said Deep Thought with the air of one who suffers fools gladly, "but what actually is it?"

  A slow, stupefied silence crept over the men as they stared at the computer and then at each other.
  
  "Well, you know, it's just everything... everything..." offered Phouchg weakly.
  
  "Exactly!" said Deep Thought. "So once you do know what the question actually is, you'll know what the answer means."


I agree with this. People use natural language all the time for programs. Talking to a teammate when pair programming, clients asking for features in language to developers. Details are there in the existing code and through a running conversation. This is why I like a lot to code with ChatGPT over a copilot type of setup.


It ends up being a collection of disorganized business problems converted into code, without any kind of structure.

this matches the description of every codebase (except one) I came across in my 30-year career


While I have a similar experience with, hurm, "legacy" codebases, I gotta say, LLMs (in my experience) have made the "legacification" of a codebase way, way faster.

One thing especially is the loss of knowledge about the codebase. While there was always some stackoverflow-coding, when I saw a weird / complicated piece of code, I used to be able to ask the author why it was like that. Now, I sometimes get the answer "idk, it's what chatgpt gave me".


At least LLMs write a huge amount of comments /s


I call this the Peter Principle of Programming:

"Code increases in complication to the first level where it is too complicated to understand. It then hovers around this level of complexity as developers fear to touch it, pecking away here and there to add needed features."

http://h2.jaguarpaw.co.uk/posts/peter-principle/

Seems like I'm not the first one to notice this either:

https://nigeltao.github.io/blog/2021/json-with-commas-commen...


Then you had some very shitty jobs. Good teams put time and effort into architecture before development.

LLMs will produce code with good structure, as long as you provide that architecture beforehand.


Most teams are understaffed and under pressure to deliver. Agile is all the rage so why spend time developing architecture when you can yolo it during the implementation phase?


> Good teams put time and effort in architecture before development.

I can only accept that as true if I also accept the fact that the vast majority of jobs won’t be on a good team


Good teams are few and far between. I guess most of them are made over time.


But that is for different reasons. There are definitely human created codebases that are clean, that no AI could match.


    > except one
Please share more!


You're not wrong that AI is hyped just like crypto and VR were. But it's also true that automation will increasingly impact your job, even for jobs that we have considered to be highly technical like software engineering.

I've noticed this over the last decade where tech people (of which I am one) have considered themselves above the problems of ordinary workers such as just affording to live. I really started to notice this in the lead up to the 2016 election where many privileged people did not recognize or just immediately dismissed the genuine anger and plight of working people.

This dovetails into the myth of meritocracy and the view that not having enough money or lacking basic necessities like food or shelter is a personal, moral failure and not a systemic problem.

Tech people in the 2010s were incredibly privileged. Earnings kept going up. There was seemingly infinite demand for our services. Life was in many ways great. The pandemic was the opportunity for employers to rein in runaway (from their perspective) labor costs.

Permanent layoff culture is nothing more than suppressing wages. The facade of the warm, fuzzy Big Tech employer is long gone. They are defense contractors now. Google, Microsoft or Amazon are indistinguishable from Boeing, Lockheed Martin and Northrop Grumman.

So AI won't immediately replace you. It'll start with 4 engineers plus AI being able to do the job that was previously done by 5. Laying off that one person saves that money directly but also suppresses the wages of the other 4, who won't be asking for raises. They're too afraid of losing their jobs. Then it'll be 3. Then 2.

A lot of people, particularly here on HN, are going to find out just how replaceable they are and how aligning with the interests of the very wealthiest was a huge mistake. You might get paid $500K+ a year but you are still a worker. Your interests align with nurses, teachers, baristas, fast food workers and truck drivers, not the Peter Thiels of the world.


Whoosh noises, flashback bells...

In the future no one will have to code. We'll compile the business case from UML diagrams!


I think engineers are more like doctors / lawyers, who are also both contracted labor, whose wages can be (and have been) suppressed as automations and tactics to suppress wages arrived.

But these groups also don't have strong unions and generally don't have the class consciousness you are talking about, especially as the pay increases.


You and I seem to be on the same page here, but I want to take this opportunity to point out the role the "middle class" plays in preventing class consciousness.

The "middle class" is propaganda we've been fed for decades by our governments, the media and the very wealthy. It's just another way to pit workers against one another, like how the flames of white supremacy were fanned after the slaves were freed so poor whites wouldn't socially align with freed slaves. It's why politics now tends to focus on socially divisive issues rather than economics, like abortion, LGBTQIA+ people, migrants, Islamophobia, etc.

Doctors are workers. Lawyers are workers. Professional athletes are workers.


Except that, at least in the US, even factory workers could arguably be considered middle class.

They could buy/build a decent home in a safe neighborhood, had decent health care, good schools for their kids, disposable income for leisure.

Now, every single fucking productivity gain is going for the finance overlords.


I think you’re missing a pretty key part of class discourse: the bourgeoisie (idea and practice).


There have always been segments of the working class, which have deluded themselves into believing that by co-operating with the capitalists, they could shield themselves from the adverse effects of what is happening to the rest of the working class.

And it's an understandable impulse, but at some point you'd think people would learn instead of being mesmerized by the promise of slightly better treatment by the higher classes in exchange for pushing down the rest of the working class.

Now it's our turn as software engineers to swallow that bitter pill.


>but at some point you'd think people would learn instead of being mesmerized by the promise of slightly better treatment

They have already learned, fortunately the entire 20th century was devoted to this. That is why any worker with even a little bit of brain and ability to learn perceives socialists as the main threat to his well-being.


Well given that you're the first in this subthread to even mention socialism, that lesson of the 20th century would probably ring true, yes. Although I must admit that talking about classes like that probably did help conjure that idea.

Of course, it's not like one needs to be a Marxist or any other sort of socialist to see the whole "employers are screwing their employees" thing, and I doubt that many employees working on Amazon's tech like AWS subscribe to that ideology. In fact, it has become a fairly popular position even outside of traditionally leftist politics.


> fairly popular position even outside of traditionally leftist politics

Only leftist politics will actually be able to address the issues. Of course people will just mindlessly scream "sOcIaLiSm!" at anything and everyone, but it ultimately doesn't matter. We still have to be optimistic, once enough people have the vocabulary to think about their increasingly parlous circumstance, things will change in our direction inevitably.


Well, in all honesty, a lot of code that actually works in production, and delivers goods, services, etc., is an over-patched mess of code snippets. The sad part is that CPUs are so powerful that it works.


Software is a gas that expands to fill its container.


Probably closer to compacting cats into a room.


Yeah, this isn't new, but it certainly seems worse now


This argument ignores the fact that AI makes it so much easier to ship a ton of garbage.

It’s like comparing a machine gun to a matchlock pistol and saying they’re the same thing.


Point is, people have been shipping/shitting tons of shit...ty code into production for a very long time now. It only matters in very large cyber conflicts.


In 20 years in this market I have seen a lot of this. About 6 years back it was the blockchain craze.

My boss wanted me to put blockchain in everything (so he could market this to our clients). I printed a small sign and left it on my desk. Every time someone asked me about blockchain, I would point to the sign: "We don't need blockchain!"


Every 5 years or so, there's some new thing that every uncreative product lead decides they absolutely must have, for no other reason than that it's the "in" thing. Web is hot. We need to put our product on the web. Mobile is hot. We need a mobile app. 3D is hot. We need to put 3D in our product. VR is hot. We need to put VR in our product. IoT is hot. We need to put IoT in our product. Blockchain is hot. We need to put blockchain in our product. AI is hot. We need to put AI in our product. It'll keep going on long after we're out of the business.


This is professionalism. Appeasement is dishonest.


If you have to be so precise in your prompts to the point you're almost using a well specified, domain-specific language, you might as well program again because that's effectively the same thing.


One simile I've heard describing the situation where fancy autocomplete can no longer keep track of its patches is that you'll be sloshing back and forth between bugs. I thought it was quite poetic.


Makes me wonder if we'll see more emphasis on loosely coupled architecture as a result of this. Software engineers maintain the structure, and AI codes its chaos at the leaves. Similar to how data engineers commoditized data via the warehouse.


A hype is over promising and under delivering.

AI is extremely new, impressive and is already changing things.

How can that be in any way be similar to crypto and VR?


You can see that in this article referencing Jassy's 4,500 years of effort, which supposedly "sounds crazy but it's true!"

It isn't true.

The tool he's bragging about mostly just changed JDK8 to JDK17 in a build config file and, if you're lucky, tweaked log4j versions. 4,500 years my ass. It was more regex than AI.


This was not even imaginable at all just a few years ago.

It doesn't matter that it isn't working perfectly yet. It's only been a few years since GPT-3 came out.

Having a reasonable chat with a computer was also not possible at all. AI right now already feels like another person.

You bought gasoline for the first cars in a pharmacy.


Sed was released in 1973. Changing a number in a config file with regex has been easily possible for over fifty years
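For instance, the JDK-version bump mentioned above is a one-line substitution. A minimal sketch in Python (the file name and property key are made up for illustration):

    # Bump a version number in a build config with a plain regex -- the kind
    # of edit sed has handled since the 1970s. File name and key are hypothetical.
    import re
    from pathlib import Path

    config = Path("gradle.properties")
    text = config.read_text()
    config.write_text(re.sub(r"javaVersion\s*=\s*8", "javaVersion=17", text))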


Agreed. I can't think of another area where AI could amplify the effect of my existing level of knowledge as it does with coding. It's far exceeding my expectations.


So, the internet was great until Facebook showed up.

The Facebook of AI is coming, and it's going to be much, much worse. And it'll still be changing things, as you describe.


Yes, it's new and impressive and changing things, but nevertheless it's still underdelivering because the over-promising is out of control. They are selling these things as Ph.D. level researchers that can ace the SAT, pass the MCAT, pass the Bar, yet it still has trouble counting the R's in "strawberry".


The strawberry thing is solved.

Nonetheless it's different things.

Just because they struggle with that, are all the other things wrong? Underwhelming?

I don't think so.


It's not solved, I had tried it across the latest versions of several different AIs before I posted, because I anticipated a reply like yours.

> Just because they struggle with that makes all the other things wrong?

No, it makes their grandiose claims very tenuous. But for me, yes I'm very underwhelmed by what AI is capable of. I think it's a useful tool, like a search box, but that's it at this point. That they are branding it as a Ph.D. researcher is just blowing smoke.


I tried it too, and I've known about this test for a while.

Not only did Claude respond correctly, I also wrote it in German.

And I often switch between English and German for single words, just because AI is already that good.

This wasn't even imaginable a few years back. We had 0 technology for that.

I created single-page HTML/JavaScript pages for small prototypes within 10 minutes with very little reprompting.

I can literally generate an image of whatever I want with just text or voice.

I can have a discussion with my smartphone with voice.

Jules detected my programming language and how to build and run my project after I wrote "generate a GitHub action for my project and add a basic test to it".

I don't get how people can't be amazed by these results.

What do you normally do when you try it out?


> I tried it too, and I've known about this test for a while. Not only did Claude respond correctly, I also wrote it in German.

You're missing the point though. If it worked for you when you tried it, that's great, but we know these tools are stochastic, so that's not enough to call it "solved". That I tried it and it didn't work tells me it's not. And that's actually worse than it not working at all, because it leads to the false confidence you are expressing.

The strawberry example highlights that, like all abstractions, AI is a leaky one. It's not the oracle they're selling it as; it's just another interface you have to know the internal details of to properly use it beyond the basics. Which means that ultimately the only people who are really going to be able to wield AI are those who understand what I mean when I say "leaky abstraction". And those people can already program anyway, so what are we even solving by going full natural language?

> I created single page html Javascript pages for small prototypes withhin 10 minutes with very little repromting.

You can achieve similar results with better language design and tooling. Really this is more of an indictment of Javascript as a language. Why couldn't you write the small prototype in 10 minutes in Javascript?

> I can literally generate an image of whatever I want with just text or voice.

Don't get me wrong, I enjoy doing this all the time. But AI images are kind of a different breed because art doesn't ever have to be "right". But still, just like generative text, AI images are impressive only to a point where details start to matter. Like hands. Or character consistency. Multiple characters in a scene. This is still a problem in latest models of AI not doing what you tell it to do.

> I can have a discussion with my smartphone with voice.

Yes, talking to your phone is a neat trick, but is it a discussion? Interacting with AI often feels more like an improv session, where it just makes things up and rolls with vibes. Every time (and yes, I mean every time) I ask it about topics I'm an expert in, it gets things wrong. Often subtly, but importantly, still wrong. It feels less like a discussion and more like I'm slowly being gaslit. This is not impressive; it's frustrating.

I think you get my point. Yes, AI can be impressive at first, but when you start using it more and more you start to see all the cracks. And it would be fine if those cracks were being patched, or even if they just acknowledged them and the limitations of LLMs. But they won't do that; instead what they are doing is pretending they don't exist, pretending these LLMs are actually good, and pushing AI into literally everything and anything. They've been promising exponential improvements for years now, and still the same problems from day 1 persist.


I believe we are seeing the new car, the new Internet, etc.

And we now live at such a fast pace that it feels like AI should be perfect already, and of course it's not.

I also see that an expert can leverage AI a lot better, because you still need to know enough to make good things without it.

But AI is progressing very fast and has already achieved things for which we previously had no answer at all.

What it can already do is well aligned with what I assume/expect from the current hype.

Llama made our OCR 20% better just by using it. I prompted plenty of code snippets and stuff, which saved me time and was fun. That includes Python scripts for a small ML pipeline, and I normally don't write Python.

It's the first tech demo ever which people around me just 'got' after I showed it to them.

It's the first chatbot I have seen which doesn't flake out after my second question.

ChatGPT pushed billions into new compute. Blackwell is the first chip to hit the lower estimate for brain compute performance.

It changed the research field of computer linguistics.

I believe it's fundamental to keep a very close eye on it and to try things out regularly, otherwise it will suddenly roll over us.

I really enjoy using Claude.

Edit: and ELI5 on research papers. That's so, so good.


You're reinforcing the disparity that I'm pointing out in my original reply. My expectations for AI are calibrated by the people selling it. You say "AI progress is very fast" but again, I'm not seeing it. I'm seeing the same things I saw years ago when ChatGPT first came on the scene. If you go back to the hype of those days, they were saying "Things will change so rapidly now that AI is here, we will start seeing exponential gains in what we can accomplish."

They would point to the increasing model sizes and the ability to solve various benchmarks as proof of that exponential rise, but things have really tapered off since then in terms of how rapidly things are changing. I was promised a Ph.D.-level researcher. A 20% better result on your OCR is not that. That's not to say it isn't a good thing and an improvement, but it's not what they are selling.

> Blackwell is the first chip to hit the lower estimation for brain compute performance.

What does that even mean? That's just more hype and marketing.


Nope not marketing. I researched this topic myself. I'm not aware that Nvidia mentioned this.

But hey, it seems I can't convince you to share my enthusiasm regarding AI. That's fine; I'll still play around with it often and look forward to its progress.

Regarding your researcher: NotebookLM is great and you might need to invest a few hundred bucks to really try out more.

We will see where it's going anyway.


Your enthusiasm for AI is just more hype noise as far as I'm concerned. You say you've "done research" but then don't link it for anyone to look into, adding to the hype. Brain compute performance is not a well-understood topic so saying some product approaches it is pure marketing hype. It's akin to saying the brain is a neural net.

If I could afford it, I'd quite happily employ a Ph.D.-level researcher with an inability to count the R's in strawberry. The real problem is in accurately characterising where these models fail in relatively subtle, task-relevant ways. Hopefully people work that out before they are universally deployed. The overpromising is definitely a problem.


> A hype is over promising and under delivering.

Indeed, which is exactly what's happening with AI.


I feel you and would personally pay a bit more for LLMs trained on 'senior code'.


Yeah, I feel like a lot of the code they spit out is tutorial-level code you find on beginner websites.


Not directly related, but an anecdote: well before AI, I was talking to a Portfolio Solutions Manager or something from JP Morgan. He was an MD at the firm and very full of himself. He told me, "You guys, your job is....you just Google search your problem and copy paste a solution, right?". What I found hilarious is that he also told me, "The quants, I hate that they keep their C++ code secret. I opened up the executable in Notepad to read it and it was just gibberish". Lesson: people with grave incompetence at programming feel completely competent to judge what programming is and should be.


My own tangential gripe (a bit related to yours though): the factory work began when Agile crept into the workplace. Additionally, lint, unit tests, code reviews... all this crap just piled on making programming worse still.

It stopped being fun to code around that point. Too many i's to dot to make management happy.


If you give up on unit tests and code review then the code is "yours" instead of "ours" and your coworkers will not want to collaborate on it with you.

However, this has to be substantive code review by technical peers who actually care.

Unit tests also need to be valued as integral to the implementation task. The author writes the unit tests. It helps to guide the thought process. You should not offload unit tests to an intern as "scutwork".

If your code is sloppy, a stylistic mess, and unreviewed, then I am going to put it behind an interface as best I can, refer to it as "legacy", rely on you for bugfixes (I'm not touching that stinking pile), and will probably try to rally people behind a replacement.


In my experience that did not happen. I've been lucky perhaps to always work with engineers I trusted.

And frankly, giving ownership to code ("it's yours") has, also in my experience, been an excellent way to give an engineer "pride of ownership". No one wants to have that "stinking pile".


Agile yes, it's micromanagement at scale. But writing tests and doing code reviews is good practice.


We used to do a kind of unit tests in place. (We called them param checking.)
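
(A rough sketch of what that looks like, with a hypothetical function: the checks live in the code itself rather than in a separate test suite.)

    def resize_image(width, height):
        # "Param checking": validate inputs in place instead of in a test file.
        assert width > 0 and height > 0, "dimensions must be positive"
        assert width * height <= 10_000_000, "image too large"
        # ... actual resizing logic would go here ...
        return (width, height)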

In my experience unit tests have been simply a questionable yardstick management uses to feel at ease shipping code.

"98% code coverage with unit tests? Sounds good. It must be 98% bug-free — ship it."

Not that anyone ever exactly said that but that's essentially what is going on.

Code reviews seem to bring out the code-nazi types. Code reviews then break any goodwill between team members.

I preferred when I would go to a co-workers office and talk through an issue, come up with a plan to solve the problem. We trusted the other to execute.

When code reviews became a requirement it seemed to suck the joy out of being a team.

Too frequently a code review would turn into "here's how I would have implemented it therefore you're wrong, rewrite it this way, code does not pass."

Was that a shitty code reviewer? Maybe. But that's just human nature — the kind of behaviors that code "gate keeping" invites.


Everywhere I’ve worked I’ve found that the entire organization pays lip service to unit tests and code reviews, but then sets timelines so short and workloads so high that real tests and reviews become genuinely impossible.


Not in my experience. Maybe I've been lucky. Try working for bigger, established firms for better chances.


Microsoft and Facebook not large and established enough?


Yep, add Apple. (Although it varies across teams so I can only speak for my own experiences.)

Once upon a time (my career there was 26 years) code reviews, unit tests were alien. I enjoyed my job much more then.


Can echo that - it's pure lip service - the deadlines are arbitrary and change often, and nobody cares about quality beyond "can it appear to work plausibly at a demo". There are notable exceptions - nobody wants to be the next Knight Capital but a lot of work is not seen to be that critical so off to the next ticket it is.


TDD, while a good idea, is never used because it is highly unnatural when the stakes are low, which they are most of the time. And when the stakes are high enough - you'd want to hire a real QA engineer to write all the tests anyway, including the unit tests.


Code review is another sacred process that seems too good not to have, but many teams use it as a "we care about quality" stamp when in fact they do not. It gets used just for nitpicking code style (important, but not the whole reason to have CR, and there are tools for that), issuing comments like "LGTM", and approving whatever arrives at the pull request anyway.


I've not yet seen code review implemented in a good way in places I have worked. It's not really considered "real work" (it may result in zero lines of code change), and it takes time to properly read through code and figure out where the weaknesses might be. I just end up being forced to skim read for anything obvious and merge, because there is not enough time to review the code properly.


As a manager, code review has two benefits that typically matter to me: (a) cost: it's cheaper to fix a defect that hasn't shipped (reading tests for missing cases is a useful review, in my experience); (b) bus-factor: make sure someone else has a passing familiarity with the code. And some ancillary (and somewhat performative benefits) like compliance: your iso-27001, soc-2 change control processes likely require a review.

It's hard, though, to keep code reviews from turning into style and architecture reviews. Code reviewing for style is subjective. (And if someone on the team regularly produces very poor quality code, code review isn't the vehicle for fixing that.) Code reviewing for architecture is expensive; settle on a design before producing production-ready code.

My $0.02 from the other side of the manager/programmer fence.


Out of interest, how are you using code reviews to be ISO-27001 compliant?

ISO-27001's change management process requires that [you have and a execute a change management policy that requires that] changes are conducted as planned, that changes are evaluated for impact, and are authorized. In my experience, auditors will accept peer-review as a component of your change management procedure as a meaningful contributor to meeting these requirements.

"All changes are reviewed by a subject matter expert who verifies that the change meets the planned activity as described in the associated issue/ticket. Changes are not deployed to production environments until authorized by a subject matter expert after review. An independent reviewer evaluates changes for production impact before the change is deployed..."

If you are doing code review already, might as well leverage it here.


Code review where I've worked seems in practice to be either rubber stamping or back scratching. Never once have I felt the need for it. If people are unsure about a change, they usually just ask.


If teams care about each other’s code, they ought to collaborate on its design and implementation from the start. I’ve come to see code reviews (as a gate at the end of some cycle) as an abdication of responsibility and the worst possible way to achieve alignment and high quality. Any team that gets to the end of a feature without confidence that it can be immediately rolled out to create value for users has a fundamentally flawed process.


> they ought to collaborate on its design and implementation from the start

That's exactly right. After said process, it comes down to trusting your coworkers to execute capably. And if you don't think a coworker is capable, say so (or if they're junior, more prudently hand them the simpler tasks — perhaps code review behind their back and let it go if the code is "valid" — even if it is not the Best Way™ in your opinion.)


Only if there aren't QA boards with quality KPIs to fulfill. Many code reviews are time wasted in ceremonies that serve reviewers' egos about the one true way to deliver software.

I usually give up, stop arguing about why my approach is actually better than what the gatekeepers suggest, and redo my code; less time wasted.


What's wrong with writing tests? I sleep well at night when we push to production because of our robust test suite.


Writing good tests is an art. It's hard. It takes a deep understanding of _how_ the system is implemented, what should be tested and what should be left alone.

Coverage results don't mean much. Takes some experience to know how easy it is to introduce a major bug with 100% test coverage.
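
(A minimal, hypothetical illustration of that point: the test below executes every line of the function, so line coverage reports 100%, yet an obvious sign bug slips through.)

    def apply_discount(price, percent):
        # Bug: the discount is added instead of subtracted.
        return price + price * (percent / 100)

    def test_apply_discount():
        # Runs every line above (100% coverage) but only checks the
        # zero-discount case, so the sign error is never caught.
        assert apply_discount(100, 0) == 100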

Tests are supposed to tell you if a piece of code works as it should. But I have found no good way of judging how well a test suite actually works. You somehow need tests for tests and to version the test suite.

An overemphasis on testing also makes the code very brittle and a pain to work with. Simple refactorings and text changes require dozens of tests to be fixed. Library changes break things in weird ways.

Unless I know the system being tested, I take no interest in tests.

There are clever, hacky ways to test systems that will never pass the "100% coverage" requirement and are a joy to work with. But they're the exception.


The point about coverage results is an important one to understand. Something that I like to say when discussing this with other folks is that while high code coverage does not tell you that you have a good test suite, low code coverage does tell you that you have a poor one. It's one metric amongst many that should be used to measure your code quality, it's not the end-all-be-all.


Code coverage is a bad metric either way. As soon as it gets mentioned anywhere, an MBA manager wants it as close to 100 as possible and Goodhart's law kicks in.

It's synonymous with LOC. Don't bring it up anywhere.


There are techniques to keep test quality high.

However, usually no one really cares about testing at all. Also many projects are internal, not critical, etc.

Move fast, break things, deliver crappy software.


Tests that you write in order to contribute to a robust test suite are good.

Tests that are written to comply with a policy that requires that all components must have a unit test, and that test must be green, could be good. Often, they are just more bullshit from the bullshit factory that is piling up and drowning the product, the workers, the management, and anyone else who comes too close.

I feel that it’s still correct to call both of these things tests, because in isolation, they do the same thing. It’s the structure they’re embedded in that is different.


Because builds are gated by test coverage, people write tests for coverage and not for functionality. I'd say a good portion of the inherited tests I've run into wouldn't catch anything meaningfully breaking in the function being tested.


Your issue is with targeting a metric then (coverage), not the unit tests. Good unit tests can be so useful. I've got a project currently that can't be run locally because of some dependencies, and coding against unit tests means I get to iterate at a reasonable speed without needing to run all code remotely.
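
(A rough sketch of that workflow, assuming a hypothetical remote billing dependency that can't run locally: stub it out with unittest.mock so the surrounding logic can be iterated on entirely offline.)

    from unittest.mock import Mock

    def charge_customer(billing_client, customer_id, amount):
        # The logic under test; only the remote call gets faked below.
        if amount <= 0:
            raise ValueError("amount must be positive")
        return billing_client.charge(customer_id, amount)

    def test_charge_customer_without_the_remote_service():
        billing = Mock()
        billing.charge.return_value = "receipt-123"
        assert charge_customer(billing, "cust-1", 50) == "receipt-123"
        billing.charge.assert_called_once_with("cust-1", 50)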


I spent 3 years getting a Ruby codebase to 100% branch coverage, running locally in a few minutes (I wasn't just looking at coverage, I was also profiling for slow tests). Found a few bugs, of course, from having to read through the code so carefully. The value was having a layer of defence while refactoring: if some unrelated test failed, it implied you'd missed the impact of your change. It also helped people avoid the issue of making changes to an area of code with no testing; existing tests act as docs (which execute, so they won't go stale as easily) and make it easier to write new tests for new code on top of the existing ones.

This codebase was quick to deploy at Microsoft. We'd roll out every week. Compared to other projects that took months to roll out with a tangled release pipeline.

Anyways, I left for a startup and most of this fast-moving team dissolved, so the Ruby codebase has been cast aside in favor of projects with tangled release pipelines.

https://techcommunity.microsoft.com/blog/adforpostgresql/how...


From my perspective it's not "tests" but this reaction. There's nothing wrong with tests, but there certainly is a cost to them: are you getting a positive ROI? Has the system been perverted to focus on tests vs. tests supporting quality? Are tests used to justify all sorts of unrelated actions or inaction? Now repeat this exercise with 100 or 1000 other perfectly valid concepts that can help destroy the very thing that you are trying to accomplish.


Ha, this sounds like my work. I've developed and evolved a set of Java apps that integrate our core banking system with a few tens of other internal apps.

In a decade and a half we had very few issues, all easy to handle, and each app has its own clustering via Hazelcast, so it's pretty robust with minimal resources. Simply nothing business could point a finger at and complain about. Since it was mostly just me, a pretty low-cost solution that could be bent to literally any requirement pretty quickly.

Come 2025, it's now part of an agile team and agile efforts, everything runs on OpenShift (which adds nothing good but a lot of limitations), and we waste maybe 0.5-1 man-day each week just on various agile meetings which add zero velocity or efficiency. In fact we are much slower (not only due to agile; the technology landscape has become more and more hostile to literally any change, and friction for anything is massive compared to a decade ago, and there is nothing I can do about that).

I understand being risk-averse about new, unknown stuff, but something that has proved its worth over 15 years?

Well, it ain't my money being spent needlessly. I don't care, and I find life fulfillment completely outside of work (the only healthy approach for devs in places like banking megacorps). But smart or effective it ain't.


A lot of people never learned how, and now they just avoid doing it whenever possible.

It's really frustrating; I'm all for some bit of code not needing a test, but it should be because the code doesn't need to be unit tested. Breaking unit testing, not knowing how to fix it, and removing all testing is not a good reason.


Testing in production happens. This is for example the best practice at SpaceX or at Tesla (FSD, Robotaxi, Unboxed designs, etc), and I think these people sleep very well.

Yes, of course, some rockets may explode (almost 10 soon), or some people may have accidents, but that's ok from their perspective.


I never found linting or writing unit tests to be particularly un-fun, but I generally really really value correctness in my code, and both of those things tend to help on that front.


I think unit tests are perfect for you then.

Unfortunately, management often dictates this for all engineers.


It turns out that doing a good job at work is more important than having fun.


You might get some pushback there. But obviously I would choose both. (And believe we had both before management started dictating how we coded.)


In my experience shops that don't have testing guidelines end up with untested code, and in my experience untested code always has bugs.


I used to work in aerospace R&D. The number of times I heard some variant of “it’s just software” to disregard a safety critical concern was mind boggling. My favorite is a high-level person equating it to writing directions on a napkin.


> Lesson: people with grave incompetence at programming feel completely competent to judge what programming is and should be.

As an employee at a company with a similar attitude, I cannot agree more with this.


Hubris and fragile egos run amok

A burning need to dominate in a misguided attempt to fill the gaping void inside

Broken and hurting people spreading their pain as they flail blindly for relief

Our world creaks, cracks splintering out like spider thread

The foundations tremble


Poetic


It doesn't help that most "tech visionaries" or people considered tech bros these days more often come from an accounting or legal background than anything technical. They are widely perceived as authorities but come without the expertise. This is why it's so perplexing for the techies when the industry gets caught up in some ridiculous hype cycle, apparently neglecting the physical realities.


With all the changes coming up, I am happy that I am retiring soon. Since I started in the 90s, SW dev has become more and more tightly controlled and feels more like an assembly line. When I started, you could work for weeks and months without much interruption. You had plenty of time for experimentation and creativity. Now everything is ticket-based and you constantly have to report status and justify what you are doing. I am sure there will always be devs who are doing interesting work, but I feel these opportunities will be fewer and fewer.

In a way it's only fair. Automation has made a lot of jobs obsolete or miserable. Software devs are a big contributor to automation, so we shouldn't be surprised that we are finally managing to automate our own jobs away.


> Since I started in the 90s, SW dev has become more and more tightly controlled and feels more like an assembly line. When I started, you could work for weeks and months without much interruption. You had plenty of time for experimentation and creativity. Now everything is ticket-based and you constantly have to report status and justify what you are doing

Yeah, the consistent "reporting" of "status" on "stand-ups", where you say some filler to get someone incapable of understanding what it is that you're doing off your back for 24 more hours, has consistently been one of the most useless and unpleasant parts of the job.


> you say some filler to get someone incapable of understanding what it is that you're doing off your back for 24 more hours has consistently been one of the most useless and unpleasant parts of the job

This sucks for the 50% or so who are like you, but there's another 50% who won't really get much done otherwise, either because they don't know what to do and aren't self-motivated or capable enough to figure it out (common) or because they're actively cheating you and barely working (less common)


> there's another 50% who won't really get much done otherwise, either because they don't know what to do and aren't self-motivated or capable enough to figure it out (common) or because they're actively cheating you and barely working (less common)

Idk, I barely ever work with people who are like this, and if people become like this, it's usually obvious to everyone that it's happened; they get a talking-to in the office and then get shown the door.


I assume you are in a country with at-will employment. In Germany you have job security; you can't just fire people without a reason. The difficulty is actually proving their incompetence or unwillingness to work. Thus in my experience (working as a self-employed contractor in Germany) this group is far larger than 50%. It is also one of the reasons why no good software comes out of Germany (and this includes SAP, as long as you show me a single end-user who is happy working with SAP software).


The ability to deliver a 30 second summary in a daily standup has very little to do with real productivity.


Yes, I work in the US, but my company has extreme time-wasting practices designed to have engineers explain everything they do and why it's worth it. I never found this helpful, and I find it causes more problems than it solves.


Incompetent workers migrating to the US is a big issue.

Why is the inability to make progress on work tasks not a valid reason for dismissal in Germany? Unless I am misunderstanding something.


You have to prove it is the employee's fault, i.e. that they intentionally did not complete the task. Incompetence or incapability is not the employee's fault, because, well, you hired them and judged their ability - probably based on their education (it is a little more complex than that, of course). As a concrete example, if you have a CS degree from the 80s and a job as a COBOL programmer, and your employer decides to assign you to a new team doing React+JS, you are still formally qualified. You couldn't be fired for incompetence just because a 22-year-old bootcamp graduate delivers 10x the results.


Those people should not have been hired, or should have been let go at 6 months when that became obvious. The real solution to this problem doesn't fit with most management methodology though.


It only makes sense to fire someone who needs to be nudged into getting their job done if you can find a replacement at a similar cost who doesn’t.


1+1 is often less than 1.

The mediocre unmotivated person is dragging down the other, killing their motivation. You'd be better off without them even if you couldn't replace them.


If only there were a way to filter people at hiring, and perhaps some way to choose not to employ them after doing so...


>> Yeah the consistent "reporting" of "status" on "stand-ups" where you say some filler to get someone incapable of understanding what it is that you're doing off your back

In my experience it is human nature to think you are doing something that people around you can't or don't understand. "The graveyard is full of irreplaceable people," as the old saying goes. Sometimes the people you report to are morons, but if you consistently report to a moron, it's time for introspection. There's more that you can do than just suffer that. One place to start is to have some charity for the people you work with.


> graveyard is full of irreplaceable people

I am not special and make no claims of it; I am entirely replaceable and I'd make no claims to the contrary.

This has nothing to do with me or anyone like me, and everything to do with the "adult daycare" style of project managers.

I'm tired of reiterating to non-technical project managers the status of tickets, why things are "blocked", or why the ask isn't feasible given constraints, over and over again. Time is a flat circle.

If they understood the problem scope better, such questions would not arise. I know this from experience.

The majority of them are completely stateless and I'll repeat things daily for weeks on end, explaining the same things over and over again, while they make 0 effort to "unblock" issues.

I've had one good project manager in my career who advocated for his technical staff and understood both the project and the business deeply; he was invaluable and a pleasure to work with.

I've had many many others that served no tangible purpose whatsoever.

My frustration is ostensibly there is a purpose for these jobs beyond employing people with the role of "attending meetings"; I've rarely seen it.


Project Manager here. I wish software were an industry where my job was not needed! Where the software team could just directly tell the boss: "It will be done in 3 months. Trust us, bro!" and they go off to do work without a single status report or a single meeting, and come back exactly 3 months later with a finished, tested and working product, ready for distribution! God that would be so great. I could just chill out, tell my boss and boss's boss to chill out, that the software team's got this, with 100% confidence that it was true. Or, I could go find some more useful career!

But it's never true. Team A depends on Team B, who is busy with work for Team C, and none of these teams are talking to each other because they're too busy writing code. Team D just lost two people and can't make the date that they promised, which sets Teams E and F back a few months unless we can figure it out. Or they're behind because they up and decided to do a big refactoring in the middle of the project without telling anyone. Or people just estimated poorly, like orders-of-magnitude poorly, and the marketing team is ready, the trade shows are scheduled, and the factory is ramping the device that the software should be flashed onto, but the software won't be ready for another three months.

I empathize with engineers since I was once one, and can understand why some of them see us as adversarial. We tend to interact with them in places that Software Engineers hate, like in meetings and standups and via "update" E-mail blasts. Or we're sending them JIRA tickets which they also hate. I do my best to shield my teams from these things that I know they don't like, but sometimes they have to happen.


Your username is apt, because working with you sounds like encountering the Crawling Chaos Himself, an eldritch being of pure condescension and disdain who unknowingly makes everybody else's lives miserable because they fundamentally do not respect anybody else except themself. Best of luck to you.


I tried, but I couldn't get a job as a web dev in the 90s because nobody in my area knew what the web was, I'm not sure anyone in my country did.


I also started in the 1990’s and agree the evolution has been as you describe it. It does highly depend on where you work, but the tightly managed JIRA-driven development seems awfully popular.

But I fall short of declaring the 1990s or 2000s or 2010s the glory days and saying that now things suck. I think part of it is nostalgia bias. I can think of a job I spent 4 years at and list all the good parts of the experience. But I suspect I'm glossing over a lot of mediocre or negative stuff.

At any rate I still like the work today. There are still generally hard challenges that you can overcome, people that depend on you, new technologies to learn about. Generically good stuff.


Thanks for pointing out JIRA. I think the problem comes from needing to keep the codebase running next month while trying to push the numbers up, rather than thinking years ahead about how to improve both the inside culture and the outside image of a company, which are more complex structures, with lots of little metrics and interdependent components, than a win/loss output or an issue tracker that ignores the fact that issues solved != issues prevented.

I guess these strategies boil down to having some MBA on top, or an engineer who has no board of MBAs to bow down to. I strive to stay with privately owned companies for this reason, but of course these are less loud on the internet, so you can easily miss them while job hunting.


Another data point: working in earlier eras sounds bad to me: punchcards, assembly, COBOL, FORTRAN. Yes, I suspect those people had a blast.


Ex-COBOL guy here, and the work was a blast! I was working on the Lawson ERP for a non-profit, mostly customizing the software for their specific use case. I loved it because the tools were crazy, the language limited, and the system itself was high value to the org. Debugging took forever but the fixes were often really small changes. I often had to go into the database (Oracle) and clean up the data by hand. Such fun!

I crave novelty and have a love for bad technology. I was an early Node.js adopter and loved ES4, but newer versions of the language are too easy to use, lol!


Now everything is ticket-based and you constantly have to report status and justify what you are doing.

The weird thing about this is, many developers wanted this. They wanted the theater of Agile, JIRA tickets, points etc.

I'm in the same boat, yet I still need to squeeze out another 10 years or so, but I'm personally working on multiple side projects so I can get out of this boring, mundane shit.


I think what you are describing is the difference between being among the people who engineered the first car and being in the factory trying to make the 100 millionth car as cheaply as possible. Forty years ago people mostly learned to program because they were interested in it. People starting software / tech companies were taking a chance on something most didn't understand. That self-selected for a type. Now it's the most common major in school. I am sure it's possible to recreate what you felt in the '90s, but probably in a different field or a subset of this one.


It’s less about automating the job of the engineer away, and more about replacing management with a panopticon AI.


Knowing Amazon's backends, this is just their technical debt creeping up. Coders do repetitive work there because extending their architecture means a lot of code simply to keep the machine running. I would be surprised if they did not assign ASINs (which come from the need to keep ISBN compatibility everywhere) not only to apps but to other virtual goods, lol.

From my integrations POV, eBay was ahead of its time with its data structure and pushed for deprecation quickly so as not to keep the debt. Amazon, OTOH, only looks more modern by instantly acquiring new market fields and then throwing a lot of money at facading over the mess. Every contact there, like the key account managers, was usually pushed for numbers; this has nothing to do with coders being coders.

Bosses always look for ways to instantly measure coders' output, which is just a short-sighted way of thinking. My coworkers were measured by lines of code, obviously. I wonder how you measure great engineering.

So no, this has not changed; you can still work uninterrupted on stuff for months or years if you want and skip these places, and maybe even prove over your career that your previous designs stay stable for years to come.


From what I've seen, devs thought that COVID = full remote forever. It's no wonder there's more control now. So it started before AI, IMO.


The plague of the two-week sprint.


Is it really that LLM-based tools make developers so much more productive, or rather that organizations have found out they can do with fewer -- and less privileged -- developers? What I don't really see, especially not big-tech-internally, are stories of teams that have become amazingly more productive. For now it feels like we get some minor productivity improvements that probably do not offset the investment and are barely enough to keep the narrative alive.


A lot of it is perception. Writing software was long considered somewhat difficult and that it required smart people to do so. AI changes this perception and coding starts to be perceived as a low level task that anyone can do easily with augmentation from AI tools. I certainly agree that writing software is turning more into a factory job and is less intellectually rewarding now.


When I started working in the field (1996), I was told that I would receive detailed specs from an analyst that I would then "translate" into code. At that time this idea was already out of fashion, things worked this way for the core business team (COBOL on the AS/400) but in my group (internal tools, Delphi mostly) I would get only the most vague requirements.

Eventually everyone was expected to understand a good deal of the code they were working on. The analyst and the coder became the same person.

I'm deeply skeptical that the kind of people that enjoy software development are the same kind of people that enjoy steering and proofing LLM-generated code. Unlike the analyst and the coder, this strikes me as a very different skill set.


> I'm deeply skeptical that the kind of people that enjoy software development are the same kind of people that enjoy steering and proofing LLM-generated code. Unlike the analyst and the coder, this strikes me as a very different skill set.

Indeed. People generally hate foreign/alien code, or rather love their own style too much. It is not hard to recognize this pattern - I've seen it with colleagues, with my students, with some top-notch 10x coders back in the day. So proofing is a skill one perhaps develops by teaching others to do things right, but it is not something most people entertain.

On the other hand, people who lack the time and patience to implement complex stuff may benefit from this process, particularly if they are good code readers, and some seasoned devs become such people. I see little chance they won't be using LLMs to spit code out.

But the two groups largely don't overlap and are as different as astronomers and astronauts.


I worry a bit about people who like writing code but don’t like reading and debugging it. There are enough “throw it over the wall” coders.


Yeah, AI will kill all the mundane bricklaying jobs.

The real software engineering role, with architecture, customer management, the discovery phase, risk analysis and all the other kinds of stuff, not yet.


I have people skills damnit!


I don't mind reading and debugging my own code, or any other code written with a plan by someone with a clue.

Reading and debugging slop code is not the same thing, not even close.


For me it depends on scale. Asking AI for something small and specific is a joy. Asking it to make a big change is a nightmare that I so far only retry whenever a new model comes out.


It has been a factory job for decades.

Not everyone gets to code the next ground breaking algorithm at some R&D department.

Most programming tasks are rather repetitive, and in many countries software developers are hardly looked up to; it is just another blue-collar job.

And in many cultures, if you don't go into management after about five years, it is usually seen as a failure to grow in your career.


Of course it is true. The thing was, 90% of Amazon engineers made far more money at their jobs while essentially doing typical enterprise software work. This money led them to believe it was some creative work. And now those task management and time monitoring tools are catching up to Amazon IT workers, so they are realizing it is just another low-end IT job / factory work.


The pay and benefits at Amazon always seemed to offset the shit work/life balance and on-call rotation. What a gauntlet that was. The only engineers that got recognition were those that fixed high profile bugs, preferably after hours. Shipping a feature was always just "business as usual"


There are lots of jobs with terrible work/life balance AND bad pay, so I don't think that explains it.


> if you don't go into management after about five years, it is usually seen as a failure to grow in your career

I don't see how that's possible. Wouldn't such a norm result in something like a 7:1 ratio of managers to engineers (i.e., assuming a 40ish-year career, the first 5 years are spent as an engineer and the remaining 35 as a manager)? For team managers, I've generally seen around a 1:10 ratio of managers to engineers. So a 7:1 ratio of managers to engineers just doesn't seem plausible, even including non-people-leaders in management.


Have you wondered why Japan, which is a powerhouse of electronics and manufacturing, does not have any large software companies? Software is different from manufacturing.

The mindset, mentality, and culture required to do new software for an ambiguous problem is different from the mentality required to produce boilerplate code or maintain an existing codebase. The latter is pure execution and the former is more like R&D.


"usually it is seen as a failure to grow up on their career."

What does that mean?


It means that the company is more likely to fire that person on the logic that they "failed" to be promoted to management, that they're "treading water" as a developer.


Manager is seen as the next step in an engineer's career instead of a side track.


It's been like this for a while now. Aside from companies like Google and Facebook, most companies are building some CRUD web app where the development consists of gluing together code from multiple third-party services and libraries.

It's these sorts of jobs that will be replaced by AI and a vibe coder, which will cost much less because you don't need as much experience or expertise.


Even before AI, I always had the perception that writing software felt intellectually more on the level of plumbing. AI just feels like having one of those fancy new tools that tradespersons may use.


What you're describing doesn't sound like something that requires a lot of foreign laborers.


It's been like this for decades.


Yeah that’s what really worries me, many people have been clinging to this ability as something that’s really special and AI is really going to disillusion them.


Organizations have long had a preference for "deskilling" into something reliable through bureaucratic procedures, regardless of the side effects, or even if it results in higher costs due to needing three people where one talented person could do it before. Because it is more dependable, even if it is dependably mediocre. Even though this technique may lead to their long-term doom and irrelevance.


Yes and (adjacently):

Seeing Like a State by James Scott

https://en.wikipedia.org/wiki/Seeing_Like_a_State

Explains a lot of the confusing stuff I've experienced, in that eureka sort of way.


I feel like managers are having a heyday over tools like Cursor having a user-by-user breakdown of AI code-generation stats. I feel this is only the beginning, and a whole new world of in-editor workplace monitoring will pop up.


The number of organizations that continue to use tedious languages like Java 8 and Golang...

Like, they hadn't realized they were turning humans into compilers for abstract concepts, yet now they are telling humans to get tf out of the way of AI


Please give some worked examples.

I'm not sure what: "'deskilling' to something reliable through bureaucratic procedures" ... means.

I'm the Managing Director of a small company and I'm pretty sure you are digging at the likes of me (inter alia) - so what am I doing wrong?


Are you familiar with Taylorism?

From the 19th century onwards, businesses have wanted to replace high-skilled craftsmen with low-skilled workers who would simply follow a repeatable process. A famous example is Ford. Ford didn't want an army of craftsmen, who each knew how to build a car. He wanted workers to stay at one station and perform the same single action all day. The knowledge of how to build a car would be in the system itself, the individual workers didn't have to know anything. This way, the workers have limited leverage because they are all replaceable, and the output is all standardized.

You can see this same approach everywhere. McDonalds for instance, or Amazon warehouses, or call centers.


I'm happy to report that for businesses on the scale as mine, we don't work like that.

We give a shit.


I wonder about codebase maintainability over time.

I hypothesize that it takes some period of time for vibe-coding to slowly "bit rot" a complex codebase with abstractions and subtle bugs, slowly making it less robust and more difficult to maintain, and more difficult to add new features/functionality.

So while companies may be seeing what appears to be increases in output _now_, they may be missing the increased drag on features and bugfixes _later_.


Up until now, large software systems required thousands of hours of work and the efforts of bright engineers. We treat established code as something to be preserved because it embeds so much knowledge and took so long to develop. If it rots, then it takes too long to repair or never gets repaired.

Imagine a future where the prompts become the precious artifact. Where we regularly `rm -rf *` the entire code base and regenerate it with the original prompts, perhaps when a better model becomes available. We stop fretting about code structure or hygiene because it won't be maintained by developers. Code is written for readability and auditability. So instead of finding the right abstractions that allow the problem to be elegantly implemented, the focus is on allowing people to read the code to audit that it does what it says it does. No DSLs, just plain readable code.


I can imagine that, but... given your prompt(s?) will need to contain all your business rules, someone will have to write the prompt(s?) in a way that makes it possible for the AI to produce something that works with all the requirements.

Because if you let every stakeholder add their requirements to the prompts, without checking that it doesn't contradict others, you'll end up with a disaster.

So you need someone able to gather all the requirements and translate it in a way that the machine (the AI) can interpret to produce the expected result (a ephemeral codebase).

Which means you now have to carefully maintain your prompts to be certain about the outcome.

But if you still need someone to fix the codebase later in the process, you need people with two sets of skills (prompts and coding) when, with the old model, you only needed coding skills.


I’m concerned that it might not be easy to vibecode a security fix for a complex codebase, especially when the flaw was introduced by vibecoding.


My new favourite genre of schadenfreude are solo-preneur SaaS vibe-coders.

They burn a pile of money. Maybe it's their life savings, their parents' money, or their friends' or some unlucky investors'. But they go in thinking they're going to join the privileged labourers without putting in any of the time to develop the skills and without paying for that labour. GenAI the whole thing. And they post about it on socials like they're special or something.

Then boom. A month later. “Can everyone stop hacking me already, I can’t make this stop. Why is this happening?”

Mostly I feel sorry for the people who get duped into paying for this crap and have their data stolen.

There’s like almost zero liability for messing around like this.


I wonder whether we had the same talk when the C compiler first came out.

People may have worried that the ASM codebase would bit-rot and that no one could understand the compiler output or add new features to the ASM codebase.


My guess is that the discussion trended around performance and not correctness, since compilers are pretty well understood. Why LLMs output what they do is not understood by anyone to the same degree.


Yes, big-tech-internally I also see a lot of desire to get us to come up with some great AI achievements, but so far they are not achieving much more than already existing automations, bots, and code generators can do for us.


Right. What the article is unsurprisingly glossing over (per usual) is that just because AI is perceived (by higher-ups that don’t actually do the work) to speed up coding work doesn't mean it actually does.

And probably, to some extent, all involved (depending on how delusional they are) know that it's simply an excuse to do layoffs (replaced by offshoring) by artificially "raising the bar" to something unrealistic for most people.


For this narrative to make sense you would have to believe that Amazon management cares more about short-term profit than the long-term quality of their work.


The narrative reflects a broader cultural shift, from "we are all in this together" (pandemic) to "our organizations are bloated and people don't work hard enough" (already pre-LLM hype post-pandemic). The observation that less-skilled people can, with the help of LLMs, take the work of traditionally more-skilled people fits this narrative. In the end, it is about demoting some types of knowledge workers from the skilled class to the working class. Apparently, important people believe that this is a long-term sustainable narrative.


The skilled class is the working class and always has been. The delusion that software developers were ever outside the working class because they were paid well is just that - an arrogant delusion.


Engineering is always boom/bust. Ask a retired aerospace engineer who got purged in the 90s.

Technology always automates jobs away. I had a dedicated database systems team 25 years ago that was larger than an infrastructure team managing 1000x more stuff today. Dev teams are bloated in most places, today.


Well, for as long as SEs are substantially better paid than working-class jobs, they are not the working class. For now this applies at least to some regions, not only within the US. I agree in that I have at times sensed some level of arrogance among some people taking up software engineering jobs, but IMO this just confirms the social-class aspect of it. So there may have been some level of delusion to it, but at least temporarily it was, and partially still is, true.


Working class is not defined by income level.

The working class is those who own no significant means of production and thus must sell their labor at whatever price the market bears.

That the market for SE labor is good (for the workers) doesn't mean SEs don't need to work to earn money.


This is an interesting perspective, and I assume your definition is the technically correct one. Still, many SEs receive substantial compensation in RSUs, direct stocks, shares in startups, et cetera. So also from this perspective, there are many non-working class SEs. Another aspect is that culturally, the perception has been that SEs don't necessarily sell their work by the hour, but instead sell knowledge that scales tremendously, in exchange for a comfortable upper middle-to-lower upper class life.


To be technical, and to borrow a bit: the Proletariat[1] are the working class; they work for the Bourgeoisie[2], the people who own the means of production. That's why I asked why you used "demote". Lower, Middle and Upper are strata or ranks within classes. Within the bourgeoisie, you can distinguish:

Petite bourgeoisie: small business owners, shopkeepers

Haute bourgeoisie: industrialists, financiers

Managerial class (in some frameworks): high-paid non owners who control labor

Within the proletariat, you can distinguish:

Lumpenproletariat[3]: unemployed, precarious

Skilled laborers vs unskilled laborers

Labor aristocracy: better-paid, sometimes ideologically closer to capital

https://en.wikipedia.org/wiki/Proletariat [1]

https://en.wikipedia.org/wiki/Bourgeoisie [2]

https://en.wikipedia.org/wiki/Lumpenproletariat [3]


> the Bourgeoisie[2], the people who own the means of productions

> Within the bourgeoisie, you can distinguish: [...] Managerial class [...] non owners who control labor

Contradiction?


Class != Job Title

Managerial Class != Bourgeoisie

This was a loose usage of the term “bourgeoisie”, meant in the sociological rather than economic sense. Sorry.

In late capitalism, the PMC (Professional Managerial Class) occupies a weird liminal space:

Economically they're proletariat

Socially/culturally they're aligned with bourgeois values

Politically they often act in defense of capital (because of career dependency)

Hence: managerial class != bourgeoisie, even if they act like them or aspire to be them.


It's not technically correct, it's correct.

The distinction here is "do you get your money from owning assets or do you get your money from working" because where you get your money is where you get your incentives and the incentives of owning are opposite the incentives of working in many important regards.

The economy is inhabited by people who work for a living but it is controlled by people who own things for a living. That's not a conspiracy theory, it's the definition of capitalism. If you do not own things for a living and do not know people who do, spend some time pondering "the control plane." It should seem like an alien world at first, but it's an alien world with a wildly outsize impact on your life and it behooves you to understand it in broad strokes even if you aren't trying to climb into it.


I wouldn’t say the economy is “controlled” by those people. The economy is just an emergent phenomenon. It’s a natural result of unrestricted freedom of exchange.


> The economy is just an emergent phenomenon. It’s a natural result of unrestricted freedom of exchange.

I have a bridge to sell you if you're interested. Let me know.


Something I've always puzzled over is whether the means of production are our laptops, or our knowledge and expertise. I still work for a wage, but expect to be paid above subsistence. I don't own the laptop that I use at work, but also don't own the carpeting on the floor. Both are commodities.

Is the concept of "intellectual capital" a figment of my imagination, or a flaw of the traditional class identifiers? Or both?


For factory workers, the means of production is the factory.

What’s “the factory” for software? Our equivalent of the factory is the organization we work in, and our connection to the people who turn our software into money.

You can write software at home by yourself, just like you can do machining on your own. But there are a lot of steps between making software, or machining, and turning that output into money. By working for a company, you can have other people handle those steps. The tradeoff is that this structure is something owned by someone else.


The means of production are the software you produce, the servers they run on, and the patents, proprietary data, algorithms, and other intellectual property that are the byproduct of your labor.


This might be the definition in Marxist theory but in normal colloquial language “working class” absolutely does not mean the same thing as “anyone who doesn’t own the means of production”. But I think you know what OP meant and are just derailing the conversation.


> but in normal colloquial language

Therein lies the propaganda.


Where do people who work on Wall Street fall on this spectrum? They don't own the factories.


The means of production for a software engineer is a laptop. Many SEs own them. There are no raw materials or factories needed to produce software, at least not in the sense of traditional production.


That’s not true. The actual means of production are the data centers. It’s true they didn’t use to be hugely expensive either, but now, with AI being the backbone of everything, we have really expensive data centers again.


If we're going to loop this properly, the modern means of production is the stock market's inflated capital. Most of AI floats on cash that does not exist for any purpose except market speculation.


That's capital, not means of production.

The means of producing an AI is a huge data centre for training. Having a lot of money but no chips of any kind wouldn't get you an AI. We had money 10 years ago, but no one made AIs out of it.


You could say the same thing of hands. What really distinguishes capital from labor is not what counts as a tool, but market power. A large number of non-unionized workers are inherently at a disadvantage against a small number of employers with exorbitantly costly infra.


But for the software developer, the tool is also the factory.


Data, infrastructure, IP, etc. You typically don't serve a business from your laptop. Without that stuff your code is worthless.


The way a word is defined by communists and the way it is defined by the rest of the world are seldom the same.


The working class is globally the class of people who must sell their labour. That includes - to a rounding error - all software developers and that is completely uncontroversial.


This also includes doctors, lawyers, academics, managers and executives, none of whom are traditionally thought of as "working class"


That group is, in fact, traditionally considered largely working class (proletarian, more specifically the proletarian intelligentsia, though some in that group might be middle class, again, in the traditional class analysis, petit bourgeois sense.)

American popular usage departs from traditional economic role-based class analysis and instead uses income-based "class" terminology, which, instead of defining the middle class as the petit bourgeois who apply their own labor to their own capital in production (or who otherwise have a relation to the economy that depends on both applying labor and owning the non-financial means of production), defines the middle class as the segment around the median income, almost entirely within the traditional working class.

This is a product of a deliberate effort to redefine terminology to impair working class solidarity, not some kind of accident.


Whose tradition? Not the American working class. Despite the strong labor unions that exist, I think you'd be hard-pressed to find Marxists among them. We talk of middle and upper class precisely because we don't subscribe to the "traditional" framing of bourgeoisie v. proletariat, because running a business is actually work too, even if you own the capital. If you sit around and spend money all day, we just call you an aristocrat.


It doesn't, because a lot of those people do not sell their labor. Doctors sell a practice, or can anyway. As time goes on fewer and fewer do - they're being pushed out of the capitalist class to the working class. Most now work a salary for a large employer, like you or me.


All professions are traditionally working class.


I guess it depends on whose tradition is under discussion. In the contemporary American usage, "working class" means the trades, or factory and service work. Few people would call a physician or lawyer "working class" even though they are paid for their time (and knowledge).


I wonder about your contemporaries. I imagine that most of them have a completely different definition to you, because you and doctors and lawyers are - to a rounding error - working class and everyone but you is aware of it.


As someone who used to be in the actual working class (plastic factory), it's not the same at all. Professional-Managerial Class (PMC) covers the autonomy and good treatment the working class can't have. Plus just talk to them.

https://en.wikipedia.org/wiki/Professional%E2%80%93manageria...


Marxism is the most impactful ideology of the history of the 20th century and its vocabulary permeates all of political and economic analysis. Marxist analysis is not the same as communism.


>Marxism is the most impactful ideology of the history of the 20th century

I agree. Let's hope it will have much less impact on the 21st century.


Are you ignoring the rest of the comment?


Most professional economists IMHO would not agree that Marxism's vocabulary permeates their field.

Core economic concepts are things like elasticity of demand, market equilibrium, externality, market failure, network effect, opportunity cost and comparative advantage, and AFAIK Marx and his followers had essentially no role in explaining or introducing any of those.


I think if you had read Capital, you'd find many of these concepts amply addressed, usually with different terminology.


Please list terms in this different terminology that are equivalents or analogs of the terms I listed, so that I can use Ctrl+F to find them in my PDF of volume one of the book.


Here's deepseek's answer. To Deepseek I add: Market failure is addressed even more in Lenin's "Imperialism: The Highest Stage of Capitalism" which addresses market and financial consolidation in the early 20th century (it's worse now). I would also add that Marx built off of and sometimes critiqued Adam Smith and Ricardo, it's not an entirely different branch of the intellectual tree.

    Elasticity of Demand – Marx does not explicitly discuss elasticity, but he analyzes demand fluctuations in terms of "the social need" (gesellschaftliche Bedürfnis) and "effective demand" under capitalism (Capital, Vol. III). He notes how capitalist production responds to demand shifts, though not in the formalized neoclassical sense.

    Market Equilibrium – Marx critiques the idea of equilibrium (a key concept in classical and neoclassical economics), instead emphasizing "anarchy of production" and "tendential laws" (e.g., the tendency of the rate of profit to fall). He sees markets as inherently unstable due to contradictions in capitalism (Capital, Vol. I & III).

    Externality – While Marx doesn’t use this term, he discusses "social costs of production" (e.g., environmental degradation, worker exploitation) as inherent to capitalism’s drive for profit (Capital, Vol. I). His concept of "metabolic rift" (in Capital, Vol. III and his ecological writings) touches on unintended consequences akin to negative externalities.

    Market Failure – Marx’s entire critique of capitalism can be seen as an analysis of systemic "failures," such as "crises of overproduction", "underconsumption", and "disproportionality" between sectors (Capital, Vol. II & III). He attributes these to contradictions in the capitalist mode of production rather than isolated market inefficiencies.

    Network Effect – Marx does not discuss this directly, but his analysis of "general social labor" (the socially necessary labor time underpinning exchange) and the role of "commodity fetishism" (Capital, Vol. I) implies that value is socially determined in a way that could loosely parallel network effects (e.g., the more a commodity is exchanged, the more its value appears natural).

    Opportunity Cost – Marx does not use this term (rooted in marginalist economics), but his labor theory of value centers on "socially necessary labor time", implying that the cost of producing one good is the labor diverted from other uses (Capital, Vol. I). His concept of "alternative employments of capital" in Capital, Vol. III also touches on trade-offs.

    Comparative Advantage – Marx critiques David Ricardo’s theory of comparative advantage (e.g., in Theories of Surplus Value), arguing that international trade under capitalism exploits unequal exchange and reinforces imperialism. He focuses on "uneven development" and "super-exploitation" rather than mutual gains from trade.


Marx permeates economics like Newton permeates physics.

If this seems like an absurd comparison, I would suggest reading both Philosophiæ Naturalis Principia Mathematica and Das Kapital.


The obvious and actual analog to Newton is Adam Smith.


Marx builds on Adam Smith and Ricardo among others and contributes an understanding of where money comes from and where profits come from among other things.


Yeah, one is right and the other is bs pushed to divide people


Is the wealth of the average software developer ($122 000/y) in the US closer to the wealth of:

A) a coal miner with $60 000/y salary

B) Elon Musk: $381 000 000 000

Sources: - https://www.indeed.com/career/software-engineer/salaries

- https://www.glassdoor.com/Salaries/coal-miner-salary-SRCH_KO...

- https://finance.yahoo.com/news/elon-musk-rich-6-8-170106956....

Is the average number of properties (1-2) owned by a software developer closer to that of:

A) a worker at Walmart

B) Mark Zuckerberg?

> Well, for the time SEs are substantially better paid than working-class jobs, they are not the working class.

That's what they have been telling SEs to prevent us from unionizing :) All so they can put you where you stand now, when they (wrongly) think they don't need you. SE jobs are working class jobs, and have always been.


This feels like the wrong question to ask. Someone with a net worth of a "mere" $2 million is also closer to the coal miner, but at a 3% per year withdrawal rate, has a passive income equal to the coal miner's full time work week without lifting a finger.
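
To make that arithmetic explicit, using the figures upthread:

    $2,000,000 x 0.03 = $60,000 per year

which is almost exactly the coal miner salary quoted above.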

I don't think it makes sense to group the "don't have to go to work anymore" people with the "can buy anything" people, but they don't have a lot in common with the working class, either.

To what extent are SWEs working class? I guess that depends on how many of them still have to go to work. A salary of $350k certainly puts you on the road to never having to work again.


> Is the wealth

You use that word, it does not mean what you think it means when you immediately talk about income.


Let me ask you this: do software engineers in unions enjoy more benefits and compensation than those not in unions?


A teacher outside a union is still as much working class as a teacher in a union. We software engineers are working class because we have to work for our money.

Do you own businesses, land, investments, and other forms of capital that generate wealth independently of direct labor? Enough wealth so you don't have to work for the foreseeable future? Is this the average software engineer for you (in or out of a union)? Because that's the definition of NOT being a part of the working class.


They enjoy more benefits and compensation than if they were not in a union. Most importantly more protections.

You're comparing those in unions, who are more likely to be in the video game industry or government, to FAANGs like Amazon, where you work day and night for a four-year vesting offer that pays out very little until the fourth year, and where the average worker stays less than two years.


>They enjoy more benefits and compensation than if they were not in a union.

Please tell me a union SWE shop that has better benefits and comp than I get?


I don't think that way of defining the working class is very sound. Everyone except ~50 people would be working class, probably including Taylor Swift and Donald Trump.

Also "working class" has a historical, social component, by which programmers are certainly not included.


Normalize it to a logarithmic scale, and the SWE is still quite obviously a wagie. But the gross and unconscionable concentration of power in a small handful of unelected oligarchs is not the relevant distinction here.

When ownership of things can keep you and your family fed, clothed, and sheltered in comfort, you're part of the owning class. If it can't, you're a worker. Maybe a skilled worker, maybe a highly paid worker, maybe a worker that owns a lot of expensive 'tools' or credentials, or licenses, or a company truck, or a trillion worthless diluted startup shares that have an EV of ~$50, but you're still a worker.

If you're the owner of a small owner-operated business, and the business will go kaput because you didn't show up to do work, you're also a worker. The line is drawn at the point where most of your contribution to it is your own (or other people's) capital, not your own two-hands labour.

Now, if you're some middle manager, with no meaningful ownership stake - you are still a worker. You still need to go to work to get your daily bread. It just so happens that your job is imposing the will of the owners on workers underneath you.


Yea for concrete numbers:

If you have somewhere between $5M and $10M in a HCoL American city, you are probably no longer working class insofar as you could quit, get on ACA healthcare, and rent a decent house or buy / mortgage a decent house and live a pretty comfortable life indefinitely. But you're on the very low end of not-working-class and are living a modest life (if you quit and stop drawing a salary).

If you have under that threshold (in a big expensive US city), you are probably still working class.

A lot of software engineers can get to $5M-$10M range in like 10-30 years depending on pay and savings rate. But also a lot of software engineers operate their budgets almost paycheck-to-paycheck, and will never get there.


> A lot of software engineers can get to $5M-$10M range in like 10-30

$5-$10M for 30 years, but only if you save every penny in between? Wow, that's very impressive and totally life-changing! Reminds me of the story how millennials are not able to afford buying a house because of avocado toast!


I don’t see how something like 160-320k income without working is a “modest life”. By any absolute standard you have it better than almost every human that has ever lived.


The caveat is stated above: in a large expensive US city where a lot of these high paying tech jobs are.

Over 50% of that $160k floor is eaten up by just housing and private or ACA insurance.

So your housing costs for like a 1k-2k sqft spot, all in (rent, or if owning then insurance, upkeep, etc.), run you something like $50k+; your health insurance for two people on ACA costs you like $40k/yr assuming kids are out of the picture (more if not); and you have a decent chunk left over to spend on living a decent life, but not egregiously large amounts. You're not flying first class, probably not taking more than 2 big vacations a year, driving a nice but not crazy expensive car, etc.
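
Taking those figures at face value, the "over 50%" claim roughly checks out:

    $50,000 housing + $40,000 ACA premiums = $90,000
    $90,000 / $160,000 floor income = ~56%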

If you elect to leave the big expensive US city, then of course you can do it with substantially lower amounts (especially so long as you can swing ACA subsidies and are willing to risk your "not-working-class" existence on the whims of the govt continuing that program).

Obviously if you live in some place (read: everywhere except the US?) where the floor for medical costs of two people not working but still having income from capital isn't around $40k/yr, then the amount can go wayyyy down.


On the contrary, the definition of "working class" has basically included everyone up to (and potentially including) the petit bourgeois.


> I don't think that way of defining the working class is very sound

Oh, really? Is that why both "white-collar worker" and "blue-collar worker" contain the word "worker"? Working class is everyone who has to work for their money. Can most programmers, on a whim, afford to never work again? An average programmer's salary is 2x the average coal miner's. A CEO is nowadays paid 339 times the salary of their average worker https://www.epi.org/publication/ceo-pay-in-2023/.

Programmers are just one prolonged sickness or medical debt away from being destitute, the same as every other member of the working class. Lawyers, teachers, doctors, programmers, those are all working class, along with agriculture, mining, utilities and all people who have to get up and work for their daily bread and a roof over their head. Sure, there is a discrepancy in pay, but it's not as glaring as it is between a worker and the oligarchs like Trump and Elon Musk. The biggest con in society is that you are so far distanced from the obscene wealth of the rich, that it's not in your face to see how little you have and how much they do.

Both the guy in an old Dodge and the guy in the new Tesla are stuck in traffic, and you fail to realize there are people out there right now flying on a private jet for a cocktail. You think the guy living in an apartment is so much different than a guy living in a house in suburbia? How about the guy whose real-estate company bought the whole development and is now cranking up prices?

You make $200k yearly as a welder? Still working class.

You own a small business with 10 workers working for you? Still working class.

You manage a team of devs in a FAANG and are doing alright for yourself? Still working class.

Your parents donated a wing to Yale and own a hotel chain? Not working class.

Your savings account and stocks generate enough for you that you never have to work again? You are not working class.

This is because, wealth-wise, you are still closer to what an unemployed person on benefits has than to a CEO of a multinational company, and that's a fact.


Weak argument.

The objective level of reproduction of labor force is about $2 per day. Cheaper for warm climates, slightly more expensive for cold ones.

So by that logic there is no working class in the US whatsoever because you don't have to work to survive. At all. Maybe half a year in your entire lifetime.

You just choose to spend all your money on things you don't need to survive; that's the only reason you needed to work. But that doesn't make you working class any more than Elon Musk becomes working class by buying 10 companies like Twitter.

So, using your logic, "You are making more than 50 cents an hour? You're not working class. You don't have to work most of your life to survive yourself or to provide for your children. You're closer to Elon Musk than to workers forced to work for $2 a day to survive."


> The objective level of reproduction of labor force is about $2 per day. Cheaper for warm climates, slightly more expensive for cold ones.

I also like making random numbers up. Here are other numbers. 420. 1337. 1911.

> So by that logic there is no working class in the US whatsoever because you don't have to work to survive. At all. Maybe half a year in your entire lifetime.

I have no words to express how weak this argument is. The US has MOSTLY working class people because fewer and fewer people can survive on their salaries.

https://www.cbsnews.com/news/cost-of-living-income-quality-o...

> So, using your logic, "You are making more than 50 cents an hour? You're not working class. You don't have to work most of your life to survive yourself or to provide for your children. You're closer to Elon Musk than to workers forced to work for $2 a day to survive."

I am not sure if this sentence is a troll, or comes from genuine misunderstanding. I don't know what to advise. I genuinely chuckled. Here are three numbers, elementary math:

7.25

53

1 600 000

Which two of these numbers are closer to one another? 7.25 and 53, right (I hope)? Well, let's look at what those numbers mean:

7.25 - minimum wage https://www.dol.gov/general/topic/wages/minimumwage

53 - average hourly salary of a software engineer: https://www.indeed.com/career/software-engineer/salaries

1 600 000 - average hourly wage of Elon Musk: https://moneyzine.com/personal-finance/how-much-does-elon-mu...

So...who is a software engineer closer in terms of income to? Elon Musk or a minimum wage worker?


Non-founder (i.e. external hire) CEOs and other corporate executives also have to work for their money, therefore they are working class. The definition may be technically correct (the best kind of correct) but it is useless.

("A CEO is nowadays paid 339 the salary of their average worker" you say? If we are nitpicking, that's obviously false; only a tiny, tiny fraction of CEOs are paid that well.)

Aside from that, I'd wager a rather large fraction of HN can easily afford never to work again. This place is crawling with millionaires and they're definitely not embarrassed about it, temporarily or otherwise. Good luck convincing them.


> A CEO is nowadays paid 339 times the salary of their average worker" you say? If we are nitpicking, that's obviously false; only a tiny, tiny fraction of CEOs are paid that well.

We are nitpicking, and you are wrong:

https://therealnews.com/average-ceo-makes-339-times-more-tha...

https://www.epi.org/publication/ceo-pay-in-2023/

https://www.statista.com/statistics/261463/ceo-to-worker-com...

"In 2022, it was estimated that the CEO-to-worker compensation ratio was 344.3 in the United States. This indicates that, on average, CEOs received more than 344 times the annual average salary of production and nonsupervisory workers in the key industry of their firm."

> Aside from that, I'd wager a rather large fraction of HN can easily afford never to work again. This place is crawling with millionaires and they're definitely not embarrassed about it, temporarily or otherwise. Good luck convincing them.

You can wager whatever you want, but statistically you'd be wrong.

https://www.bbc.com/worklife/article/20240404-global-retirem...

https://www.cbsnews.com/news/retirement-crisis-savings-short...


Dude, your own link https://www.epi.org/publication/ceo-pay-in-2023/ says "We focus on the average compensation of CEOs at the 350 largest publicly owned U.S. firms (firms that sell stock on the open market) by revenue."

These 350 largest publicly owned US firms employ a major portion of the US population. Just the top 10 companies by revenue employ 6% of the US work-eligible population. Imagine the total 350 companies!

So, while 350 seems to be a small number, these 350 companies employ the largest chunk of the US workforce, and that's why they are the most representative to do the study with.

And that's my point! If you select a random worker in the US, there is a HUGE chance they are employed by Amazon, Walmart or one of the other giants. And there is a HUGE chance their CEO is paid 339 times their salary.


"One medical debt away from being destitute" is socialists trying to make common cause between the upper and lower class. We have great insurance and big piles of savings, there's nothing in common with people who can barely afford their deductible and lose their job for missing too much work.

The boundary between working class and not working class is not at the 99th percentile where you would have it. The diminishing marginal utility of money means you get 90% of the security of being wealthy at 0.1% of the net worth of a billionaire.


Great insurance coming from where exactly?


> We have great insurance and big piles of savings, there's nothing in common with people who can barely afford their deductible and lose their job for missing too much work.

Bullshit.

https://edition.cnn.com/2024/07/23/business/inflation-cost-o...

https://ssti.org/blog/large-percentage-americans-report-they...

https://www.cnbc.com/2023/01/18/amid-inflation-more-middle-c...

Between 30 and 70% of Americans can't make ends meet. What used to be called the "middle class" is disappearing, making way for only the 1%-ers and us, the rest. The fact that I drive a Tesla and some guy drives a Dodge doesn't mean we are not both stuck in traffic while some schmuck is flying on their private jet to reach their yacht.


I think you replied to the wrong person?

And for all of Mark Zuckerberg's wealth and property holdings, it wasn't enough and didn't stop him from trying to take sacred land from native Hawaiians on the island of Kauai.


Yeah, I don't see any real difference between "Shut up and take it or we'll outsource your job overseas," which they've been saying since the 90s, and "Shut up and take it or we'll replace you with AI." Same threat, same motives.


s/post-pandemic/after ignoring the pandemic/


The pandemic is ongoing.


For anyone downvoting this: look into how many people are currently infected. Compare that to early 2020.

Yes, the people who were prone to dying already did so years ago. But the rate of long term disability in every single country is skyrocketing.

The average person has had 4.7 covid infections by now. Now look into the literal thousands of studies of long term effects of that.

Future generations will never forgive us for throwing them under the bus.


And yet, we go on living. Stopping the pandemic is impossible, so what else can we do?


Why are you using the word "demoting"?


Management has different layers with different goals. A middle manager and a director certainly care a lot about accomplishing short term goals and are ok with tech debt to meet the goals.


Caring is part of it. Having good measures is another. Older measures that worked might need updating to reflect the new, higher spaghetti risk. I expect Amazon to figure it out but I don't see why they necessarily already would have.


So it does make sense?


And less skilled.

I totally get that AI can be a huge boost for shitty code monkeys.


> shitty code monkeys

Please don't put others down like that on HN. It's mean and degrades the community.

https://news.ycombinator.com/newsguidelines.html


I see it more as replacing shitty code monkeys because it leaves the hard parts behind.


But you of course with your superior skills are above that risk?


No. The actual competency of AI won't matter. Lots of corporate executives will prioritize short-term cost savings, with little concern for the degradation for essential systems. They will hop from company to company, personally reaping the benefits while undermining the systems that users and society rely on. That's part of the reason why the current hype is blown way out of proportion by these people. Because who has faced consequences for behaving this way?


[flagged]


Is it my fault that you didn't have the patience to learn that your place in the capitalist system is the same as the rest of the working class's? Now you have to face those consequences, but without the support of your peers, since you have acted as a class traitor.

Your hubris blinds you to the reality of your situation.


Bullshit, you're buying into the hype and looking like a fool.


This is interesting:

> “It’s more fun to write code than to read code,” said Simon Willison, an A.I. fan who is a longtime programmer and blogger, channeling the objections of other programmers. “If you’re told you have to do a code review, it’s never a fun part of the job. When you’re working with these tools, it’s most of the job.”

> This shift from writing to reading code can make engineers feel as if they are bystanders in their own jobs. The Amazon engineers said that managers have encouraged them to use A.I. to help write one-page memos proposing a solution to a software problem and that the artificial intelligence can now generate a rough draft from scattered thoughts.

> They also use A.I. to test the software features they build, a tedious job that nonetheless has forced them to think deeply about their coding.


I was just thinking about this the other day (after spending an extended session talking to an LLM about bugs in its code), and I realized that when I was just starting out, I enjoyed writing code, but now the fun part is actually fixing bugs.

Maybe I'm weird, but chasing down bugs is like solving a puzzle. Writing green-field code is maybe a little bit enjoyable, but especially in areas I know well, it's mostly boring now. I'd rather do just about anything than write another iteration of a web form or connect some javascript widget to some other javascript widget in the framework flavor of the week. To some extent, then, working with LLMs has restored some of the fun of coding because it takes care of the tedious part, and I get to solve the interesting problems.


I'm with you. I love solving puzzles to make something go. In the past that involved writing code, but it's not the code writing that I love, it's the problem solving and building. And I get plenty of that now, in a post-LLM world.


I spend all day fixing bugs and that's why they pay me -- because for most people it's not an enjoyable task. I'm not denying your experience but I will tell you I think you're an outlier. For most people, fixing bugs they didn't create is the worst part of the job.


Part of the fun is also figuring out the “best” way to achieve a thing. LLMs often don’t propose the best way and will happily propose convoluted ones. Clean approaches are still hard to come up with, but LLMs certainly help implement them once thought up.


Yeah, I agree with that. I still enjoy doing the conceptual and design work that once upon a time would have been the role of an "architect", and that's still required with LLMs.


I think for me, the fun comes from preventing bugs; being able to draw on my experience to foresee common classes of bugs, and being able to prevent them through smart code architecture, making it easier for future contributors/readers to avoid walking into those traps. I'm hoping I'll keep being able to do that.


Code reviews are fun when you can trust the competence of the coder.


Maybe we have changed or the industry changed, but in the last 5 years I’ve rarely met anyone who took code reviews seriously. Any feedback, or glaring issues, are swept under “we will address this later” or “we work in an MVP way” (read: deploy barely working code to production without any accountability), or just taken outright as a character attack. I’ve led a team of junior-level engineers who would fight with you via the product owner/scrum master because of the “priorities” and “deadlines”. And then you’d be on the call solving the same issues you’d raised in the pull request earlier. Maybe I’m getting old, but taking accountability or pride in one’s work is only getting worse with vibe coding and AI-generated garbage.


Well, from personal experience, code reviews at the FAANG I worked at were taken quite seriously. (But we were building OS frameworks so it mattered more.)


I find both fun. Writing a web form is still kinda fun even after 20 years. I guess it’s like how some people still play the same video games after years and years while others want something new.


Fixing bugs in my own code is fine. In others' code it's less fun.


Why?


That’s the wrong mentality. You and your team own all the code. A peer's bug is your bug too.


Not OP but I’ll give my perspective. I have no problem fixing someone else’s bugs up to a point, but I also don’t want to be the one cleaning up someone else’s mess who didn’t take time to do proper testing but is getting rated higher than me for clearing more tasks last month because they cut corners


Bugs are awesome. I picked my current job at Google because it involved lots of bug fixing and crash investigation.


I hate fixing bugs but I enjoy big refactoring, AI-assisted or not - sometimes you just gotta do it all


100% agreed.

I solve a problem, let the AI mull on the next bit, solve another problem etc.


Illiterates. Reading code was always very important. Mandatory vibe coding may be hell, but I'm glad this guy is now forced to read.


As far as vibes go, these guys sound much more negative.


Same, I love fixing bugs


> Andy Jassy, the chief executive, wrote that generative A.I. was yielding big returns for companies that use it for “productivity and cost avoidance.” He said working faster was essential because competitors would gain ground if Amazon doesn’t give customers what they want “as quickly as possible” and cited coding as an activity where A.I. would “change the norms.”

Like what exactly is Amazon giving us here? I don't get it. Also, I want to see Andy Jassy start writing some code or fixing issues/bugs for the next 5-10 years and have those reviewed by anonymous engineers before I take any word from him. These sleazy marketing/sales dudes claim garbage about things they do not do or know how to do, but media picks up everything they say. It is like my grandmother, who never went to school, telling me that brain surgery is slow and needs productivity or else more people will die, and that those doctors need to adapt. The shameless behavior of these marketing/sales idiots, as well as the dark side of media, has reached a new extreme in this AI bubble.

Meanwhile, I can see from the comments how a lot of HNers totally agree with everything this salesy guy says as if it were holy bible verse. My colleague was sending me freaked-out texts about how he is planning to switch careers because the Amazon Super Boss is talking about vibe coding now, but he calmed down after I told him these dudes are mostly sales/MBA types who never wrote code or fixed issues, the same way our PO doesn't know the diff between var and const.


Jassy is one of the most anti-employee CEOs I've ever seen. Which is remarkable, considering who he is the successor of.


Why is this surprising? Isn't being anti-employee and being a psycho the most basic requirement for being a CEO of huge companies these days?


Think about who his predecessor was.


You can speed up software development dramatically by simply copying the code that someone else was foolish/idealistic enough to store on GitHub, stripping out the copyright and licensing notices, and adapting it to your needs

The problem with that is that you don't want to get caught doing that directly.

So you need to hire a sleazy offshore firm to launder that for you.

Or (faster and cheaper) use a sleazy "AI" to launder it.

Google had too much class (and comfortable market position) to launch the sleazy "AI" gold rush.

But plenty of upstarts currently see an irresistible opportunity to get rich, by leveraging automated mass copyright violation, while handwaving "AI".


I'm curious what industry you work in. Rarely have I run into a large problem that was tempting to solve with Stackoverflow/Github-driven development. I find the limiting factor is never "we don't have the code" and more "we don't know what code we need".


I certainly agree that there are many important aspects to software development, other than just code.

But I don't want to confuse the point, and accidentally be thinking "plagiarism of code isn't important, because code isn't important" (or the limiting factor).

For one reason, code is important, and valuable...

There's plenty of evidence throughout many types of software development to suggest that a lot of money has to be plowed into simply producing bulk of code -- excluding domain or market understanding, requirements analysis, holistic system design, etc.

A typical growth startup wasn't in a hiring frenzy for great analytical minds, but simply to scale up production of raw bulk of code (and connecting together legally-obtained off-the-shelf pieces). And they were often tracking quantitative metrics on that bulk, and consciously aware of how much all those code-monkey salaries were costing on the balance sheet. They paid it because code bulk was very necessary, the way they were working.

There's also plenty of evidence of the popularity of copy&paste reuse for many years. Including from StackOverflow.

Copying entire projects and subsystems has been much-much less common, partly due to the aforementioned not wanting to get caught doing it. But "AI" laundering is paving the way. See all the blog posts and videos about "I created an app/game/site in an hour with AI".

We can also look at startup schools of thought, where execution is widely regarded to be everything (ideas are a dime a dozen). There is plenty of thinking that churning out code is one of the most time-consuming or frequently-blocking parts of execution.

In both startups and established corporate environments, there are normally big lags between when a need for specific code is identified and when that code can be delivered. Doesn't that look like a limiting factor, or at the very least something important?

I think that's enough reason that we not dismiss the value of code, nor dismiss the importance of plagiarism of code.


> accidentally be thinking "plagiarism of code isn't important, because code isn't important"

Based on my life experience from the last 15 years, you should assume that any "open source" code you leave online is going to be plagiarized heavily. It's unfortunate, but the hard truth for a billion different reasons.


In general, the legal peril of contaminated copyrights made closed-source desktop coding less lucrative. Most if not all current desktop applications from major commercial vendors are at least in part server hosted, or online subscription based.

It is simply a shift from shovel-ware to a service model, and does what DRM did for the music businesses.

Personally, I often release under Apache license as both a gift and a curse... Given it is "free" for everyone including the competition, and the net gain for bandits is negative three times the time cost (AI obfuscation would also cost resources.)

The balance I found was not solving corporate "hard" problems for free, and focusing on features that often coincidentally overlap with other community members. Thus, our goals just happen to align, and I am glad if people can make money recycling the work. =3

Why bandits are not as relevant as people assume:

"The Basic Laws of Human Stupidity" (Carlo M. Cipolla)

https://harmful.cat-v.org/people/basic-laws-of-human-stupidi...


That’s awesome that you were able to make it work. But not everyone has the ability, either logistically or financially (or both), to enforce copyrights on open source code. So we should assume there will be some big bad actors who will absolutely take advantage of that.

Indeed, but the Apache license is quite different from BSD or GPL v3.

In my opinion, fewer restrictions on all users naturally fragment any attempt to lock down generic features in the ecosystem.

Submarine Patent attacks using FOSS code as a delivery mechanism is far more of a problem these days, as users often can't purchase rights even if trying to observe legal requirements. The first-to-file laws also mean published works may be captured even if already in the public domain.

It is a common error to think commercial entities are somehow different from the university legal outreach departments. Yet they often offer the same legal peril, and unfeasible economic barriers to deployment.

Best regards, =3

https://www.youtube.com/watch?v=i8ju_10NkGY


Thanks, it's an interesting perspective. I may have underappreciated the code volume/time requirements to execute especially in a startup setting.

I think I see where you're coming from. I'd imagine in some applications, there is very niche/specific implementation on Github that is both expensive to reimplement and not permissively licensed. I can see the "AI laundering" angle there.


Google had too much class? They've been scraping sites content for years before all this LLM stuff - using it to answer questions directly in search results and sending no clicks

No, I think Google was just slow this time


I think Google has ethics in their self-image. Albeit more flexibility in practice.

But decisions might've come down to what keeps the ad revenue machine fed. Or it might not have made it to decisions.

If they were slow, wasn't it only in being prepared for when the boat got rocked? Until the boat got rocked, they had a monopoly, so why rock it themselves?

I wouldn't write in my promotion packet, "Disrupted our golden cash cow, and put our dominance and entire company at risk, before someone else did, because it would happen eventually, you'll thank me later, pls promo kthx."


> that someone else was foolish/idealistic enough to store on GitHub

That hits hard from my youth (and I'm only in my early 30s). Can we all please actively communicate to people that open source is not a business model, and any large firms promoting it are likely just looking for free work? Like communicate that in an empowering way?

Yeah, it would probably decimate open source contributions in the short term. But honestly, our field would be so much healthier overall in the long term.


I argued this point in the early 2000s when Richard Stallman was seen as a god in the software community and tried to convince everyone that this would be the new business model for software.

The same could be said about copyrights/patents (another point argued over the years). If you get rid of these protections for everyone, the only people that will end up on top are the large companies with lots of resources that can just take your ideas and not compensate you.


>> The same could be said about copyrights/patents (another point argued over the years). If you get rid of these protections for everyone, the only people that will end up on top are the large companies with lots of resources that can just take your ideas and not compensate you.

But isn't that exactly what we have now with the existing patents and copyrights implementation?

Maybe some people are getting some money, but I know that if Amazon (random example company name) violates a copyright or a patent of mine, I don't have the resources to take them to court over it, much less win.


Sure, but if you have a strong copyright infringement case, couldn't you team up with a litigating company and take on Amazon? I am not sure that in any industry a tiny mom and pop shop can take on the largest corporate player without any cooperation from complementary interests...


In my experience, most law firms won’t go near big tech because of their litigious nature. Just the sheer amount of time and filings by big tech, legal or not, is enough to totally overwhelm an inexperienced legal team. It’s like a denial of service attack but through the court system.

The firms that will fight them are few and far between, and are priced accordingly.


But this is exactly why Stallman promoted not just Free Software, but specifically Copyleft software (via the GPL and then the Affero GPL).


Copyleft still removes the most common and available means for small inventors or businesses to get back *something* for their work.

Stallman has honestly done a ton of harm to our field. His “business model” reeks of ivory tower privilege and has ripped a ton of wealth and, more importantly, self-confidence, out of developers worldwide for at least 15-20 years now.

We all need to wake up to this.


>You can speed up software development dramatically by simply copying the code that someone else was foolish/idealistic enough to store on GitHub, stripping out the copyright and licensing notices, and adapting it to your needs

That's not what people are using AI for, though? The typical use case is "write a program that does [insert business problem]". I highly doubt there's a codebase that solves that exact business problem.


Yep.

The release of ChatGPT was hugely irresponsible in many ways, and that rests solely on the shoulders of Sam Altman, someone allegedly implicated in various schemes.


If that is the case, isn't the future more private code? The producers realize that public release is no longer in their favor and will keep it in their groups (until they leak or whatever)?


Ironically, I've heard a few people saying they wish the big companies would just copy their publicly available code on GitHub even without attribution instead of coming up with their own overengineered and inefficient solutions which take a lot more time and effort, and which are then forced upon those former people as users.


Companies will always try to capture the productivity gains from a new tool or technique, and then quickly establish it as the new standard for everyone. This is frustrating and feels Sisyphean: it seems like you simply cannot get ahead.

The game is to learn new tools quickly and learn to use them better than most of your peers, then stay quietly a bit ahead. But know you have to keep doing this forever. Or to work for yourself or in an environment where you get the gains, not the employer. But "work for yourself" probably means direct competition with others who are just as expert as you with AI, so that's no panacea.


This can only go three ways.

The first is that the entire global codebase starts to become an unstable shitpile, and eventually critical infrastructure starts collapsing in a kind of self-inflicted Y2k event. Experienced developers will be rehired at astronomical rates to put everything back together, and then everyone will proceed more cautiously. (Perhaps.)

The second is that AI is just about good enough and things muddle along in a not-great-not-terrible way. Dev status and salaries drop slowly, profits increase, reliability and quality are both down, but not enough to cause serious problems.

The third is that the shitpile singularity is avoided because AI gets much better at coding much more quickly, and rapidly becomes smarter than human devs. It gets good enough to create smart specs with a better-than-human understanding of edge cases, strategy, etc, and also good enough to implement clean code from those specs.

If this happens, development as we know it would end, because the concept of a codebase would become obsolete. The entire Internet would become dynamic and adaptive, with code being generated in real time as requirements and conditions evolve.

I'm sure this will happen eventually, but current LLMs are hilariously short of it.

So for now there's a gap between what CEOs believe is happening - option 3. And what is really happening - option 1.

I think a shitpile singularity is quite likely within a couple of years. But if there's any sane management left it may just about be possible to steer into option 2.


I agree with your three scenarios. But I would assign different probabilities. I think the second option is the most likely: things will get shittier and cheaper. The third option might never come to pass.

Just like clothing and textile work. They are getting cheaper and cheaper, true, but even with centuries of automation, they are still getting shittier in the process.


There are many more scenarios, though. One of them is that AI slop is impressive looking to outsiders, but can't produce anything great on itself, and, after the first wave of increased use based on faith, it just gets tossed in the pile of tools somewhere above UML and Web Services. Something that many people use because "it's the standard" but generally despise because it's crap.


The whole thing with gen AI is so depressing to me.

For the first time now I can feel the joy of what I do slipping away from me. I don't even mind my employer capturing more productivity, but I do mind if all the things I love about the job are done by robots instead.

Maybe I'm in the minority but I love writing code! I love writing tests! If I wanted to ask for this stuff to be done for me, I would be a manager!

Now, I'll need to use gen AI to replace the fun part of the job, or I'll be put out to pasture.

It's not a future I look forward to, even if I'm able to keep up and continue working in the industry.


The fun part for me was coming up with ideas for new things, then architecting those things, creating the high level systems to implement them, and iterating on them to make them better. Figuring out why some test harness wasn't mocking some random thing correctly, remembering which api call had which syntax, or just writing a bajillion almost-boilerplate endpoints was always drudgery and I'm glad to be rid of it.


Another game is to distribute the gains from increased productivity more equally. E.g. in Europe, as late as the early 2000s, working hours were reduced in response to technological development. But since then the response, even from workers, seems to be to demand increasingly shittier bullshit jobs to keep people busy.


The game is to live within your means and max out your retirement fund with index funds. Then you own a sliver of that production.

This will work until the capitalists realize the stock market lets plebs do well and they unlist the best companies.


Bro lol. You were this close - you're channeling Marx (literally saying the same stuff he was) and instead of coming to the obvious conclusion (unions) you're like nah I'm just gonna alienate myself further. It's just amazing how thoroughly people have been brainwashed. I'm 100% sure nothing will ever improve.


> you're channeling Marx (literally saying the same stuff he was)

Marx is the originator of precisely none of those thoughts, you couldn't find an economist that disagrees with them. "Unions" is also not the obvious solution for the problems of an individual. Unless you have a specific, existing union with a contact phone number that you're referring to, one that has a track record of making sure that individuals are not affected negatively by technological progress over the span of their entire careers, you're just lazily talking shit.

If it's the solution, so much easier than keeping ahead of the technology treadmill, and it's so obvious to you, it's strange that you haven't set up the One Big Union yet and fixed all the problems.


> "Unions" is also not the obvious solution for the problems of an individual.

Right, but the observation here is that many, maybe most, individuals in a particular field are having this same problem of labor autonomy and exploitation. So... unions are pretty good for that.

SWE is somewhat unique in that, despite us being the lowest level assembly-line type worker in our field, we get paid somewhat well. Yes, we're code monkeys, but well-paid code monkeys. With a hint of delusions of grandeur.


I'm talking about labor theory of value vis-a-vis this comment

> Companies will always try to capture the productivity gains from a new tool or technique

Ie "capitalists" are not rewarded for deploying capital and mitigating risk but for extracting as much from the labor as possible. And yes Marx is absolutely the "originator" of these ideas and yes absolutely you ask any orthodox economist (and many random armchair economists on here) they will deny it till they're blue in the face. In fact you're doing it now :)

https://en.m.wikipedia.org/wiki/Labor_theory_of_value

Edit: it's the same thing that plagues the rest of American civil society: "voting against your [communal] interests because someone convinced you that you're exceptional". I.e. who needs unions when I'm a 10x innovator/developer? Well, I guess enjoy your LLM overlords then, Mr 10x <shrug>.


Gains from productivity will accrue to those with the most bargaining power. Whether that’s the employee or the employer is going to depend on the exact circumstances (realistically it will be some mix). Hence why factory workers today get paid more than in the 1800s (and factory owners as well!)


> Gains from productivity will accrue to those with the most bargaining power.

That's true. And employers have consistently been the ones with more bargaining power, and that's why our wages haven't kept up with the productivity gains. This is also known as the productivity-pay gap.

We, the working class, are supposed to be paid roughly 50% more than we are paid now, if the gains from productivity were properly distributed. But they are not; they are concentrated to a large extent in the owning class, which is what's unfair and why we, the workers, should unite to get what's rightfully ours.

https://www.epi.org/productivity-pay-gap/


“I'm 100% sure nothing will ever improve.” Nothing? Ever? Brainwashed?


> Nothing? Ever? Brainwashed?

Interesting to see A imagine what B meant, then assert that A believes some metric will always go up because they always saw it go up. It's not clear what B meant, making this response as nonsensical as the comment it replies to. An AI-level exchange.


> It's not clear what they meant

I know reading skills are in short supply in a group of people that only read code but I thought it was pretty obvious what I was alluding to. But even if it weren't (admittedly you have to have actually read Marx for it to jump out at you) by the time you responded there was another comment that very clearly spells it out, complete with citations.


> I know reading skills are in short supply in a group of people that only read code but I thought it was pretty obvious what I was alluding to.

This kind of statement does not make a point, nor is it appealing to engage with. Good luck with whatever.


I love when people in glass houses throw stones;

> It's not clear what they meant, making this response as nonsensical as the response. An AI level exchange.

Does this kind of statement make a point? Is it appealing to engage with?

I saw this on Reddit and it captured this phenomenon beautifully: you're not a victim here, you're just starting a fight and then losing that fight.


Right; the "ancap" mentality in computing could only last for so long. Eventually, and especially with the refusal to incorporate any ethics or humanity into it, it's now an established industry affecting all walks of life, just like every other that has preceded it. The belief that its technological superiority/uniqueness was a good reason to essentially exempt it from regulation has really fucked us. (TV broadcasts for children are required to have "bumper" sections that clearly separate the show from the advertisement; why was computing/the internet treated differently? A high-horse mentality that stemmed from "complexity olympics" - no child could ever use or comprehend a sophisticated machine like this!!) The labor is decentralized at such a scale that I also have a hard time believing anything could be rectified; open source software is mostly just corporate welfare, putting anything at all on the internet has become corporate welfare, and there is no real purpose or goal for building all of this. The computer was supposed to allow us to do less work, right?


> But know you have to keep doing this forever. Or to work for yourself or in an environment where you get the gains, not the employer

Or, you know, being a member of society, you can find other members of society who feel like you, and organize together to place demands on employers that...you know...stop them from exploiting you.

- That's how you got the weekend: https://www.bbc.com/worklife/article/20200117-the-modern-phe...

- And that's how you got the 8-hour working day: https://en.wikipedia.org/wiki/Eight-hour_day_movement

- And that's how you got children off the factories: https://en.wikipedia.org/wiki/Child_labour

But, you know, you can always hustle against your fellow SEs, and try to appease your masters. Where others work the bare minimum of 8 hours, why not work 12, and also on the weekend? It's also fine.

Generating shareholder value is very important for the well-being of society! /s


Not everything is about society.


As a member of society, a lot of things in your life should be about society. And none of those things is shareholder value.

In fact, most people would have a version of this pyramid in order of importance:

1. Personal mental and physical well-being and the same for your loved ones

2. Healthy and functioning society and robust social safety nets, e.g retirement, paid leave, social housing, public transport etc

...

1337. The composition of sand on Mars

...

...

...

...

4206919111337. Shareholder value


Who claimed that?


> Generating shareholder value is very important for the well-being of society!


Should I be worried about the shareholders? While we are at it, how about also removing the few environmental regulations and worker protection laws we still have, just so the poor poor shareholders can buy another yacht? /s

"Stonks go up" is not a proxy for success. Success is when pharma executives don't tremble like the villains they are from hearing the name of Mario's little brother. Success is when normal people get from the social contract at least as much as they put in. If we, the people, get less than from the social contract that we put in, as we nowadays observe, I can guarantee you we will break down the social contract, and the ones having most to lose from that are your precious stakeholders.


By all means “organize together to place demands on your employers”. I didn’t say don’t do that. But there are 24 hours in a day — maybe strive to be good at your job AND organize instead of doing just one or the other?


I'd argue we'll be better at our jobs if we took pride in our craft and were treated with dignity and respect rather than like replaceable cogs in a machine that have to compete with one another to stay "competitive".


It amazes me how immature our field can be. Anyone that has worked for big corporations and in humongous codebases knows how 'generating new code' is a small part of the job.

AI blew up and suddenly I'm seeing seasoned people talking about KLOC like in the 90s.


> 'generating new code' is a small part of the job

I think this attitude has been taken too far to the point that people (especially senior+ engineers) end up spending massive amounts of time debating and aligning things that can just be done in far less time (especially with AI, but even without it). And these big companies need to change that if they want to get their productivity back. From the article:

> One engineer said that building a feature for the website used to take a few weeks; now it must frequently be done within a few days. He said this is possible only by using A.I. to help automate the coding and by cutting down on meetings with colleagues to solicit feedback and explore alternative ideas."


How is the shorter deadline better for the worker? Ultimately, that devolves to a race to the bottom with people choosing between overworking or being laid off. Surely AWS is profitable enough by now, that all those employees could get their work hours reduced, receive a raise, and have both the organization and the product keep existing just fine.


I could argue that a company that did that would, in the long run, be out-competed by a company that did layoffs and pushed its employees for higher productivity by utilizing AI.


> and by cutting down on meetings with colleagues to solicit feedback and explore alternative ideas.

At least something good comes out of this.


What used to take a week now can be done in just 5 days.


Yes, because from my observations these are the people who cannot really write code anymore today - they don't understand the newer paradigms or technologies. So Copilot enabled a lot of people to write POCs and ship them to production fast, without thinking too much about edge cases, HA, etc.

And these people have become advocates in their respective companies, so everyone ends up following inaccurate claims about productivity improvements. These are the same people quoting Google's CEO, who says that 30% of newly generated code at Google is written by AI, with no way to validate or refute it. Just blindly quote a company that has a huge conflict of interest in this field and you'll look smarter than you are.

This is where we're at today. I understand these are great tools, but all I am seeing is madness around them. Anyone who works with these tools on a daily basis knows it. Knows what it means, how they can be misleading, etc.

But hey, everyone says we must use them ...


Not that I disagree, but Anthropic put out some usage statistics for their products a few months ago and IIRC, something like 40% was used for software engineering.


Agreed. I spend maybe 20% of my time writing code. The rest is gathering requirements, design, testing, scheduling. Maybe if that 20% now takes half as long, I might have time to actually write some tests and documentation


> It amazes me how immature our field can be. Anyone that worked for big corporations and in humongous codebases know how 'generating new code' is a small part of the job.

Exactly, but I would go further: anyone who has worked in big corps knows that the other, non-'generating new code' part is usually pretty inefficient, and I would argue AI is going to change that too. So there will be much less of that abstract yapping in endless meetings, or there will be fewer people involved in it.


If that yapping went away, what are mid to senior managers going to do all day?


We don't have meaningful metrics that are objective. KLOC is bad but at least it means something concrete, issues closed or PRs submitted is essentially meaningless. This is a software engineering problem not an AI problem.


KLOC is as objective as your other examples and as meaningless as well IMO.


With "standard" formatting, and considering only legitimate source files, it at least means the same thing from one company or project to another. Sure, it can be gamed by writing bad, verbose code, but in the context of AI generated code, the distribution is squashed quite a bit, so it ends up being more meaningful. Perhaps when we have AI generated issues that will become a better metric as well.


As a software dev I always felt like I was working in a warehouse - sprints, story points, being held to productivity metrics.

People think all devs work at FAANG-like companies, when there are loads of companies where devs are treated like dirt; only now is one FAANG company catching up with that reality.

Meanwhile, business people seem to never be measured in any way.

If requirements were shitty - well, the dev team did a bad job - never that the business made stupid decisions on stupider timelines.


I thought devs were always treated like dirt at Amazon?

I've tried interviewing at that place, and I haven't felt that kind of hostility anywhere else - one guy even wanted me to prove that k-means in 2D is in P. This, I later found out, was the topic of a key paper in this niche of ML theory.


That can't be right - planar k-means is well known to be NP-complete.


Amazon is a well known sweatshop -- instead of producing clothes they produce code.


I think there is a fundamental misconception of the benefit / performance-improvement of LLM-aided programming:

Without sacrificing code quality, it only makes coding more productive _if you already know_ what you're doing.

This means that while it has a big potential for experienced programmers (making them push out more good code), you cannot replace them by an army of code monkeys with LLMs and expect good software.


I keep reading this, but I feel it really ignores the Gaussian: where are your lines? What is good enough for what, where? What is the base level of "already know"? I'm churning out a web app for fun right now with a couple of second-year comp sci students from Sri Lanka + LLMs; they charge me around $1000 a month, and my friend who is an SRE at appl looks at the code every week and said it's quality, modern and scalable. I do think they're a bit slow, but I'm not looking for fast.


You're paying $1000 a month to build a web app for fun?

This seems like a crazy solution to a situation.


...wait till you find out how much my one friend spends on golf a year!!!!! Hobbies are expensive. This will take about 3, maybe 4 months, and I think I'll enjoy playing with it, and so will all my friends and family, so it's worth it I think.


What type of work does he do that allow him to spend on hobbies? (In other words, "What can I do to be wealthy like your friend?".)


Well… many hobbies are not that expensive. But sure, $1k/month for enjoyment is totally reasonable.


> you cannot replace them by an army of code monkeys with LLMs and expect good software.

"Good" software only matters in narrow use cases--look at how much money and how many contracts companies like Deloitte and Accenture make/have.

Sure, you can't "vibe" slop your way to a renderer for a AAA title, but the majority of F500s have no conception of quality and do not care nor know any "better."


An example from today: I have some scientific calculations that run in a Django app.

One specific endpoint has a function that takes 2-3 minutes to execute. When I was hitting the endpoint I was getting a timeout error. I asked Claude to fix it, and it came up with a huge change in the front end (polling) and backend (celery) to support asynchronous operations.

I instead increased the nginx timeout from 1 minute to 5 minutes. Problem solved without introducing 100x the complexity.
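
For reference, the change described here is basically a one-line nginx tweak (a minimal sketch, assuming a standard proxy_pass setup in front of the Django app; the location path, upstream name, and 300s value are illustrative, not taken from the original setup):

    location /calculations/ {
        proxy_pass http://django_app;
        # raise the proxy read timeout from nginx's 60s default to 5 minutes
        proxy_read_timeout 300s;
    }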


Sounds like Claude is doing the right thing, at least for the general problem.

You are just waiting for this to deteriorate; at some point that 5 min timeout won't be long enough, and you'll have to come up with something else again.

And more generally, there are multiple issues with properly handling long-running actions -- you want a good UX for these actions in your app, they may need to be queued or be allowed to be cancelled, etc. I don't know the exact situation, but I assume this is a small app so you don't care much about it. But in any serious application where UX is important and you don't want someone to submit 10000 such requests (or 10000 users submitting at the same time) to blow up your backend, the sane design is to do this asynchronously with some other mechanism to manage the actions.
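
For what it's worth, the asynchronous design being argued for here usually looks roughly like the sketch below (assuming Celery is already wired into the Django project; the task, view, and function names are made up for illustration):

    # tasks.py -- hypothetical Celery task wrapping the slow calculation
    from celery import shared_task

    @shared_task
    def run_calculation(params):
        # placeholder for the existing 2-3 minute scientific function
        return do_expensive_science(params)

    # views.py -- enqueue the work, return immediately, let the client poll
    from celery.result import AsyncResult
    from django.http import JsonResponse

    def start_calculation(request):
        task = run_calculation.delay(dict(request.GET))
        return JsonResponse({"task_id": task.id}, status=202)

    def calculation_status(request, task_id):
        result = AsyncResult(task_id)
        if result.ready():
            return JsonResponse({"state": "done", "result": result.get()})
        return JsonResponse({"state": result.state})

The trade-off is exactly the one debated in this subthread: you gain queueing, cancellation, and a sane UX for long jobs, at the cost of a broker, a worker process, and a polling endpoint.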


> Sounds like Claude is doing the right thing, at least for the general problem.

No, not really. The engineer’s job is to work within the constraints first and, when a big change is really warranted, pursue it.

It’s like having a structural issue that can be fixed with a reinforcement, and an automated system suggests demolishing and rebuilding part of the structure altogether. It’s not that clear cut. You may be right, but I wouldn’t jump to conclusions like this.


You didn't specify how you asked Claude to fix it here, so it's hard to evaluate whether it's giving you what you asked for or just choosing the wrong approach. It's one of the things these models will do - they will narrow in on what you asked them to do - so for example, if you just gave it your code without the deployment and said to fix it at the code level, that would bias it to go in that direction, etc.

Disclaimer: not one of those "you didn't prompt right whenever LLMs fail" people, but just something I've noticed in my use of LLMs - they usually don't have the problem context so can't really tell you "should we even be going down this path"


I am pretty sure that if I hinted at the direction of the solution, the model would come up with the simple proxy config fix. I will try it for fun.

But I was trying to simulate the vibe coding experience where you are completely clueless and rely exclusively on the LLM. It works, but it creates even more work (we call it debt) for the future.

PS I was using Claude within Copilot, so it could definitely see the docker & nginx configuration.


Sounds like you asked Claude for a fix and it gave you a proper fix to your badly designed api endpoint. If it's an operation that's taking that long then yes, implementing it asynchronously is a good idea.

If what you wanted was a simple duct-tape quick fix, I'm sure you could have asked for that and Claude would have recommended what you did: increasing the timeout window, which I guess "fixes" the problem.


So would you revamp your entire stack to accommodate one single endpoint and solve a fictional problem?

What is the risk introduced by a long request that requires you to increase your code complexity by 100x?

Sure, I could also dump Django completely and build from scratch with provision for 10B concurrent users.

My point is that when coding, business context is needed. I was expecting to see from the model what the root cause was and clear statement of the assumptions used when providing the solution. Instead, what I got was "this is a long request, here is the most likely solution for long requests, enjoy your new 1000 loc and dependencies"


I think you’ve nailed it: which fix is better depends on how bulletproof the solution needs to be. If a few people (not paying customers) call the api a few times a week and everyone’s on good WiFi or fiber, it’s probably fine. If you start hearing complaints from your users about failed requests, probably time to switch to polling.

I wonder if this is something you could get Claude to think about by adding some rules about your business context, how it should always prefer the simplest solution no matter what.
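
Something like a short project-instructions file (CLAUDE.md for Claude Code, or whatever file your tool reads) might be enough; wording entirely hypothetical:

  # CLAUDE.md
  - Prefer the smallest change that solves the stated problem.
  - Ask before introducing new services, queues, or dependencies.
  - State your assumptions about traffic, users, and deployment before
    proposing architectural changes.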


Nobody knows if it is a fictional problem. You didn't say what your CPU usage is like, how many users you have, what your UI looks like while your user is waiting. In any serious application, running it asynchronously is the correct solution 99 of 100 times. There is nothing wrong with updating your stack -- you have a business need, and you need to update your architecture, this happens all the time.

"Business context is needed." Then why don't you provide that context? You expect Claude to read your mind? What are we talking about here?

Dude, be humble. If all you want to do is to argue instead of having a productive discussion, this is not the place for you.


I've read previously that certain parts of the software industry were at one point all in on "move fast and break things", but then after a while the quality of their code was so bad that they took a different course. Maybe some people didn't get their fill the first time round. I can see that LLMs are going to be quick, but I can't see how their code bases are going to do anything but turn into a very bad state, probably much more so this time round.


“I instead put in a hacky solution because I’m not a programmer and didn’t understand the solution or why it might be better”


Explain to us non-programmers why a 60" request (the default nginx timeout) is ok, but a 61" request is not.


lol it’s funny you have to ask.

Out of curiosity I asked Claude.

“I have an application, it only has a few users, hosted on nginx. There is 1 endpoint which can take up to 5 minutes to execute. What’s the best way to fix this?”

Response:

“Immediate fix… Increase nginx timeout settings” (and it explains how).

I happen to have a PHP project here hosted on nginx, with Docker locally for running and testing. So I asked Cursor the exact same question. It suggested refactoring to make the call async, or changing nginx. I said change nginx. It updated the config and rebuilt the docker compose setup to make sure the change happened.

It’s always a user issue when it comes to using AI.


I feel my job in the future will be more secure than ever. Tons and tons of AI generated garbage code (trained on more and more existing garbage code) that the „developer“ will at a certain point no longer be able to maintain or fix. Not even speaking about trusting the output. Feels similar to all the outsourced development to cheap suppliers that inevitably collapse or create horrible maintenance overhead.


Do you feel you would be able to “maintain and fix” LLM-generated 100-megabyte source code blobs? And if you could, do you think it would be a job you’d want to do?


I spent 2000-2015ish as a Perl programmer, which meant I spent most of that time working for a small number of e-commerce and SaaS companies who had large codebases that had mostly been written around 2005 with some hair-raising tech practices. Nursing a nasty codebase to health bit by bit can be very rewarding.


As long as you don't have pressure to just ship X, Y, Z while you're doing it, or you're in a team of 20 where any nursing you do is quickly undone by the team and its pressure to ship some half-thought-out feature.


> Perl programmer

Perl programmers are as always ahead of the curve. Writing code that looks like LLM slop before LLMs!


No, writing a replacement. Either part by part or a whole new software system.


Secure, sure. But this sounds like a crappy job.


Not so crappy if the pain is big enough for the customer. Once a certain threshold is reached they are often very open to changes and giving freedom to develop a proper solution.


About 8 or 9 years ago, I talked to a friend who started their first software engineering job. They talked about their job as taking tickets from the JIRA board, completing the change, and putting the tickets back. They were expected to complete 2-3 tickets per day. They didn't enjoy the job, needless to say.

I found this ultra-depressing, and far from what coding was for me - a creative role with great creativity and autonomy. Coding was always solving problems, and never felt like some sort of assembly line. But in a lot of companies, this is how it was constructed, with PMs setting up sprints and points, etc.

Similarly, I spoke to a doctor about how much they loved being able to work remotely in their role - with 2-3 days a week where they just responded to email and saw patients over telehealth. It felt very "ticket" focused and not at all the high-status doctor role I imagined.

I suspect that both those roles will be lost to AI. If your role is taking a ticket that says "the box should be green, not red", and nothing more, that's the sort of thing that AI is very capable of.


> They were expected to complete 2-3 tickets per day.

Based on my experience with sprint teams, breaking things down into just a couple hours of work per ticket implies that someone else is doing an enormous amount of prep work to create a dozen tickets per feature. I agree that your friend is performing the work of a development system. I've heard this called "programming in Developer" as opposed to whatever language the developer is using.


I've had plenty of colleagues who expect the job to be that. Just working on a small ticket and moving it to "to test" when done.

It's incredibly frustrating to try and get anything done in a team like that. The reality of most software jobs I've had is that problem discovery is part of it. The only people who know the code well enough to know the problems it has are the developers.


> They talked about their job as taking tickets from the JIRA board, completing the change, and putting the tickets back.

Where do I sign up?


“OK Andrew, we’re looking for someone for this senior developer and they’ve got to be really really great at copying and pasting between Claude and an IDE.

Now we’re going to set up a whiteboard test here and you can demonstrate to us your best copying and pasting.”

“errrr, do I do any actual coding in the job?”

“Well, yes, inasmuch as anyone does these days. It’s mostly copying and pasting though, but hey that’s what coding IS now, right?”

“OK are you ready for your coding test, here it is: what key is COPY? And what key is PASTE?”


But for about ten years, at least, the majority of Google SWE roles have been talked about as "just moving protobufs", which is effectively the same. If your job is just plumbing other people's designs for existing products, how fun can it be?


I think you do not understand what a Google SWE actually does


It's a running joke internally. But not always far from the truth. Translating business level protobufs into solution level protobufs is indeed the job of entire teams sometimes.


to a certain extent, this is correct:

“Bad programmers worry about the code. Good programmers worry about data structures and their relationships.” — Linus Torvalds


My experience with AI so far is exactly the opposite. It can help me with finding configurations to set up software packages, with syntax of programming languages that I use only occasionally so am not too familiar with, or it can give suggestions for improvement for code that I have written. It's like having a personal assistant.

But the interesting work: thinking about software architecture, about implementation strategies for large projects, or about finding a good debugging strategy for bugs in large code bases, that work is still for me.


In other words, AI is just a replacement for google search and Stack Overflow, automated and served as an IDE plugin...


Also serves as a manual on steroids, where you can find examples of library usage for your problem.


Or a false manual, with wrong or far from optimal examples...


It's such a good manual too, that it will make up entries for whole libraries if they don't exist. So convenient to waste time discovering those cases!


This might be a harsh thing to say, but I don't think the talent at Amazon is top notch because (from everything I've heard anyway) Amazon is such a nightmarishly dystopian place to work that, well, if you have a choice you probably avoid working there? I say that with the recognition that there are always diamonds in the rough, but I'm not sure this really comments on AI usage at a place that treats their employees well.

(That said I'm all for more dystopian stories so we can get past this AI-replacing-coders fad)


Amazon pays really well. Amazon also has a number of interesting projects (e.g. a number of contributors to the rust compiler are employed by Amazon to work on rustc.) It also looks nice enough on a resume to give people a nice stepping stone towards even better opportunities.


> Amazon pays really well

maybe 10 years ago. HR is like a rabid dog, fighting for every dollar.


They, the HR and bean counters of the world, don't see the big picture. It's optimize optimize optimize, but then they don't consider what they're actually trying to achieve.

Typically if you squeeze every penny, you end up with shit. You can do it, but those costs don't just - poof - disappear. They might, and often do, become more difficult to measure. So if you're the measuring guy, it might look great to you.


Humans are fungible, typically -- or at least that's the thought process of management. Sometimes management are literally removing an unnecessary department or need to merge departments because of market conditions -- and they really don't have time to get into the nitty gritty about how that's going to affect your job.

But also humans are messy apparently, so rather than try to introspect about what went wrong to make themselves a better manager, or improve their hiring process, or figure out how to remove bad managers, they punt: "Humans are messy. Metrics are actionable. Humans aren't metrics. shrug."

Humans can be replaced, sure, but you can't replace a surgeon with a landscaper -- but that's not going to stop some managers from trying!


It's hard to provide the best customer experience when I have to sell stock in order to pay rent in the GSA, and have to pay out the ass for prescriptions because our garbage health insurance doesn't cover anything.


I really like a lot about Amazon. Friendly and creative people, good projects with high ambiguity and autonomy. The bad parts are the penny-pinching, careless and unaccountable senior leaders who just want to exploit everyone and everything. But really, aside from things like RTO, they aren’t too impactful on the day to day.


There are probably some spots with interesting engineering problems or projects. But overall I agree, I don't want to work for big tech.

As for development becoming more factory-like, I'm not sure about that: you have the same leverage as a developer, and you can just go work for the other guy.


Will AI also start developing creative tools such as new VST plugins or Photoshop filters? What about low-level, low-latency tools needed for some industries or for aerospace? I guess at some point we won't need so many humans to run our Kubernetes clusters or maintain our WordPress sites, but won't there always be something to do that pushes the boundaries of human needs and desires that can't be done by AI?


> won't there always be something to do that pushes the boundaries of human needs and desires that can't be done by AI?

No, why would there be? Unless you are spiritual, there isn't any reason any of the physical processes that make up human thought can't be done artificially, probably much more efficiently. Society needs to confront the myth that automation is always going to open up more jobs that need human labor. It's comforting for people who hate the idea of UBI or other safety nets to believe that people can keep "retraining". Eventually there's going to be nothing to retrain to (at least nothing of economic value).


I keep meaning to experiment with vibe-coded VSTs. The shell and UI parts should be easy to automate, because they're basically boilerplate. But I doubt AI knows enough about DSP to design an incredible new reverb algorithm, or is an expert on bandwidth-limited oscillator design.

Or maybe it is? When I have some time, I'll find out.


I'm sure it could replicate any existing algorithm or riff on combinations, but a good plugin is made carefully by hand by an artisan who knows what they are looking to achieve, who knows what to listen for. I'm skeptical that a pure AI algorithm would be anything but plagiarism on existing designs.


You might be interested in this video, which is about a physical guitar pedal someone made using a chatgpt-based design. Same idea but all analog.

https://www.youtube.com/watch?v=J-PTzq1bv9M


Fast, good, cheap; pick two. It's the single immutable law in software development and the one thing management has been trying to circumvent for decades. This is just their latest incantation.


Hard to feel bad for any dev that agrees to work at Amazon.

They’ve opted into shitty working conditions for years and played a pretty big part in spreading those conditions to other places.

I have found that former Amazon employees can get jobs at other FAANG companies. But smaller/medium companies that compete on culture rather than pay, don’t tend to want to hire those folks.


> The Amazon engineers said that managers have encouraged them to use A.I. to help write one-page memos proposing a solution to a software problem and that the artificial intelligence can now generate a rough draft from scattered thoughts.

Why generate that memo at all - maybe scattered thoughts would be enough and the AI could help managers understand them. Then again, if the goal is to replace the engineers and not the other way round, then this makes sense.


My hot take is if you’re writing a memo or a spec for internal consumption, using AI is an anti-pattern. The doc isn’t the important product, the act of writing the doc is, because that’s how you clarify your own thinking.

We know AIs are fine for bouncing ideas off of, but they’re not senior-level architects yet.


I am optimistic longer term, pessimistic near term

What needs to happen is the education of "junior programmers" needs to be revamped to embrace generative AI. In the same way we embraced google or stackoverflow. We're at a weird transition state where the juniors are being taught to code with an abacus, while the industry has moved on to different tools. Generative AI feels taboo in education circles instead of embraced.

Now there will eventually be a generation of coders just "born" into AI, etc, and they will do great in this new ecosystem. Eventually education will catch up. But the cohort currently coming up as juniors will feel the most pain.


> What needs to happen is the education of "junior programmers" needs to be revamped to embrace generative AI.

No they don't. They need to actually learn how to use their brains first.


You raise a good point. Back in the early 90s when I started with programming, my sources of knowledge were magazines with code, a "programming with C/C++" book, and a lot of time.

Then the internet came, and it felt like 'cheating'.

Then forums came and it felt like cheating. Then SO, and so on and so forth.

Now AI is eating the [software] world, and to a lot of people, it feels like cheating. I am just amazed at what I can build.

In 10-15 years software will become a commodity, along with books/stories and maybe even music/art. I don't know what it will look like. But darn, I'm excited to be here to experience it.


I was there for those changes too (I taught myself programming a couple years before we got internet), and I disagree that the internet and SO felt like cheating. No matter what resources you had, you could never get very far if you didn’t understand what your code was doing.

That’s no longer true. And that democratizes these skills, which I agree could be a great thing.

But do you agree that it’s important for kids to learn to think critically and systematically? Because it’s super hard to stay motivated to learn those things when LLMs do that for you (and you’re too young to tell when they’re doing a bad job of it).


>Generative AI feels taboo in education

Nah, only the teachers feel this; Gen AI is extremely popular among students. In fact, you would look weird if you didn't use Gen AI for school work.

What we need is teaching them how to use gen AI effectively.


And how to use a forklift in a gym to lift weights.


I'm optimistic long long long term, but long term our industry is screwed.

The current LLM based "AI" isn't good enough, and we're already seeing way too many people unable to code without the assistance of an AI agent. Sure, many of these people couldn't code at all before, or only very poorly, but at least their output was limited. We're producing way too much code (and too much content in general). The heavy leaning into AI at this point is going to set us back 10 - 15 years, for a short term profit. It's the dotcom bubble all over again in that respect. Way too many unskilled people are producing garbage code, and there aren't enough skilled people around to fix it, because the output volume is too high.

> Google recently told employees it would soon hold a companywide hackathon in which one category would be creating A.I. tools that could “enhance their overall daily productivity,” according to an internal announcement. Winning teams will receive $10,000.

If it's really that great, why the competition? Shouldn't this happen pretty organically? Companies are pushing "AI" hard, way too hard; it's not yet at the point where it can realistically deliver what is expected on the business side. I think even Google developers know this, but hey, $10,000 is $10,000.

I'm very concerned that we are eroding trust, safety and quality long term, for a short term profit. It's not that LLMs can't be helpful, save money or improve quality, but you have to be a fairly skilled developer to get those advantages safely.


I agree. I always wondered at what point I would feel old and unable to keep up with technology. After all, I grew up with the Internet. But this might be it.


As a potential solution, do you think formal/semi-formal software development education (undergrad programs, colleges/polytechnics, dev bootcamps, etc) should lean super heavily into AI? To the extent that it's not just "use ChatGPT to help you complete this assignment" but rather "complete this assignment using *only* ChatGPT: you're not allowed to write any of the code by yourself".


CS degree programs have never been about learning to code. They are about learning computers, data structures, machine structures, algorithms. The code was always done on your own time; at least that's how my school did it. I never had a class in "Java" or "Python" or "C" or any other language; that was always incidental to the particular course. I could have used ChatGPT (had it existed) or hired a friend to write the code, but that wouldn't help me on the exams (written on paper, at that time). Dev bootcamps? Yeah, they should probably be leaning hard into AI, as that's just what junior devs are going to be asked to do from here on out.


You should try interviewing a few people fresh out of college. That'll change your opinion incredibly quickly.


With or without LLMs, the culture of Amazon would eventually lead to all of their positions feeling like warehouse work.

Source: I worked there.


I'm wondering whether most other white collar jobs will turn out like this as well. I see lots of articles that have been at least partially written by AI - so there goes most of your average journalist's job. Then there's stuff like marketing and communications, where your output is mostly text and media. Again, something AI can mostly handle based on initial requirements.

Personally I find babysitting AI quite boring. It's easy to just stop caring about the quality of the output when one is just wage slaving, and the process itself is no longer satisfying.


> One Amazon engineer said his team was roughly half the size it had been last year, but it was expected to produce roughly the same amount of code by using A.I.

I am sick of these verbose articles that basically boil down to nothing. What the f does it mean to "produce code"? Like, are we just churning out LoCs daily for the sake of doing so?


As expected, shallow analysis from people who don’t understand our craft and/or are all too happy to feed a hype train.


How is the impact of this assessed? How is system quality / performance changed? Is there an increase in high severity defects as more code gets pushed out more quickly?


Feels like this could be part of a broader shift towards dis-empowering knowledge workers.

The cog in machine effect has always been there in the corporate world, but somehow it feels like the technique has been refined in the last couple of years.


The solution is a no-commercial use enhanced GPL license. Corporations are actively using our open source against us, so we have to fight back.

All these narratives about user freedom, for any purpose etc. are just propaganda these days.


You know, the point of AI in coding seems to be so that you can code in English, instead of a formal language. (And for some reason we're pretending like these formalisms are hard and the people that understand them evil gatekeepers, to which I'd say: nope to both propositions. It has never been easier to learn how to code.)

The thing I don't understand is why anyone thinks this is an improvement. I think anyone that's written code knows that writing code is a lot more fun than reading code, but for some reason we're delegating to the AI the actual enjoyable task and turning ourselves into glorified code reviewers. Except we don't actually review the code, it seems, we just go on with bug ridden monstrosities.

I fail to see why anyone would want this to be the future of code. I don't want my job to be reviewing LLM slop to find hallucinations and security vulnerabilities, only to try to tweak the 20,000 word salad prompt.


> I fail to see why anyone would want this to be the future of code.

IME there's an inverse relationship between how excited the person is about AI coding and their seniority as an engineer.

Juniors (and non-coders) love it because now they can suddenly do things they didn't know how to do before.

Seniors don't love it because it gets in their way, causes coworkers to put up low-quality AI-generated code for peer review, and struggles with any level of complexity or nuance.

My fear is that AI code assistants will inadvertently stop people from progressing from Junior --> Senior since it's easier to put out work without really understanding what you're doing. Although I guess I could have said the same thing about Stack Overflow 10 years ago.


It's a very different thing from SO, in my opinion, because SO involves human discussion. One of the most valuable, educational things about using SO is the comments underneath some answers saying that it's a bad idea, and explaining why. This critical reflection is completely missing from LLM coding. Good suggestion, horrible suggestion – there's no distinction that would be noticed by someone not already experienced in the topic.

I think the key to understanding why people want this is that those people care about results more than the act of coding. The easy example for this is a corporation. If the software does what was said on the product pitch, it doesn’t matter if the developer had fun writing it. All that matters is that it was done in an efficient enough (either by money or time) manner.

A slightly less bleak example is data analysis. When I am analyzing some dataset for work or home, being able to skip over the “rote” parts of the work is invaluable. Examples off the top of my head being: when the data isn’t in quite the right structure, or I want to add a new element to a plot that’s not trivial. It still has to be done with discipline and in a way that you can be confident in the results. I’ll generally lock down each code generation to only doing small subproblems with clearly defined boundaries. That generally helps reduce hallucinations, makes it easier to write tests if applicable and makes it easier to audit the code myself.

All of that said, I want to make clear that I agree that your vision of software engineering becoming LLM code review hell sounds like… well, hell. I’m in no way advocating that the software engineering industry should become that. Just wanted to throw in my two cents.


If you care about the results you have to care about the craft, full stop.


Probably the most unfortunate thing is that the whole AI garbage trend exposes how little people care about the craft, leading to garbage results.

As a comparison point, I've gone through over 12,000 games on Steam. I've seen endless games where large portions are LLM generated. Images, code, models, banner artwork, writing. None of it is worth engaging with, because every single one is a bunch of disjointed pieces shoved together.

Codebases are going to be exactly the same. A bunch of different components and services put together with zero design principle or cohesion in mind.


> (And for some reason we're pretending like these formalisms are hard and the people that understand them evil gatekeepers, to which I'd say: nope to both propositions. It has never been easier to learn how to code.)

I am not a professional programmer, and I tend to bounce between lots of different languages depending on the problem I'm trying to solve.

If I had all of the syntax for a given language memorized, I can imagine how an LLM might not save me that much time. (It would still be helpful for e.g. tracking down bugs, because I can describe a problem and e.g. ask the AI to take a first pass through the codebase and give me an idea of where to look.)

However, I don't have the syntax memorized! Give me some Python code and I can probably read it, but ask me to write some code from scratch, and before LLMs I would have needed to dive into the language documentation or search Stack Overflow. LLMs, and Claude Code in particular, have probably 10x'd what I am capable of, because I can describe the function I want and have the machine figure out the minutiae of syntax. Afterwards, I can read what it produced and either (A) ask it to change something specific or (B) edit the code by hand.

I also do find writing code to be less enjoyable than reading/editing code, for the reason described above.


No one has the language syntax memorized unless they're working with the language daily. Instead we store patterns, and there aren't a lot of them (check out the formal grammar for any programming language). C-like languages overlap a lot; the differences are mostly in syntax minutiae (which we can refresh in an afternoon with a reference) and the higher abstractions (which you learn once, like OOP or pattern matching).

Generally you spend 80% of the time wrangling abstractions, especially in a mature project. The coding part is often a quick mental break where you're just doing translation. Checking the syntax is a quick action that no one minds.


> Instead we store patterns, and there isn't a lot

That's kind of what I mean by "syntax". For example, "how do I find a value that matches X in this specific type of data structure?" AI is very good at this and it's a huge time saver for me. But I can imagine how it might be less helpful if I did this full time.
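
A toy example of the kind of thing I mean (hypothetical data, Python):

  # "find the first record whose field matches X" - the sort of pattern
  # I'd otherwise have to look up
  orders = [
      {"id": 1, "status": "shipped"},
      {"id": 2, "status": "pending"},
  ]
  match = next((o for o in orders if o["status"] == "pending"), None)
  # match -> {"id": 2, "status": "pending"}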


That's a good workflow. But in this case I just keep a fairly complete book about the language and a web search open, because I often seek more complete information without the need for prompting and then checking whether the information is correct.


I mean, I'm not a mathematician but I don't expect that I should be able to write a proof.

You talk of memorizing syntax like it's a challenge, but excluding a small number of advanced languages, no programmer thinks syntax is hard. But if you don't understand the basics, how can you expect to be able to tell whether the solution an LLM presents is decent and not rife with bugs (security and otherwise)?

I guess my issue is people are confusing a shortcut with actually being able to do the thing. If you can't remember syntax, I don't really want your code anywhere I care about.


I personally find debugging and fixing code to be the most rewarding.

That way you know you're (usually) strictly making an improvement.


I know I'm making an improvement by features I've shipped to customers. Bugs prevent me from shipping more features until they're fixed. I do not enjoy bugs.


>This shift from writing to reading code can make engineers feel as if they are bystanders in their own jobs. The Amazon engineers said that managers have encouraged them to use A.I. to help write one-page memos proposing a solution to a software problem and that the artificial intelligence can now generate a rough draft from scattered thoughts.

This feels like we are forcing people who would rather look at code to start talking in plain language, which not every dev likes or is proficient in.

Devs won’t be replaced by AI. Devs will be replaced by people that can (and want to) speak to LLMs.


Sure, AI makes writing code easy, but code review remains a bottleneck. The individual coders are finding ways to use AI to improve their workflows, but as long as organizational culture remains stuck in the old ways of reviewing code, the overall throughput will not improve significantly. That's been my personal experience at Amazon. I get significant pushback when I submit a CR more than a few pages long. You need to have high degrees of trust within a team for this to work at scale, and that's very rarely the case in my group.


> I get significant pushback when I submit a CR more than a few pages long.

Hum yeah, because it's insanely hard to properly review a CR that's more than a few pages long?


Username checks out. Bio even more so. Don't want to be on call when an LGTM'd 1000-liner got dumped on prod.


Stuck in the old ways of reviewing code? In other words, what you're suggesting should be the new way is to simply accept the LLM's code salad and push it to production as soon as the pipeline lights up green?

I want to know if I can use an LLM during problem solving at an interview for SDE jobs at Amazon, Google and Microsoft, who have boldly claimed a lot of their work is done by LLMs. I will be able to point out the algorithm and explain how it works, including the time complexity, but I will need the LLM first to solve it.

Will SDE interviews change? Are these companies gearing up to let AI engineers in? I highly doubt this is ever going to happen.

There is a quote by Buffett that I think applies to a lot of scenarios, not just investing: 'to be fearful when others are greedy and to be greedy only when others are fearful.'

ML, IoT, 5G, blockchain, etc.: so many great technologies had their gaussian-curve moment during the greed phase. But these things take a back seat after that.


Amazon is going down. 20 years ago they had excellent customer service; now they are violating EU laws and cheating the GMP developers out of a CPU replacement:

https://gmplib.org/

Granlund's gcc optimizations probably save Amazon millions in electricity each year. But evidently they don't care about real programmers.


> 20 years ago they had excellent customer service, now

... you will use them anyway because, customer service or no, there’s a good chance you don’t have a choice that doesn’t cost half again as much. (Regional availability may vary.)


They aren’t cheaper anymore. The “real” branded stuff is usually more expensive on Amazon than straight from the manufacturer’s website. The “cheap” fake AliExpress stuff is just that, charged at a premium. You can find the exact same product images, even the same products, on eBay, probably from the same merchants listing on both marketplaces.

It has changed dramatically over the past 5 or so years into this.


At least for stuff like electronics, random plastic household items etc EU Amazon isn't particularly cheap. Branded stuff costs about the same in many smaller stores, and random Chinese no-name brands can be bought from Aliexpress cheaper.


For the EU, use this search engine:

https://geizhals.eu/

Amazon is not nearly the cheapest or most reliable one for hardware.


I am sorry but this is insane cope, Amazon is still on-top everywhere, also in Europe.


In terms of usage, maybe, in terms of service, probably not. Often the biggest dogs are not the best. They have inertia and marketing, and that's why they're the big dogs. Not because they're providing the best, or the cheapest, good or service.


Depends on how local you go, there are local alternatives that are bigger in their domain for almost every domain in most parts of Europe. Amazon is much much smaller outside of USA and there are plenty of alternatives with better and cheaper service.


Also not true in Switzerland. Amazon was ranked #4 overall, #2 in tech, based on sales in 2024, with a massive distance to the leaders. It's not that popular around here, and with current U.S. boycotts I expect it to drop further this year.

Nonsense. Literally the first random example I chose:

https://geizhals.eu/supermicro-h13ssl-n-bulk-mbd-h13ssl-n-b-...

  Cheapest price: EUR 675
  Amazon price:   EUR 875
The vendors with the cheapest price have good customer reviews as well, unlike Amazon, which has terrible ones.


In Poland and neighboring countries, Amazon has some serious competition from Allegro.


I'm afraid software development will soon be about fixing code hallucinated up by AIs - and standups, retrospectives, reviews, demos, plannings etc.


Yes, it's all the wrong way around. I'd rather have an AI attending standups and planning meetings for me. It's supposed to understand my code perfectly, so why can't it report on it for me?


The trend towards increasingly low-quality, mass-produced code started long before AI but AI has greatly accelerated the trend and taken it to levels not previously possible. Short-term, this is destroying software dev jobs, long-term, it's going to create a lot of software dev jobs when people have to maintain or rewrite the mess (unless AGI can fill the gap). It will create a lot of jobs in cyber-security also because complex code, rushed out hastily, with less context awareness, opens the door to more potential vulnerabilities.

It's interesting how the trend which took over manufacturing is now also taking over software development. This trend towards low-quality mass production is taking over everything.


This article features a study claiming 25% productivity gains from Copilot. I wonder what a similar study would say about developers with and without access to internet search. In the study, it seems, as I expected, less experienced people found more value from it. My experience has been that AI tools are more helpful when I don't know the domain, and their benefit declines as I become familiar with it.

I also looked at the study and noticed a few aspects that were surprising:

(a) some of the 95% CIs crossed zero, meaning no benefit is a possible interpretation in figures 6 and 7

(b) did anyone account for what happens when two workers are in different experimental groups and sit next to each other? I imagine it was likely common for people in the experimental group to run Copilot queries for their friends.

(c) for experienced workers, the mean value is actually negative (lol) in the unweighted data in Figure 7

There are a lot of other subtleties in the interpretation of this paper.

"...developers who were less productive before the experiment are significantly more likely to accept any given suggestion...."

...curious lol

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4945566


Yeah out of all the studies I’ve seen about actual productivity gains from LLMs the median estimate is probably in the low single digits (possibly including 0). Not exactly earth shattering yet.


> Companies seem to be persuaded that, like assembly lines of old, A.I. can increase productivity. A recent paper by researchers at Microsoft and three universities found that programmers’ use of an A.I. coding assistant called Copilot, which proposes snippets of code that they can accept or reject, increased a key measure of output more than 25 percent.

> The engineers said that the company had raised output goals and had become less forgiving about deadlines.

There are two issues this article brings to mind:

1. Feels like we are back when lines of code was a measure of productivity.

2. We’ve heard this tune before, and the problem we have now is that you don’t understand what you didn’t write. Software development is in large part about understanding the system from the highest level to its smallest details, and this takes away a key part of how our world works, in favor of a “cattle not pets” view of code.

Now, if you don’t expect your programmers to have an understanding of the system they built, and you treat code as disposable, then you’ll center around a world where folks aren’t trained to learn a system, and I don’t see that as a world that is compatible with increased reliance on A.I.


20 years ago, when I first started working, Wipro and HCL Tech had been talking about making the outsourcing model into "factory-like assembly lines." This fizzled out very quickly back then, but I guess it is now coming to fruition :-|


So who will be held accountable if things break? "Sorry dear customer, it's not our fault, this code was AI generated and it worked in our tests."


What has been the experience with using these tools for folks on HN?

I'm generally amazed at the JS stuff it can come up with from scratch, but asking it to implement features is often not quite great.

Far more often it can't even use the damn edit tool given to the LLM! I have to take some of the "excrement" left behind, maybe do a git reset, and then carefully fix the code.

I'm not surprised the managerial class has gotten high on the kool-aid of replacing workers...


My experience has been poor. My last attempt was with Copilot. I gave it the structure of a psql table and a DTO and asked it to create Java code to write to the DB using a prepared statement. It failed, and the errors were hard to spot. Not sure if it saved time. That was 4o.


I worked at Amazon Robotics low level software team, as a contractor, for a few months. This article sounds about right. Crazy pace of work with no room for reflection or — especially — groundbreaking invention, is given. And that was a year ago, when AI was totally optional.

The engineers are smart and fast folks, which helps a great deal. But don’t seek true creativity and deep expertise at Amazon, unless we’re talking about legacy L6+ (or likely even L7+) level engineers who get to have the leeway needed. The picker robots I had been working with were created, originally, by a Netherlands company, not Amazon. Amazon’s job was scaling it all up for their massive warehouse operation. It was a bunch of hacks on top of hacks. Heck, not even Harel state machines (which is kind of what one would expect to have on a properly designed event-driven robot that’s not overly tied to its PLC implementation, where things can be modularized, in and out, as needed). Just a bunch of switch statements and a menagerie of globals, with only two key people intimately familiar with most of its key code structure, one of whom decided to leave the group in frustration.

In fairness, the mess didn’t originate just at Amazon, but there never was a technical push at Amazon to have the initial work done in the right way. Not the software part anyway.

I can see that the craziness not only didn’t abate but has gotten worse. The money was good but it’s not worth the stress and other frustrations IMO, unless you’re a noncontractor and at least a technical program manager or above, where you won’t be coding as much as reviewing others’ code anyway.

Otherwise, I can totally understand why people just work there for a couple years, to get their resume looking good and then leave


This is good: it means they'll optimize for this kind of coding to have fewer coders, save pennies here and there, and please the shareholders short term, and then get left behind in the long term once an old-school startup that tries to do everything better comes up.

Of course I'm fantasizing and we sadly don't live in this kind of world (Amazon or Google would stomp you if you come for their users) but hey a man can dream.


The place known for:

- High degree of warehouse injuries (https://www.nytimes.com/2024/12/16/business/economy/amazon-w...)

- Making its delivery drivers piss in bottles (https://www.forbes.com/sites/katherinehamilton/2023/05/24/de...)

- Illegally busting unions (https://www.thenation.com/article/activism/amazon-union-bust...)

- Forcing people back into their offices on five day RTO (https://www.businessinsider.com/amazon-5-day-rto-mandate-pol...)

Is now making its white-collar employees' life resemble ...warehouse work? Unthinkable!


Ruby/Dawson in SLU even looks like a warehouse on the inside, complete with chain link fence decor, exposed concrete floors with a splash of paint, exposed ceiling infrastructure, and spartan decorations. And it was built over a decade ago.


"In a memo to employees in April, the chief executive of Shopify, a company that helps entrepreneurs build and manage e-commerce websites, announced that “A.I. usage is now a baseline expectation” and that the company would “add A.I. usage questions” to performance reviews."

Usage of a tool becoming a performance metric is a clear sign that the tool isn't doing what it's promised to do. This is managers desperately trying to prevent their investment in GenAI from blowing up in their faces.

This is as clear an admission as any that the supposed productivity or quality gains from LLM coding aren't showing up in the results. If a tool is effective, management generally doesn't have to micromanage and force employees into using it.


Seems to me like FANG is primarily sacrificing QA to increase productivity, with AI as the reason. Will be interesting to see if they too are 'peeing their pants' (warm and nice -> cold and not so nice), or if it actually will be sustainable.

LLMs were trialed as customer service bots, which backfired (for the most part). Now they're exploring different areas.


Maybe it's easier in FAANG because a lot of the work comes from the sheer (and unnecessary IMO) complexity of the stack/tools.

In fact, a common meme within Google is that Engineers are mere plumbers routing protos from one place to another.

(Given how brittle these systems are though, I'm not sure if I trust LLMs to be better; but they may be more suited for dealing with un-interactive delayed-feedback baroque systems that are the norm in such places.)


The same complaints come from my doctor friends - they are now also being run as a factory where each procedure has a certain allotted time and they need to fit each patient into it. Like you have 10 minutes for a GP task, 15 minutes for psychotherapy, 3 minutes for X-Ray etc. They are complaining about it a lot, it makes them feel like robots.


The Amazon site search and at least some product pages seem to be down right now, although I'm hesitant to say that because I don't remember Amazon ever breaking.

I'm getting 503s ("Sorry! Something went wrong on our end") on searches and "We're sorry, an error has occurred. Please reload this page and try again" on product pages.

The AIs should probably fix that.


Sounds like employees have just started turning into accountability sinks: you let the AI do all the work, don't give the employee enough time to really read / understand it all, and then blame the employee who “pushed” it for not catching a bug, instead of being accountable yourself for letting an AI code it all.


> A recent paper by researchers at Microsoft and three universities found that programmers’ use of an A.I. coding assistant called Copilot, which proposes snippets of code that they can accept or reject, increased a key measure of output more than 25 percent.

Study paid by MS finds that MS tool is improving productivity...


Amazon’s entry level SDEs are hired to answer pages and grind sev2s at 3am. AI seems set to make that worse, not better.


What even are most programmers at Amazon doing? It seems like all the interesting bits (the MVP of Amazon's major products) was developed long ago. All the low-hanging fruit is gone so the task is now focused on squeezing the few remaining drops of efficiency out of the stone.


That’s most major tech companies at this point. The fun was building closer to the MVP of the core product. Once that’s done, it’s just corporate drudgery with computers.


Yeah, most of the things in IT are just in maintenance mode now.

You need way way less people for that.


Until you join a FAANG you simply cannot imagine the complexity of the systems and how much has not been automated and how much there is to build.

I've been in Amazon for close to a decade, and I constantly think "I can't believe X hasn't been automated in the 30 years that Amazon has existed and is still done on Excel".

Most engineers will work on new features for at least half of the year, and I personally work on brand new projects or at least features constantly.


Doing work that helps another team will rarely result in getting any credit and is a great way to get stuck in the bottom of the stack rank (top grading or whatever they call it now). So automating anything that integrates across the org chart won't happen because the incentive structure is against it.


That's not true. In fact OP1 and OP2 [1] in Amazon are in majority an intake process for projects requested by other teams. I'd say only half of them come from the team itself.

[1] https://docbarraiser.com/annual-business-planning-what-the-o...


Working in a warehouse, just recently built over the last 4 years, and the pallet management is done via Excel. I'm pretty sure the scanner app is just a frontend.

Horrified to think the totes are done the same way.


The last few drops of efficiency out of the AWS stone potentially scale to millions of dollars of savings.


From the business side of the house, I only ever look at it as: what can do the right job for the right money to very quickly deliver continuous value to customers? In an employee base, experience is always good for growing and improving my people, but it isn't necessarily tied directly to the work to be done. The shift I feel is happening is in how I place the value here. If I were actively running a startup right now, I can imagine I would land on making sure I have amazing production teams (whatever that looks like: age, location, LLM-powered or otherwise) bookended by great people-growers. All cogs start to squeak if they're not oiled, and experience is the best lubricant on teams; you need both.


I'm a software engineer at Amazon and I'm directly and indirectly involved a lot in programs related to GenAI tooling.

I can safely say this article is bullshit. While there are a lot of programs ongoing to allow builders to adopt GenAI tooling, and while there is definitely a lot of communication ongoing around those programs, nobody is, at all, forced to use any of the AI tools. None are installed by default or enabled by default anywhere, and everyone is free to completely ignore them.

That said, is there an increase in expectations? Yes. But that's just normal Amazon in an employer's market, and has nothing to do with LLMs and GenAI.

The comparison to Microsoft where we can witness in public the .Net maintainers fighting with the shit code generated by Copilot on their repos is ridiculous. Amazon is probably one of the companies pushing the least and being the most prudent about GenAI adoption.


If you truly believe what you’re saying, then you’re uninformed as to what is going on outside your own team. And just because it’s not happening in your team does not mean it isn’t happening.

Q is installed by default in all browsers on Amazon laptops now, and literally cannot be uninstalled. If you don’t have it installed in your IDE, you get a non-dismissible popup nagging you to install it until you do. Many teams are being told they must use AI every single day (some VPs have sent out org-wide emails saying that AI must be used), and engineers have to tell their managers how they are making use of it day-to-day. In my org, OP1 docs must include at least one section about how the team will increase use of AI. Hackathons aren’t allowed to happen anymore unless they are AI-themed. I could keep going. Amazon is absolutely forcing AI usage, and the article undersells how egregious it is.


Somehow you're dismissing my post as being based on my own anecdotal experience while providing your own anecdotal experience and just hearsay.

There is absolutely no company-wide mandate to use GenAI. If some SDM is pushing it on his SDEs, that's an outlier and on that person alone.


> There is absolutely no company-wide mandate to use GenAI.

There is an STeam goal for adoption and usage. There is a QS dashboard for SDMs to see statistics on their org's adoption and abandonment rates. There is BT guidance being propagated out to VPs and directors on how to roll out programs. As placardloop said, there was a mandatory OP1 FAQ question on GenAI usage.


Your comment stated “nobody is, at all, forced to use any of the AI tools”, which is entirely false - I’m looking at an email in my inbox from my VP right now saying everyone must use AI every day.

You said “None are installed by default or enabled by default anywhere” - this is also false. I’m looking at an installed by default (and uninstallable) AI browser addon on my work laptop right now.

It’s not hearsay, you’re either commenting in bad faith or you’re just clueless about what’s going on at your own company.

> There is absolutely no company-wide mandate

And now you’re just moving the goalposts.


This kind of thinking is exactly why a lot of Americans buy only foreign-made vehicles. The American auto industry is a failure propped up by government subsidies. Please don't use them as an example of anything good.


I'm seeing the speed up and it's forcing out people with disabilities who are able to do the work at the previous slower speeds. I wonder if there are any solutions to this or if people like me are just expected to be walmart greeters.


'“It’s more fun to write code than to read code,” said Simon Willison, an A.I. fan who is a longtime programmer and blogger, channeling the objections of other programmers. “If you’re told you have to do a code review, it’s never a fun part of the job. When you’re working with these tools, it’s most of the job.”'

I don't recognise this at all, not in myself, neither in the well functioning teams I've been a part of.

Executing and reviewing code is the job. That's the core thing we do. Writing code is just a fluent activity, like walking or absentmindedly spinning a pen between the fingers.


Yeah, I agree. I said that because I've heard other programmers say it - which I guess is hinted at by the "channeling the objections of other programmers" part of that quote.


I love that I never even jumped on the whole "internet" thing when it arrived to software development in the 90s. After that, dodging blockchains and later AI was even simpler.


>At Amazon, some coders say their jobs have begun to resemble warehouse work

Only because the people responding have never done warehouse work.


As someone whose first job was packing pears in a warehouse at 15 1/2 years old for 10 hours a week, this comment is great. Working in a warehouse for pennies is about as bad as it gets. Now, as a well-paid software engineer, a very bad day doesn't even come close to that experience.


This seems like a recipe for disaster in the making. Well, that's why we need brave people: to show us the way, or to serve as a cautionary tale.


I note less hostility to AI/LLM etc in the comments on this post than in previous months. Is the needle shifting?


Can't think of a post I've seen that was in any way dubious or critical of AI that wasn't also flagged. HN is heavily biased toward AI.


There were many more throwaway accounts before (I think) the new moderation efforts. Many people cannot speak freely on this topic.


> “It would be crazy if in an auto factory people were measuring to make sure every angle is correct,” he said, since machines now do the work.

Am I accountable for the angles or not? Don’t think so! My job is just to bolt it on.

Is a developer not accountable for the code, regardless of how it was created?


While the writing seems to be on the wall for entry-level developer positions, there's going to have to be a pathway for Amazon and others to have junior devs become senior devs who can fill in whenever the machines fail.


This type of thing really incentivizes founding a startup. If you are a very senior developer, who needs the corporate stupid factory? You can do a lot of work with half the people and work for yourself.


Why would any company want to pay a software engineer to do repetitive / factory like work? It would be better to automate that and not have any engineers from the company's perspective.


Because engineers train the AI that'll replace them.


Probably better just to replace them now if they are just doing simple / repetitive tasks.


I think the author simply wanted to write this story now. Like when in the last 10 years could you not have found a handful of Amazon devs to corroborate this claim?


I'm glad I'm a programmer and not just a coder.


I'm so glad I wound down all of my AWS accounts and pay ~$1.50/month for them. I'll close the rest this month.


Hasn't that always been the case at Amazon?


Indeed, I have heard that Amazon is a grindhouse many times before. It's understandable, in a sense: unlike other big tech companies, Amazon lives and dies on low margins. They're more like Walmart than Google, in terms of the business model at least. So they need to push devs hard and get lots of productivity out of them.


Well, if the majority of developers are mostly using web technologies such as JS, TS, HTML, and CSS, and all you're doing is modifying the site, shuffling elements around, adding a11y, etc., that is something an AI can do in seconds.

It does indeed seem like warehouse work, which is why web developers will be the first to be affected by AI.

It doesn't matter if you are "senior" or "staff" in web development. AI is already at senior/staff level there.


But do they mean general warehouse work, or "Amazon warehouse" work?

Because it’s quite different…


At the end of the day, coding with AI is still coding.

LLMs need to get better at it - more precision, less hallucination, larger context windows, and stricter reasoning. We also need a clearer and more domain-specific vocabulary when working with them.

In short, LLMs will end up as one more programming language - but one that's easier for humans to understand, since it operates using the terminology of the problem domain.

But humans will still need to do the thinking: the what-ifs, the abstractions, the reasoning.

That’s where AI becomes unsettling - too many people have been trained not to think at all.


Investing in AI companies is the best thing one could do now


You just need to figure out which companies capture the value, if any. My bet is on Amazon, funny enough.


We'll see whether this increased return for companies holds up in the long term, or creates unforeseen consequences.


ah yes, how the kiva robot lifts a mobile shelf full of Codes and brings them to me for my inspection. I use my left hand to reach for the Code I need and place it in the tote, because my right shoulder is injured from repeating this motion.

The tote goes onto a conveyor where it is shot forwards, backwards and sideways, all around the warehouse until it reaches the compiler station. There, another worker will lint each Code with a lint roller and compile them by selecting the appropriately sized cardboard box.

Next, this executable bundle will be linked with its customer, possibly by means of a plane or 18-wheeler truck, with last-mile linking performed by a smaller vehicle suitable for urban traffic ...


This is why I quit Google many years ago


Do they have timed toilet breaks?


> Andy Jassy ... said working faster was essential because competitors would gain ground if Amazon doesn’t give customers what they want “as quickly as possible” and cited coding as an activity where A.I. would “change the norms.”

Bruh, no competitor is going to set up an army of datacenters and warehouses on the gargantuan scale of amazon anytime soon... What do customers want? How about fixing your search function accuracy and policing the disgusting influx of scam products with fake reviews!!! Ugh. How about you get AI working on THAT jackknife...


AWS has competition in Azure and Google Cloud, and AWS is the money maker for Amazon, everything else is incidental.


> AWS is the money maker ... everything else is incidental.

Considering Amazon kills and steals all kinds of small businesses and ideas, that would be a godsend for Americans.


> Bruh, firstly, no competitor is going to set up an army of datacenters and warehouses on the gargantuan scale of amazon anytime soon.

Google, Microsoft, OpenAI, Oracle, and xAI covet what AWS has. They are absolutely coming for AWS.


AWS? Possibly. They have a huge head start and haven't lost much ground yet. The warehouses and physical delivery? No way in hell. Any competing physical infrastructure would take a decade from the first shovel hitting the ground today.


AI coding assistants are a force multiplier. They can multiply good things and bad things equally and without discrimination.


> Three Amazon engineers said that managers had increasingly pushed them to use A.I. in their work over the past year.

Three?


If it compiles, ship it. Even if it's subtly broken.....

I, for one, look forward to the contracting gigs fixing garbage code. :)

The pull back on this will be brutal.


Welcome to the club.

Have coders really psyopped themselves into thinking their job is somehow that much more special than the rest simply because it paid better due to temporary market conditions?

I thought that was a joke where everyone was in on it, not that they were serious. I assumed it was clear we're all replaceable cogs in a machine, barring a few exceptions of brilliant and innovative people.


> Have coders really psyopped themselves into thinking their job is somehow that much more special than the rest simply because it paid better due to temporary market conditions?

Yes. We don't need to pay $$$ for simply changing elements on a page or adopting the next web framework to replace another. The hype around many web technologies, which lots of developers have fallen for, also contributed to the low quality of the software you use right now.

All of this work to pay developers to over-engineer inefficient solutions and to give them a false sense of meaningful work contributed to the "psyop" of how highly inflated their salaries were in the ZIRP era.

And AI has shown which developer jobs it is really good at, and it is consistently good at web developer roles.

So I'd expect those roles to become significantly less valuable.


AI is good at web developer roles because that’s what has been most prevalent in the training material.


Software development is a lot more than web development.


This begins the golden age of creative generalists (until that's also on the chopping block).


Joke's on us; this is going to rapidly drain what little creativity there is in places like Amazon as they rely increasingly on a tool that, at best, intelligently regurgitates what it learned/gleaned/stole from the internet. As AI models are further trained on their own slop, the signal-to-noise ratio will only get worse; this has already been noted in studies.


Is it? We’re more specialized than ever imo.


Yes, this is the nature of being spoiled rotten.

I am not a software engineer and have never felt stable in my 30-year career.

It always feels like the rug could get pulled out from under me at any time.

So what? It is still better than working in a coal mine. It is still more interesting than working at a gas station.

Hard to feel sorry for people basically complaining the work ping pong table doesn't have quite the quality ping pong balls they were expecting.

It is an interesting mix of being both super elitist and completely infantile at the same time.


Bugs. Bugs everywhere.


I said it before and I'll say it again: it's high time we got a taste of our own medicine. Getting people out of jobs has been the main selling point of our industry since its first days; this is what we've collectively been doing for decades. Do I want my job to be automated right in front of my eyes? Not really. Do I see some poetic justice in the whole thing? Absolutely.


Software development is the most automated career in the history of all time.

Assemblers, Linkers, Compilers, Copying, Macros, Makefiles, Code gen, Templates, CI & CD, Editors, Auto-complete, IDEs are all terms that describe types of automation in software development.

LLM-generated code is another form of automation, but it certainly isn't the first. Right now most of the problems are when it is inappropriately used. The same applies to other forms of automation too - but LLMs are really hyped up right now. Probably it will go through the normal hype cycle where there'll be a crash, and then a plateau where LLMs are used to drive productivity but expectations of their capability are more aligned to reality.


In French, the two fields that are Computer Science and Information Technology fall under the same moniker, "informatique": a portmanteau of "information" and "automatique" (automatic).

The whole field is about automating yourself out of a job, and it's right in the name.


Contraction of "information" and "electronics" according to this site:

https://grodiko.fr/informatique?lang=en

A German site claims that "Informatik", which is practically the same word, is a contraction of "Information" and "Mathematik":

https://www.pctipp.ch/praxis/gewusst/kam-begriff-informatik-...


Larousse dictionary disagrees:

https://www.larousse.fr/dictionnaires/francais/informatique/...

This cites the same person (Philippe Dreyfus) but with "automatic":

https://www.caminteresse.fr/societe/quelle-est-lorigine-du-m...

> It wasn't until 1962 that "informatique" was heard of again in the media. "Informatique" is in fact the term first used by a French scientist to designate "le traitement automatique des données" (automatic data processing). That scientist was Philippe Dreyfus, founder of the company SIA, an acronym for "Société d'Informatique Appliquée".

And it then entered the dictionary in 1966.

In 1957 in Germany, Karl Steinbuch described Informatik as "Automatische Informationsverarbeitung" (automatic information processing).


Gloating about hardships was and is a great way to ensure things will get worse for workers as efficiency and automation increase.

Another option would be to join forces to collectively demand more equitable distribution of the fruits of technological development. Sadly it doesn't seem to be very popular.


> Sadly it doesn't seem to be very popular

Strangely enough, the people who have the most to gain from keeping things the same are really successful at convincing the masses, who have the most to benefit from change in this regard, to vote against it.

https://pjhollis123.medium.com/careful-mate-that-foreigner-w...


Certainly back when I worked in IT, the people I worked with were mostly very much anti-union. I didn't hear much anti immigrant talk back then but I've been retired for a while.


There has been a lot of anti-H1B talk for a long time. And complaining about work being outsourced to India or wherever.


I don't really believe in unions either. But I do believe in a good balance between capitalism and socialism (and welfare systems, employee rights etc). I don't believe in the market solving everything.

The problem I have with unions is that they can be too unreasonable. They're too hardline, just like the ultracapitalists/neoliberals, only on the opposite side. In a good system we wouldn't have to fight for our rights because we'd already have them anyway.


Too unreasonable, and yet now the issues that labour movements warned against for decades are coming to pass.

You have fallen for capitalist propaganda. Time to re-evaluate.


I'm very socialist actually. I just think that with a good socialist government, unions are not needed as such, because national law already protects workers' rights. I find it a bit 'off' that each class of labour has to fight for its own rights separately. That shouldn't be needed (and really isn't, where I live). It also causes a lot of uproar; see France, for example, where a strike is just a Tuesday. Employee rights there are strong, but the public is heavily impacted all the time. Better to just handle this on the government side. This is why I was a member of the socialist party but not of the union in my workplace (I'm no longer a member because I moved countries and can't vote where I live).

Note: I'm not living in the US obviously :)

I do say a balance because of course we're not living in a communist state. So even with a socialist government there is still capitalism. Just not unrestrictedly so as it is in the US.

I'm not sure how it works in the US, but in our company the union is mainly bitching about stupid stuff like breastfeeding rooms (when there are no women who bring their babies to work anyway - they just work from home after their 6 months of maternity leave). All our basic rights are already sorted. We can't work too many hours, we have unlimited sick leave (though of course validated by a doctor for long absences), we're entitled to a lot of money when fired, etc. But this is all national-law-level stuff, not industry level.


> I'm very socialist actually. I just think that with a good socialist government, unions are not needed as such because the national law already protects workers' rights.

Having strong and independent unions is how you keep a good socialist government. Almost anytime you hear “With a good government, you don’t need <whatever>”, you are hearing a recipe for guaranteeing that good government is an exceptional, transitory state. If your society isn't prepared for bad government, it will have one sooner than you’d like, and it will be difficult to dislodge it.


> I'm not sure how it works in the US but in our company the union is many bitching about stupid stuff like breastfeeding rooms (when there are no women who bring their babies to work anyway - they just work from home after their 6 months maternity leave)

A true committed exclusionary socialist.


No really, we had such a room built and it is literally never used. Because there is nobody who brings their baby to work (which would be exceptionally impractical anyway - who'd want to be sitting on a phone call beside a crying baby??).

The bad thing is they converted the welfare room, which was used all the time, to make it :(


Breastfeeding rooms are used for expressing milk, and I wonder how you'd know whether they're used or not.


I was told by building services when I complained about the removal of the wellness/meditation room. They have presence sensors. They also told me that nobody actually asked for it besides the union idiots who are not even in our city. They're just ticking boxes.

I really needed that place because of the move to hotdesking, so I'm constantly sitting beside blabby salespeople. Formerly we had an IT floor where people knew concentration is sometimes needed, so I'd go there to sit in silence and de-stress for a while.

But I have to say the company is good otherwise, I told them about my difficulty and the H&S people let me work from home much more than others.

I hate the way companies are going back from full remote to hybrid hotdesking though because that is the worst of both worlds.


I can imagine why the breastfeeding rooms were empty if your comments are indicative of your attitude in work.

Nothing about your example is an overreach of unions. In fact, it is a perfect example of the value of organised labour.

In honour of recent comments by dang, I won't be as direct as I'd historically be and instead invite you to think about - in the grand scheme of things - how accessibility, including expressing mothers, may be a societal and absolute good.

As a secondary exercise, maybe it's worth thinking about the ethics of presence sensors.


This comment reeks of liberalism and illustrates why liberalism doesn’t work.

You claim you’re trying to balance individualism and collectivism but don’t actually support things that make collectivism work so you end up de facto supporting individualism.

It's a way to support individualism while allowing people to feel extra good about themselves for supporting collectivist ideas, on paper.


I'm very socialist, and anti-liberal (at least, in the economic sense of liberalism).

But where I live we just have strong labour rights from the government so individual unions fighting for each type of labour's rights are not needed as much. Sometimes they are, when there are specific risks like chemicals that they work with. But for overall "not get taken advantage of" stuff, it's just not needed so much.


I’m glad the people in your country allow good things to happen. That’s definitely not the case in all countries and you need collective power to get nice things for workers.


Yes! In a fair world, we would all be very excited about jobs becoming 30% more efficient or fully automated with AI, because that surplus would come back to us. Producing more with less work is a good thing! It's only been distorted into an economic anxiety because the gains from automation are so unevenly distributed.


Join the Tech Workers Coalition


Savoring suffering is uniquely hideous, and one of the grand hallmarks of almost every facet of the decline of the US. It's a clear, bright sign of the death of one's humanity, and the foundation of all evil.

Is that dramatic? No.

More specifically: Things can be inevitable and also horrible. It is not some kind of cognitive dissonance to care about people losing their livelihoods and also agree that it had to happen. We can structure society to help people, but instead we hate the imaginary stereotypes we've generalized every conceivable grouping of real people into, politics being the most obvious example, but we do it with everything, as you have.

The electrician doesn't "deserve" punishment for "advocating" away the jobs replaced by electricity. The engineer doesn't "deserve" punishment for "advocating" away the jobs replaced by engineering. A person isn't an asshole who deserves to have his family suffer because he committed his life to learning to write application servers, or whatever.


If in the process of automating away people's livelihoods, you do not simultaneously advocate for the destruction of the capitalist system that ties the well-being of people and their families intrinsically to those jobs, then you do in fact deserve what's coming to you. History shows that retribution against class traitors is not limited to financial hardship, either.


I have been in this industry for some time, and over the years I have only seen more people being glued to electronic devices, not less.

It might have been a selling point, but the status quo is that we are inventing new jobs faster than phasing out old ones. The new jobs aren't necessarily more enjoyable, though, and there are no more smoking breaks.


Not necessarily. The economies of scale might be increasing as dev productivity increases, the goal of many businesses being to do more with less.


The goal of all American business is exactly the same: maximize the return of profits to the shareholders at large. It is, in fact, the law. "Do more with less" is a natural consequence of this.


> is in fact, the law

It is not in fact law in the US.


https://corpgov.law.harvard.edu/2021/12/01/dodge-v-ford-what...

If directors consistently chose "do less with more" - they'd certainly lose under virtually any legal standard?

Edit: I guess it's technically Michigan law, but as far as I'm aware it applies de facto? Even Aronson v. Lewis wouldn't allow that. (IANAL)


The problem I see is less that of losing jobs and more that the jobs get less enjoyable: less deep work, more mindlessness and less reflection, and possibly also the quality of the produced software decreasing.

Modern AI encroaches upon what software engineers consider to be interesting work, and also adds more of what they find less enjoyable — using natural language instead of formal language (aka code) for detailed specification — which creates a conflict that didn’t previously exist in software technology.


To be fair, all of corporate has grated against deep work and well written software way before the dawn of LLMs. Tech debt is one of the things that modern software engineering produces in spades.


LLMs can't do creative or unique work. They're really only useful for boilerplate, which is the tedious part.


I may be wrong, but I think every job creates wealth overall (or it would not exist), and that software engineering has been making some jobs more efficient and others unnecessary. The wealth that formerly had to be employed where those jobs were inefficient, or had to exist at all, is then employed elsewhere.

If you are the person who lost their job, you get all the downside.

Overall, over the whole of the economy, the entire population, and a reasonable period of time, this increasing efficiency is a core driver of the annual overall increase in wealth we know as economic growth.

When an economy is growing, there is in general demand for workers, and so pay and conditions improve; when an economy is shrinking, there is less demand than supply, and pay and conditions suffer.


> Overall, over the whole of the economy, the entire population, and a reasonable period of time, this increasing efficiency is a core driver of the annual overall increase in wealth we know as economic growth.

This is only true while wealth inequality is decreasing, which it is not.


> This is only true while wealth inequality is decreasing, which it is not.

If everyone is becoming better off, but at different rates such that there is increase in inequality, then everyone experiences economic growth.

Thought experiment.

We have two people, one with 1000 wealth one with 100.

We have 10% growth per year.

So we see;

1000 -> 1100 -> 1210 -> 1331

100 -> 110 -> 121 -> 133.1

Difference in wealth;

900 -> 990 -> 1089 -> 1198

The ratio of wealth remains 10:1, but the absolute difference becomes larger and larger.

I do not know, and I would like to know, how numbers for wealth inequality are being computed.
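
A minimal sketch of that arithmetic (illustrative Python; the starting amounts and growth rate are just the assumed numbers from the thought experiment above):

    # Two parties growing at the same 10% rate: the ratio stays 10:1,
    # but the absolute gap widens every year.
    rich, poor, rate = 1000.0, 100.0, 0.10
    for year in range(4):
        print(f"year {year}: rich={rich:.1f} poor={poor:.1f} "
              f"ratio={rich / poor:.1f} gap={rich - poor:.1f}")
        rich *= 1 + rate
        poor *= 1 + rate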


Do you see in your example that the working class becomes exponentially more disadvantaged over time? Both in relative and absolute terms.


The person the headline refers to is a webdev. What job is that getting rid of?


Web dev doesn’t exist in a vacuum.

Web dev for e-commerce displaced brick-and-mortar retail. Web dev for streaming displaced video rentals and audio sales.


Web devs are a tiny proportion of the employees needed for e-commerce.


Without them (and mobile devs, though there’s increasing cross-over), e-commerce doesn’t get done.

Ergo, web devs are directly contributing to the outcomes that e-commerce enables.


Ok, but so is everyone involved with building the fulfillment centre, the sorting machines, the roads for delivery, the trucks, the railways...

If it sounds like I'm including a lot of jobs, it's because every non-service job in the history of the post-industrial revolution economy has revolved around making things more efficient. Software development is not some uniquely evil domain.


I agree. I was just answering the upthread question, which seemed to imply that web devs have no part in it.


Cashiers, some officials, a lot of the "personal contact with a customer" gets transferred to the web. I am not complaining, just answering the question.


Same goes for a truck driver, road builder, railway worker, etc.

FWIW, I spent many years as a cashier. It's not something I find inherently more valuable to the world. If we could trust people not to steal, we wouldn't need them.


[flagged]


Could you please stop crossing into personal attack? Your account has unfortunately been doing this a lot and we ban accounts that do that.

I don't want to ban you, so if you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.


Nothing in my comment was a personal attack, only a reflection of GP's own behaviour. Go ahead and ban the account if calling out these comments as written is unwelcome. But beware the behaviour you welcome by leaving it unchallenged.


I don't want to ban you! I hope to persuade you to stop using personally pejorative language in your HN comments, which you've unfortunately been doing a lot of.

https://news.ycombinator.com/item?id=44089951

https://news.ycombinator.com/item?id=44089808

https://news.ycombinator.com/item?id=44088236

https://news.ycombinator.com/item?id=44088105

https://news.ycombinator.com/item?id=44040448

https://news.ycombinator.com/item?id=44040666

If you don't like the phrase "personal attack" we can call it something else, but the point is you can't treat other commenters this way on HN, regardless of how wrong anyone is or you feel they are.


If those comments are the problem rather than the messages to which they're responding, then ban me. But again, beware the discourse you welcome, because bad ideas deserve to be challenged.


I think you're probably overestimating the provocation in other people's comments and underestimating the provocation in your own. This is something nearly everyone does.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


If you'd like to rephrase this in a less asshole-y way that takes into account my other replies to this comment thread, I might consider replying.


Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.

https://news.ycombinator.com/newsguidelines.html


I don't actually see which guideline it's breaking. It's a serious offer.


[flagged]


I don't think it's webdev stopping you from getting a customer facing role.


Agreed, although it's slightly beside the point. The goal of building tools and robots has always been to alleviate work. And this is fine. There's still plenty of stuff to do if machines work for us to give us basic housing and food.

Now what needs to be done is to give the profits back to everyone, inclusively, as a kind of "universal basic income", so that we all enjoy them together, and not just the billionaires.


Hmm, on the other hand, there isn't much resistance against genAI in software development (unlike in other creative industries) because ours is founded on collaboration and continuing others' work. It's where open source came from, and the use of libraries. Using Stack Overflow was never frowned upon. AI is just the same but more efficient. Nobody reinvents the wheel from scratch.

It will change the job, yes, but it can also mean the job goes in new directions, because we can do more with less.


There isn't much open resistance because most open-source developers are bought and paid for. So they continue down the path of destruction in the hope that they will not become obsolete.

This is naive of course. Once you have identified yourself as corporate servants (like for example the CPython developers) the companies will disrespect you and fire you when convenient (as has happened at Google and Microsoft).


These things come in waves. First they will fire a bunch of people, but no company can grow through constant downsizing. Then they'll start to imagine new things they can do with the new capabilities and invest in those.

It will cause a displacement of job types for sure. But I think it means change more than decline. When industrialisation happened, lots of factory workers were afraid for their jobs and did lose them. But these days nobody even wants a menial factory job, slaving away on the production line for minimum wage. In fact most people have a far better life now than the masses did before industrialisation. We also had the computer automation that made entire classes of jobs obsolete. Yet it's almost impossible to find skilled workers in Holland now.

And companies need customers with purchasing power. They can't replace everyone with AI because there will be nobody left with money to sell things to. In the end there will be another balance. The interim time, that's the difficult part. Though it is temporary, it can really hurt specific people.

But I don't see AI as a downward spiral that will never recover. In the end it will enable us to look more towards the future (and I am by no means an "AI bro"; I think the current capabilities of AI have been ridiculously overhyped).

I think we need to redraw society too to compensate. Things like universal basic income, better welfare etc. Here in Europe we already have that but under the neoliberal regimes of the last 20 years (and the fallout from the American banking crisis), things have been austerised too much.

In America this won't happen as it seems to go only the other way (very hardline capitalism, with a fundamentalist almost taliban-like religious streak) but well, it's what they voted for.


How is it poetic justice? Were we advocating for automation?


Email made the corporate mailroom and the letter carrier obsolete.


So would it be poetic justice when an electrician gets laid off?


More like if the motor maker gets replaced by a motorised machine. And yeah, that’s poetic.

The electrician is more like the person laying fibre optic cable.


Did the inventor of the SMTP protocol get laid off?


> How is it poetic justice? Were we advocating for automation?

Yes? I know I did, still do, and will continue to at least.


Are you the official representative of all laid off people? Like a religion or something


Please don't comment like this on Hacker News.

Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

https://news.ycombinator.com/newsguidelines.html


You were writing all the code for automation.


This is the message between the lines of much of the anti-dev schadenfreude, but actually spelling it out makes it obvious: it's not true.

Only a minority of dev jobs are automating people out of work. There are entirely new industries like game dev that can't exist without it.

Software development has gained such a political whipping-boy status, you'd be forgiven for forgetting it's been the route to the middle classes for a lot of people who would otherwise be too common, weird or foreign.


The kind of automation I write is stuff that wouldn't get done if a person had to do it. But with automation, it becomes possible and profitable.


I think a lot of genAI coding efficiency will have the same property: costs will go down to the point where things that couldn’t be done affordably in software in 2020 will be affordable in 2030. That could well result in a net increase in tech employment.


I'm surprised people aren't using it to churn out javascript framework after framework - is that not the done thing anymore?


No one is free until everyone is free. Thinking you’re insulated because you do white collar work is idiotic.


Some software "engineers" are discovering they never actually did engineering and are surprised.



