> Software engineers have been automating our own work since we built the first assembler.
The declared goal of AI is to automate software engineering entirely. This is in no way comparable to building an assembler. So the question is mostly whether or not this goal will be achieved.
Still, nobody is building these systems _for_ me. They're building them to replace me, because my living is too much for them to pay.
Attempting to automate away software engineering entirely is nothing new. It goes all the way back to BASIC and COBOL, and later to visual programming tools, Microsoft Access, etc. There have been innumerable attempts to somehow get by without needing those pedantic and difficult programmers and all their annoying questions and nitpicking.
But here's the thing: the hard part of programming was never really syntax. It was having the clarity of thought and conceptual precision to build a system that normal humans find useful, despite the fact that they will never have the patience to understand it, let alone debug its failures. Modern AI tools are just the next step in abstracting away syntax as a gatekeeper function, but the need for precise systemic thinking is as glaringly necessary as ever.
I won't say AI will never get there (it already surpasses human programmers in much of the mechanical, rote knowledge of programming language arcana), but it is still orders of magnitude away from being able to produce a useful system specified by someone who does not think like a programmer. Perhaps it will get there. But I think the barrier at that point will be the age-old human need to have a throat to choke when things go sideways. Those in power know how to control and manipulate humans through well-understood incentives, and this applies all the way to the highest levels of leadership. No matter how smart or competent AI is, you can't just drop it into those scenarios. Business leaders can't replace human accountability with an SLA from OpenAI; it just doesn't work. Never say never, I suppose, but I'd be willing to bet the wheels come off modern civilization long before the skillset of senior software engineers becomes obsolete.
> Modern AI tools are just the next step in abstracting away syntax as a gatekeeper function, but the need for precise systemic thinking is as glaringly necessary as ever.
Syntax is not a gatekeeper function. It’s exactly the means to describe the precise systemic thinking. When you’re creating a program, you’re creating a DSL for multiple subsystems, which you then integrate.
The subsystems can be abstract, but we usually define good software by how closely fitted the subsystems are to the problem at hand, meaning adjustments only need slight code alterations.
So viewing syntax as a gatekeeper is like viewing sheet music as a gatekeeper for playing music, or numbers and arithmetic as a gatekeeper for accounting.
The difference is that human language is a much more information-dense, higher-level abstraction than code. I can say "an async function that accepts a byte array, throws an error if it's not a valid PNG image with a 1:1 aspect ratio and resolution >= 100x100, resizes it to 100x100, uploads it to the S3 bucket env.IMAGE_BUCKET with a UUID as the file name, and retries on failure with exponential backoff up to a maximum of 100 attempts", and you'll have a pretty good idea of what I'm describing, despite it taking fewer characters than the equivalent code.
I can't directly compile that into instructions which will make a CPU do the thing, but for the purposes of describing that component of a system, it's at about the right level of abstraction to reasonably encode the expected behavior. Aside from choosing specific libraries/APIs, there's not much remaining depth to get into without bikeshedding; the solution space is sufficiently narrow that any conforming implementation will be functionally interchangeable.
AI is just laying bare that the hard part of building a system has always been the logic, not the code per se. Hypothetically, one can imagine that the average developer in the future might one day think of programming language syntax in the same way that an average web developer today thinks of assembly. As silly as this may sound today, maybe certain types of introductory courses or bootcamps would even stop teaching code, and focus more on concepts, prompt engineering, and developing/deploying with agentic tooling.
I don't know how much learning syntax really gatekeeps the field in practice, but it is something extra that needs to be learned, when in theory that same time could be spent learning some other aspect of programming. More significant is the hurdle of actually producing the syntax: turning requirements into code might be cognitively simple given sufficiently baked requirements, but it is at minimum time-consuming manual labor that not everyone is in a position to easily afford.
I won't, unless both you and I have a shared context that ties each of these concepts to a specific thing. You said "async function", and there are a lot of languages that don't have that concept. And what about the permissions of the S3 bucket? What's the initial wait time? What algorithm for the resizing? What if someone sends us a very big image (let's say the maximum the standard allows)?
These are still logic questions that have not been addressed.
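To make that concrete, here's a sketch (TypeScript; the field names and values are mine, not anything from the spec) of how each of those open questions becomes an explicit decision that somebody has to make:

interface UploadPolicy {
  initialDelayMs: number;   // what's the initial wait time for the backoff?
  maxAttempts: number;      // 100 was specified, but nothing else about the schedule
  resizeKernel: 'lanczos3' | 'nearest' | 'cubic';  // which resizing algorithm?
  maxInputBytes: number;    // what if someone sends the biggest image the standard allows?
  bucketAcl: 'private' | 'public-read';            // what are the bucket/object permissions?
}

// None of these answers is in the English description; every one is a logic decision.
const policy: UploadPolicy = {
  initialDelayMs: 100,
  maxAttempts: 100,
  resizeKernel: 'lanczos3',
  maxInputBytes: 16 * 1024 * 1024,
  bucketAcl: 'private',
};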
The thing is that general programming languages are general. We do have constructs like procedures/functions and classes that allow for a more specialized notation, but that's a skill to acquire (like writing clear and informative text).
square(P) :- width(P, W), height(P, H), W is H.
validpng(P, X) :- /* a whole list of clauses that parse X and build up P */ square(P).
resizepng(P) :- bigger(100, 100, P), scale(100, 100, P).
smallpng(P, X) :- validpng(P, X), resizepng(P).
s3upload(P) :- env("IMAGE_BUCKET", B), s3_put(P, B, exp_backoff(100)).
fn(X) :- smallpng(P, X), s3upload(P).
So what you've left out is all the details. It's great if someone already has a library that does the thing and the functions have the same signatures, but more often than not, there isn't anything like that.
Code can be as high-level as you want and very close to natural language. Where people spend time is implementing the lower levels and dealing with all the failure modes.
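As an illustration (TypeScript, with invented function names; the declarations stand in for real implementations), the top level can read almost like the English description, while all the actual work hides underneath it:

interface PngImage { width: number; height: number; data: Uint8Array; }

// The top level reads almost like the original English description...
async function uploadSquarePng(input: Uint8Array): Promise<string> {
  const image = await parsePng(input);             // throws if not a valid PNG
  assertSquareAtLeast(image, 100);                 // throws on aspect ratio or size
  const resized = await resizeTo(image, 100, 100);
  return withRetry(() => putInImageBucket(resized), { maxAttempts: 100 });
}

// ...while these stubs are where the time actually goes: parse edge cases,
// resampling quality, partial uploads, and what "retry" means if the process dies.
declare function parsePng(input: Uint8Array): Promise<PngImage>;
declare function assertSquareAtLeast(image: PngImage, px: number): void;
declare function resizeTo(image: PngImage, w: number, h: number): Promise<PngImage>;
declare function putInImageBucket(image: PngImage): Promise<string>;
declare function withRetry<T>(fn: () => Promise<T>, opts: { maxAttempts: number }): Promise<T>;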
Details like the language/stack and S3 configuration would presumably be somewhere else in the spec, not in the description of that particular function.
The fact that you're able to confidently take what I wrote and translate it into pseudocode with zero deviation from my intended meaning proves my point.
Drafting a spec like this would take more time, and the same or more knowledge, than just writing the code. And you still won’t have reliable results without doing another lengthy pass to correct the generated code.
I can write that pseudocode because I know the relevant paradigm as well as how to design software. There’s no way you can have a novice draft pseudocode like this, because they can’t abstract well or discern the intent behind abstractions.
I don't agree that it would take more time. Drafting detailed requirements like that to feed into coding agents is a big part of how I work nowadays, and the difference is night and day. I certainly didn't spend as much time typing that function description as I would have spent writing a functional version of it in any given language.
Collaborating with AI also speeds this up a lot. For example, it's much faster to have the AI write a code snippet involving a dependency/API and manually verify the code's correctness for inclusion in the spec than it is to read through documentation and write the same code by hand.
The feat of implementing that function based on my description is well within the capabilities of AI. Grok did it in under 30 seconds, and I don't see any obvious mistakes at first glance: https://grok.com/share/c2hhcmQtMw_fa68bae1-3436-404b-bf9e-09....
I don't have access to the Grok sample you've shared (the service is not available in my region).
Reading the documentation is mostly for gotchas and for understanding the subsystem you're going to incorporate into your software. You cannot design something that will use GTK or sndio without understanding the core concepts of those technologies. And if you know the concepts, then I'd say it's easier and faster to write the code than to write such specs.
As for finding samples, it's easy on the web, especially with GitHub search. But these days, I often take a look at the source code of the library itself, because I often have questions that the documentation doesn't answer. It's not about what the code I wrote may do (which is trivial to know), but what it cannot do at all.
Ah, weird, that's good to know. Well here's the code:
import { env } from './env';
import { v4 as uuidv4 } from 'uuid';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import sharp from 'sharp';

async function retry<T>(fn: () => Promise<T>, maxAttempts: number): Promise<T> {
  let attempt = 1;
  while (true) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= maxAttempts) {
        throw error;
      }
      const delayMs = Math.pow(2, attempt - 1) * 100;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      attempt++;
    }
  }
}

export async function processAndUploadImage(s3: S3Client, imageData: Uint8Array): Promise<string> {
  let metadata;
  try {
    metadata = await sharp(imageData).metadata();
  } catch {
    throw new Error('Invalid image');
  }
  if (metadata.format !== 'png') {
    throw new Error('Not a PNG image');
  }
  if (!metadata.width || !metadata.height || metadata.width !== metadata.height || metadata.width < 100) {
    throw new Error('Image must have a 1:1 aspect ratio and resolution >= 100x100');
  }
  const resizedBuffer = await sharp(imageData).resize(100, 100).toBuffer();
  const key = `${uuidv4()}.png`;
  const command = new PutObjectCommand({
    Bucket: env.IMAGE_BUCKET,
    Key: key,
    Body: resizedBuffer,
    ContentType: 'image/png',
  });
  await retry(async () => {
    await s3.send(command);
  }, 100);
  return key;
}
The prompting was the same as above, with the stipulations that it use TypeScript, import `env` from `./env`, and take the S3 client as the first function argument.
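(For completeness, calling it would look something like the sketch below; the region and file path are placeholders of mine, and it assumes an ESM context for top-level await:)

import { readFile } from 'node:fs/promises';

// S3Client and processAndUploadImage come from the code above.
const s3 = new S3Client({ region: 'us-east-1' });
const bytes = await readFile('avatar.png');   // any source of PNG bytes works
const key = await processAndUploadImage(s3, new Uint8Array(bytes));
console.log(`uploaded as ${key}`);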
You still need reference information of some sort in order to use any API for the first time. Knowing common Node.js AWS SDK functions offhand might not be unusual, but that's just one example. I often review source code of libraries before using them as well, which isn't in any way contradictory with involving AI in the development process.
From my perspective, using AI is just like having a bunch of interns on speed at my beck and call 24/7 who don't mind being micromanaged. Maybe I'd prefer the end result of building the thing 100% solo if I had an infinite amount of time to do so, but given that time is scarce, vastly expanding the resources available to me in exchange for relinquishing some control over low-priority details is a fair trade. I'd rather deliver a better product with some quirks under the hood than let my (fast, but still human) coding speed be the bottleneck on what gets built. The AI may not write every last detail exactly the way I would, but neither do other humans.
As I’m saying, for pure samples and pseudocode demos, it can be fast enough. But why bring in the whole S3 library if you’re going to use one single endpoint? I’ve checked npmjs, and sharp is still in beta (if they’re using semver). Also, the code parses the image data twice.
I’m not saying that I write flawless code, but I’m more for fewer features and better code. I’ve battled code where people would add big libraries just to avoid writing ten lines of code, and then couldn’t reason about why a snippet fails, because it’s unreliable code layered on unreliable code. After a few months, you’ve got zombie code in the project, and the same thing implemented multiple times in a slightly different way each time. These are pitfalls that occur when you don’t have a holistic view of the project.
I’ve never found coding speed to be an issue. The only time when my coding is slow is when I’m rewriting some legacy code and pausing every two lines to decipher the intent with no documentation.
But I do use advanced editing tools. Coding speed is very much not a bottleneck in Emacs. And I had a somewhat similar config for Vim: things like quick access to docs, quick navigation (like running a lint program and then jumping directly to each error), quick commit, quick blaming, and time-traveling through the code history...
> But why bring in the whole S3 library if you’re going to use one single endpoint?
This is a bit of a reach. There's no reason to assume that the entire project would only be using one endpoint, or that AI would have any trouble coding against the REST API instead if instructed to. Using the official SDK is a safe default in the absence of a specific reason or instruction not to.
Either way, we're already past the point of demonstrating that AI is perfectly capable of writing correct code based on my description.
> Coding speed is very much not a bottleneck in Emacs.
Of course it is. No editor is going to make your mind and fingers fast enough to emit an arbitrarily large amount of useful code in 0 seconds, and any time you spend writing code is time you're not spending on other things. Working with AI can be a lot harder because the AI is doing the easy parts while you're multitasking on all the things it can't do, but in exchange you can be a lot more productive.
Of course you still need to have enough participation in the process to be able to maintain ownership of the task and be confident in what you're committing. If you don't have a holistic view of the project and just YOLO AI-generated code that you've never looked at into production, you're probably going to have a bad time, but I would say the same thing about intern-generated code.
> I’m more for fewer features and better code.
Well that's part of the issue I'm raising. If you're at the point of pushing back on business requirements in the interest of code quality, that's just another way of saying that coding speed is a bottleneck. Using AI doesn't only help with rapidly pumping out more features; it's an extremely useful tool for fixing bugs at a faster pace.
IMO, useful code is code in production (or, if it’s for myself, something I can run reliably). Anything else is experimentation. If you’re working in a team, code shared with others is at the proposal/demo level.
Experimentation is nice for learning purposes, kinda like scratch notes and manuscripts in the writing process. But then it’s the editing phase where you’re stamping out bugs, with tools like static analysis, automated testing, and manual QA. The whole goal is to get the feature into the hands of the users. Then there’s the errata phase for errors that have slipped through.
But the thing is, code is just a static representation of a very dynamic medium: the process. And a process has a lot of layers; the code is usually a small part of the whole. For the whole thing to be consistent, the parts need to be consistent with each other, and that’s where contracts come into play. The thing with AI-generated code is that it doesn’t respect contracts, both because of its non-deterministic nature and because the code (which is the most faithful representation of the contracts) can be contradictory, which leads to bugs.
It’s very easy to write optimistic code. But as the contracts (or constraints) in the system grow in number, they can be tricky to balance. The recourse is always to go up a level in abstraction: make the subsystems black boxes and consider only their interactions. This assumes that the subsystems are consistent in themselves.
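A tiny example of what I mean by a contract (TypeScript, names invented by me): the type signature is only the visible half of it; the behavioral clauses live in comments and in callers’ heads, which is exactly the part a code generator can silently violate:

interface SessionStore {
  // Contract: get() after put() with the same key returns that value,
  // and put() is idempotent. Nothing in the types enforces either clause.
  put(key: string, value: string): Promise<void>;
  get(key: string): Promise<string | undefined>;
}

// This type-checks fine and still breaks the contract: it drops every write.
const broken: SessionStore = {
  async put(_key: string, _value: string) { /* forgot to store anything */ },
  async get(_key: string) { return undefined; },
};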
Code is not the lowest level of abstraction, but it’s often correct to assume that the language itself is consistent. Then come the libraries, where quality varies. Then the framework, where it’s all good until it isn’t. Then it’s your code, and that’s very much a mystery.
All of this to say that writing code is like putting words on a manuscript to produce a book: it’s useful, but only if it’s part of the final product or helps in creating it, and especially if it’s not increasing the technical debt exponentially.
I don’t work with AI tools because, by the time I’m OK with the result, more time has been spent than if I’d done the thing without them. And the process is not even enjoyable.
Of course; no one said anything about experimentation. Production code is what we're talking about.
If what you're saying is that your current experience involves a lot of process and friction to get small changes approved, that seems like a reasonable use case for hand-coding. I still prefer to make changes by hand myself when they're small and specific enough that explaining the change in English would be more work than directly making the change.
Even then, if there's any incentive to help the organization move more quickly, and there's no policy against AI usage, I'd give it a shot during the pre-coding stages. It costs almost nothing to open up Cursor's "Ask" mode and bounce your ideas off of Gemini or have it investigate the root cause of a bug.
What I typically do is have Gemini perform a broad initial investigation and describe its findings and suggestions with a list of relevant files, then throw all that into a Grok chat for a deeper investigation. (Grok is really strong at analysis in general, but its superpower seems to be a willingness to churn on sufficiently complex problems for as long as 5+ minutes in order to find the right answer.) I'll often have a bunch of Cursor agents and Grok chats going in parallel — bouncing between different bug investigations, enhancement plans, and one or two code reviews and QA tests of actual changes. Most of the time that AI saves isn't the act of emitting characters in and of itself.
Who declared it? Who cares what anyone declares? What do you think will actually happen? If software can be fully automated, then sure, SWEs will need to find a new job. But why wouldn't it instead just increase productivity, with developer jobs still existing, only different ones?
> The declared goal of AI is to automate software engineering entirely.
It’s hardly the first thing that has that as its “declared goal” (i.e., the fantasy sold to investors to get capital and to the media to get attention).