> But why bring in the whole s3 library if you’re going to use one single endpoint?
This is a bit of a reach. There's no reason to assume that the entire project would only be using one endpoint, or that AI would have any trouble coding against the REST API instead if instructed to. Using the official SDK is a safe default in the absence of a specific reason or instruction not to.
Either way, we're already past the point of demonstrating that AI is perfectly capable of writing correct pseudocode based on my description.
> Coding speed is very much not a bottleneck in Emacs.
Of course it is. No editor is going to make your mind and fingers fast enough to emit an arbitrarily large amount of useful code in 0 seconds, and any time you spend writing code is time you're not spending on other things. Working with AI can be a lot harder because the AI is doing the easy parts while you're multitasking on all the things it can't do, but in exchange you can be a lot more productive.
Of course you still need to have enough participation in the process to be able to maintain ownership of the task and be confident in what you're committing. If you don't have a holistic view of the project and just YOLO AI-generated code that you've never looked at into production, you're probably going to have a bad time, but I would say the same thing about intern-generated code.
> I’m more for less feature and better code.
Well that's part of the issue I'm raising. If you're at the point of pushing back on business requirements in the interest of code quality, that's just another way of saying that coding speed is a bottleneck. Using AI doesn't only help with rapidly pumping out more features; it's an extremely useful tool for fixing bugs at a faster pace.
IMO, useful code is code in production (or, if it's for myself, something I can run reliably). Anything else is experimentation. If you're working in a team, code shared with others is at the proposal/demo level.
Experimentation is nice for learning purposes. Kinda like scratch notes and manuscripts in the writing process. But then comes the editing phase, when you're stamping out bugs with tools like static analysis, automated testing, and manual QA. The whole goal is to have the feature in the hands of the users. Then there's the errata phase for errors that have slipped through.
But the thing is, code is just a static representation of a very dynamic medium, the process. And a process has a lot of layers. The code is usually a small part of the whole. For the whole thing to be consistent, the parts need to be consistent with each other, and that's where contracts come into play. The thing with AI-generated code is that it doesn't respect contracts, because of its non-deterministic nature, and because the code (which is the most faithful representation of the contracts) can be contradictory, which leads to bugs.
It's very easy to write optimistic code. But as the contracts (or constraints) in the system grow in number, they can be tricky to balance. The recourse is always to go up a level in abstraction: make the subsystems black boxes and consider only their interactions. This assumes that the subsystems are consistent in themselves.
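One way to make that concrete (a minimal Python sketch, with hypothetical names): pin the contract down as an explicit interface, so the rest of the system codes only against the contract and can treat the subsystem as a black box.

```python
from typing import Protocol

# A hypothetical storage contract: callers see only this interface,
# never the implementation details behind it.
class Storage(Protocol):
    def put(self, key: str, value: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

# One concrete subsystem. As long as it honors the contract,
# everything else can ignore how it works internally.
class MemoryStorage:
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: str) -> bytes:
        return self._data[key]

def archive(store: Storage, key: str, payload: bytes) -> bytes:
    # Written against the contract, not any particular implementation,
    # so swapping MemoryStorage for something else can't break it.
    store.put(key, payload)
    return store.get(key)
```

Whether the implementation stays consistent with the contract is exactly the part a type checker or test suite can enforce, and exactly the part that silently drifts when nothing enforces it.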
Code is not the lowest level of abstraction, but it's often correct to assume that the language itself is consistent. Then come the libraries, where quality varies. Then the framework, where it's often all good until it's not. Then it's your code, and that's very much a mystery.
All of this to say that writing code is like writing words in a manuscript to produce a book. It's useful, but only if it ends up in the final product or helps in creating it, and especially if it's not growing the technical debt exponentially.
I don't work with AI tools because, by the time I'm OK with the result, more time has been spent than if I'd done the thing without them. And the process is not even enjoyable.
Of course; no one said anything about experimentation. Production code is what we're talking about.
If what you're saying is that your current experience involves a lot of process and friction to get small changes approved, that seems like a reasonable use case for hand-coding. I still prefer to make changes by hand myself when they're small and specific enough that explaining the change in English would be more work than directly making the change.
Even then, if there's any incentive to help the organization move more quickly, and there's no policy against AI usage, I'd give it a shot during the pre-coding stages. It costs almost nothing to open up Cursor's "Ask" mode and bounce your ideas off of Gemini or have it investigate the root cause of a bug.
What I typically do is have Gemini perform a broad initial investigation and describe its findings and suggestions with a list of relevant files, then throw all that into a Grok chat for a deeper investigation. (Grok is really strong at analysis in general, but its superpower seems to be a willingness to churn on sufficiently complex problems for as long as 5+ minutes in order to find the right answer.) I'll often have a bunch of Cursor agents and Grok chats going in parallel — bouncing between different bug investigations, enhancement plans, and one or two code reviews and QA tests of actual changes. Most of the time that AI saves isn't the act of emitting characters in and of itself.