Nowadays I've come to the conclusion that "ease of maintenance" is the most important feature a project can have. The only thing more critical is that the project itself is valuable enough; so many engineers optimize things that shouldn't exist in the first place.
Ease of maintenance is not only about keeping something alive with minimal effort over long periods of time. It also plays a pivotal role in scalability in any direction. Adding more engineers/teams, adding more unforeseeable features, iterating quickly in general, surviving more traffic/load, removing technical bottlenecks, ... everything is so much easier when the project is easy to work with and maintainable.
Personal observation from a heavy LLM codegen user:
The sweet spot seems to be bootstrapping something new from scratch and getting all the boilerplate done in seconds. This is probably also where the hype comes from; it feels like magic.
But the issue is that once it gets slightly more complicated, things break apart and quickly run into a dead end. For example, yesterday I wanted to build a simple CLI tool in Go (which as a language + stdlib is outstandingly friendly to LLM codegen) that acts as a simple reverse proxy and (re-)starts the original program in the background on file changes.
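For clarity, the shape I was after is roughly this; a minimal stdlib-only sketch where the ports, the `./server` binary and the watched file are made-up placeholders, and naive mtime polling stands in for proper file-change notifications:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"os"
	"os/exec"
	"time"
)

// startChild launches the upstream program the proxy forwards to.
func startChild() *exec.Cmd {
	cmd := exec.Command("./server") // hypothetical upstream binary
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	return cmd
}

func main() {
	// hypothetical addresses: proxy listens on :8080, upstream on :8081
	target, err := url.Parse("http://127.0.0.1:8081")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(target)

	cmd := startChild()

	// Naive change detection: poll the watched file's mtime once a
	// second and restart the child whenever it moves forward.
	go func() {
		var last time.Time
		for range time.Tick(time.Second) {
			info, err := os.Stat("main.go") // hypothetical watched file
			if err != nil {
				continue
			}
			if mt := info.ModTime(); mt.After(last) {
				if !last.IsZero() {
					_ = cmd.Process.Kill()
					_, _ = cmd.Process.Wait()
					cmd = startChild()
				}
				last = mt
			}
		}
	}()

	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```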
AI was able to knock out _something_ immediately that indeed compiled, only it didn't actually work as intended. After lots of back-and-forth iterations (Claude mostly), the code ballooned in size trying to figure out what the issue could be, adding all kinds of useless crap that kind-of-looks-helpful-but-isn't. After an hour I gave up, went through the whole code manually (a few hundred lines, single file) and spotted the issue immediately (holding a mutex lock that gets released with `defer` doesn't play well with a recursive function call). After I pointed that out, the LLM was able to fix it and produced a version that finally worked, still with tons of crap and useless complexity everywhere. And that's a simple, straightforward coding task that can be accomplished in a single file and a few hundred lines, greenfield style. And all my Claude chat tokens for the day got burned on this, only for me to have to dig in myself at the end.
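For the curious, the bug boils down to this pattern (a reconstructed minimal example, not the actual code): Go's `sync.Mutex` is not reentrant and `defer` only fires on return, so the recursive call blocks trying to re-acquire a lock its own caller still holds.

```go
package main

import "sync"

type watcher struct {
	mu      sync.Mutex
	retries int
}

// restart takes the lock and defers the unlock. The unlock only runs
// when restart returns, but the recursive call below blocks trying to
// re-acquire the same non-reentrant mutex, so it never returns.
func (w *watcher) restart() {
	w.mu.Lock()
	defer w.mu.Unlock() // fires at return, not before the recursion

	if w.retries < 3 {
		w.retries++
		w.restart() // deadlock: this goroutine already holds w.mu
	}
}

func main() {
	w := &watcher{}
	w.restart() // fatal error: all goroutines are asleep - deadlock!
}
```

The fix is correspondingly small: release the lock before recursing, or turn the recursion into a loop so the lock is taken and released exactly once.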
LLMs are great at producing things in small, limited scopes (especially boilerplate-y stuff) or refactoring something that already exists, when they have enough context and essentially don't really think about a problem but merely change linguistic details (ultimately rewriting text into a different format). It's a large LANGUAGE model, after all.
But full-blown autonomous app building? Only if you're doing something that has been done thousands of times before and is simple to begin with. There is lots of business value in that, though; most programmers at companies don't do rocket science or novel things at all. It won't build any actual novelty: the ideal case is building an X for Y (like Uber for Catsitting), but never an initial X.
My personal productivity has gone through the roof since GPT-4/Cursor, though, but I guess I know how and when to use it properly. And developer demand will surge when the wave of LLM-coded startups gets its funding and realizes the codebase cannot be extended with LLMs anymore due to the complexity and the raw amount of garbage in there.
> After lots of back-and-forth iterations (Claude mostly), the code ballooned in size trying to figure out what the issue could be, adding all kinds of useless crap that kind-of-looks-helpful-but-isn't. After an hour I gave up, went through the whole code manually (a few hundred lines, single file) and spotted the issue.
That's what experience with current-generation LLMs looks like. But you don't get points for making the code in the LLM look perfect; you get points for what you check in to git and the PR. So the skill is in realizing the LLM is running itself in circles before you run out of tokens and burn an hour, and then doing it yourself.
Why use an LLM at all if you still have to do it yourself? Because it's still faster than going without, and also that's how you'll remain employable: by covering the gaps that an LLM can't handle (until they can actually do full-blown autonomous app development, which is still a while away, imo).
The most important signal of a degree is written proof that a person somehow managed to show up to something over a longer timespan without constantly being physically forced to. Basically, it's a minimum baseline of reliability and self-organization.
The reality is that _a lot_ of applicants without any proof like that don't last long and thus waste lots of company resources (including rehiring). Of course there are many great folks without a degree and also a lot of idiots with a degree, but I've learned to trust that heuristic until a strong signal otherwise pops up.
the concept of freedom itself is kind of hard, even in a stricter context like politics.
like, I cannot freely choose to do X today, because my environment (e.g. capitalism) demands that I do certain other things that bring in money to live comfortably. I could make a tradeoff somewhere, but that very tradeoff itself limits actual freedom.
therefore, to maximize actual freedom, we're looking at eliminating constraints that limit freedom, like the need to make money somehow, which would be a dramatic societal change not everyone agrees to.
it's hard. and wonky. so everyone uses it only through a personal belief lens.
Currently I'd flat-out refuse to give any sort of prize to Musk; that could be a tipping point for his mental "stability" completely breaking down. The last few years have really taken a toll on him. He's fallen from idol to right-wing conspiracy idiot, crashing his companies more and more.
I think money is simply like mass: it has some kind of gravitational force, so people are drawn to it (the more money, the stronger the pull), and it attracts itself (money tends to accumulate like a black hole) if left unchecked.
The only way to interfere with it is by using other unrelated forces, like kindness, just like we can move masses around with electromagnetism.
I jump between ChatGPT, Claude 3.5 and Grok2 on a regular basis, and I'd say it's not "worse" in the general sense. I find it much better at anything sentiment-analysis related or for shortcutting customer research, and image generation is really good.
You should really have a look at Marx. He literally predicted what would happen when we reach the state of "let machines do all the work", and also how this is exactly what finally implodes capitalism as a concept. The major problem is he believed the industrial revolution would automate everything to that extent, which it didn't, but here we are with a reasonable chance that AI will finally do the trick.
It may implode capitalism as a concept, but the people who benefit most from it and hold the levers of power will also have their egos implode, which they cannot stand. Even Altman has talked about UBI and a world of prosperity for all (although his latest puff piece says we just can't conceive of the jobs we'll have, but w/e), but anyone who's "ruling" the current world is going to be the least prepared for a world of abundance and happiness for all where money is meaningless. They won't walk joyfully into the utopia they peddled in yesteryear; they'll try to prop up a system that positions them as superior to everyone else, and if it means the world goes to hell, so be it.
(I mean, there was that one study that used a chatbot to deradicalize people, but when you're the one in power, your mental pathologies are viewed as virtues, so good luck trying to change them as people.)
1. We do not live underwater, so lots of tools and techniques become possible (e.g. fire).
2. We have usable hands with thumbs, which allows much better tooling. Ravens only have their beaks and are comparatively handicapped in technical development.
3. We do not die during/after reproduction. This allows for the accumulation of knowledge instead of every generation starting to learn from scratch. That's the octopus's handicap.
4. We have reasonably enough raw strength and compensate for our weaknesses with social groups and tools/weapons, enough to eliminate basically all competition wherever we lived. And we managed to increase food production to support exponential population growth, again ultimately some form of tooling.
… so for me it boils down to the fact that we, among several intelligent species, have been the lucky ones to leverage tooling in the most efficient way.
And ravens/crows especially are absolutely in the ballpark of humans when it comes to intelligence. They have an actual language to communicate facts to others, they have social structures/rituals similar to ours, and they make tools to accomplish goals, even with multi-step plans. Heck, they even "use" other animals like wolves: since they are not capable of opening a fresh deer carcass to get to the meat, they search for the nearest wolf and guide it to the carcass for win-win food sharing… some wolf packs have even been seen essentially protecting raven flocks/eggs, so ravens can literally have something like pet dogs.
I agree, it’s likely many subtle, accumulated factors.
I think the interesting question, which we may never be able to answer, is whether crow intelligence is such that they could have developed on our trajectory as well, had things been different. Or is our intelligence, while similar to that of crows and other animals in many ways, fundamentally different in some way? Or was early hominid intelligence middle of the pack, and it was just the other factors you mentioned that gave it the edge it needed?