I'm Too Old (amazingcto.com)
58 points by KingOfCoders 10 months ago | 73 comments



I feel this is becoming more and more relevant again:

https://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/E...

Edit: To add to this, I think there is a lot of software that is written in a throwaway fashion (CRUD apps, shell scripts), where using LLMs might be beneficial. But for anything where correctness actually matters, why would I describe it in natural language only to then have to check the implementation?

The much more sensible use of LLMs to me is the other way round: creating ad hoc documentation for code, documentation you can even ask questions of. But that's probably not fundable by VCs at the same level.


Yes and no (imho). Whether it's some uber-technical or low-level language, such as Assembly, or something like Swift, an LLM would have no problem 'learning' it. It could take more or less time, but moving bits around and drawing lines is something a machine can do faster than a human, once the machine learns. I was coding in MQL4. It would take me a couple of hours to make something that an LLM would type up in 30 seconds, with 5 minutes spent on the description/requirements.

And it may make 1000 mistakes before getting it right, but if those 1000 iterations happen in 1ms, it is still more profitable than a human needing 10 iterations. And consider that once you teach a machine (ML), it will know it forever, while you need to spend dedicated time on each new human who becomes a dev.

> ..sharp decline of people's mastery of their own language

This is the part that scares me about humanity. Perhaps I read a lot, and perhaps I am a believer that words matter. People seem to be replacing everything, dumbing things down, racing to the bottom of intellect.

On the original post ("I'm too old"): I am also in the "I'm too old" age range. I am happy to only deal with LLMs as a user. Google tends to know everything about anyone anyway. So as long as my questions are not giving away anything private/secret (e.g. "how should I wash my 5th nipple?"), then "it's ok".


Well, I am older than this chap and I don't feel old because of AI. In fact, I love what's happening with coding and AI. It's like having a wonderful buddy who programs with me. It takes away the drudgery of some tasks and shows me new things. I get to learn from the AI systems.

Now, imagine that AIs start replacing more and more of what I do, then I'll have more and more time to think of what I want built and not have to spend so much time on building it. Sounds brilliant to me.

And when I want to I can turn away from the AI tools and handcraft something. For the pleasure of it.


(author here)

As I wrote:

"AI is amazing. I’ve trained an AI to detect model railroad locomotives and their types. I’m a daily user of AI to let it write code for me."

But AI will take away all coding. Sure, some people will write code, like some people today use a mechanical typewriter. Like some artists use clay. But most of what happens in computers in the future will not be code, but executed, self-modifying, self-optimizing AI models.

"more and more time to think of what I want build"

This is not how AI will work. It's not that I want something built and the AI builds it; the AI will just do things. There will no longer be things to "build". We will no longer think of code as something that exists, but as things that just happen.

You will say, "But I could tell the AI what film to create, and some scenes, and a rough story, and it will create that film." But what if AI creates much better, more powerful, more exciting films than those you could imagine? Films no human ever thought about?

Again, like the typewriter, some people will tell the AI fragments of a story to create a film. But most media content will be created by AI, for consumption, on the initiative of AIs, not on the initiative of humans.

In the mid 90s I wrote a philosophy paper at university about an AI that generates random images (triggered by my first digital camera, an Olympus C-800L) and then interprets them (making some estimates of the speed at which it could do that generation and interpretation). That AI has basically seen it all: an alien killing JFK, me on the moon, you and me drinking a beer, and things we could never imagine.

[Edit] Like there are people writing new games for 8-bit computers today. They exist, but it's a niche.


You're making a lot of assumptions and predictions there. Not to say that you're wrong, but we don't yet know if you're right. My opinion is that we're seeing the shape of the limitations of these LLMs. I use them a lot for coding; it's a sea change. But you still have to understand what you're doing, they don't produce accurate code all the time, and I'm not convinced they'll be able to just insert working code into existing complex systems.

> But what if AI creates much better, more powerful, more exciting films than those you could imagine?

Well, that's something to deal with when it can. Right now it cannot do this.


> But you still have to understand what you're doing, they don't produce accurate code all the time, and I'm not convinced they'll be able to just insert working code into existing complex systems.

I fully agree. I have been using Copilot for a while, and the errors it makes in more complex domains (algorithms and data structures more complex than 'write me a quicksort') are just stupendous. Even at a more basic level, it regularly makes Rust syntax errors that my 10-year-old daughter has no trouble spotting.

It has been really awesome for writing boilerplate code, though. Especially with in-context learning: writing one example and then letting the model extrapolate it to other methods, functions, etc. works great and saves me a lot of time.
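
To make that pattern concrete, here is a made-up illustration (the class and methods are hypothetical, not from any real project): you hand-write the first converter, and the model extrapolates the parallel ones.

    # Hypothetical example of in-context extrapolation: write one
    # method by hand, let the model fill in the parallel ones.
    from dataclasses import dataclass

    @dataclass
    class Point:
        x: float
        y: float

        # Written by hand as the in-context example:
        def to_tuple(self):
            return (self.x, self.y)

        # The kind of thing the model then extrapolates:
        def to_dict(self):
            return {"x": self.x, "y": self.y}

        def to_list(self):
            return [self.x, self.y]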

It's hard to predict the future. But the current best models feel more like a much better IntelliSense (though API hallucination is a serious issue) than something that is going to replace good programmers anytime soon (unless your task is writing boilerplate code).


Everything about the future is based on assumptions.

Nevertheless we talk about the future.


Yes we do, but there are things we can talk about with certainty and things we can only talk about in probabilities. I'll certainly get older and die. But it's not certain that AI will take all coding jobs; it's highly likely to change all coding jobs. We can discuss the changes with a higher degree of interest and thought. The author seems very certain, though, that AI will take all coding jobs.

A lot of things seem to have great potential and then peak a lot lower. AI producing very impressive but very short videos gives us a sense that it has the potential to produce whole, coherent films that tell a story.


There will always be public phones. Sure, mobile phones will grow. But they are expensive. We estimate the total demand for mobile phones to peak at 900,000. [0] Replace all public phones? That is a big assumption we can't make with certainty.

The drivers have always been convenience and cost. AI is the ultimate in convenience and cost.

I guess there will be programmers needed for legacy systems in Fortran and Java. But just as new companies 5 years ago didn't use Fortran or Java for new projects, the coming companies will not hire coders for their products.

[0] https://www.economist.com/special-report/1999/10/07/cutting-...


> We will no longer think of code as something that exists, but as things that just happen.

Which is very scary: we are modelling things after nature and ourselves, with our own information; this makes LLMs sometimes eerily human and thus fallible. So when you say there will be no more software, I think you are saying that some future AI/AGI will perform the work without coding that work. So instead of writing code for xy, it will just 'do' xy 'in its head' and use the answer for the rest of 'the problem' it is solving. That is going to be just as terrible as humans doing the same thing: they will, every so often, make mistakes, and if you see a 'program' as being 100,000s of these decisions every run, this will be terrible.

Now, it is possible we will come up with some AI that does not have this issue, but we don't even have a plan of attack for creating it; I am your age, and I am going to say we will both be dead before it happens. However, because a lot of companies like terrible ideas if they save money (support chatbots are an example), surely this domain will be tested with the short-term generations of AIs.


We've had tools for generating code for a long time. They're mostly used for initial project setup, XML-configured Java applications, and macro stuff at the fringes of software development.

If "AI" built an application, who is liable when there's damage?


Developers today are also not liable; otherwise we would have languages that focus on preventing bugs instead of expressive power.


> This is not how AI will work. It's not that I want something built and the AI builds it; the AI will just do things. There will no longer be things to "build". We will no longer think of code as something that exists, but as things that just happen.

That’s a _huge_ assumption. There’s really, at this point, very little reason to think it will come to pass; if it does, it likely won’t be due to anything particularly similar to the current crop of generative AI stuff.


The devil is in the details, as always.

There's no downhill slope from where we are now to "AI does all the work." It's an uphill battle, just like automation at large, and it will never truly end.

Software has been eating the world, and AI will eat the world, but humans will shape how that happens, and there will always be new opportunities, new challenges, and new levels of abstraction.

AI will enable us to solve bigger problems. Picture beautiful cities integrated with nature: flower gardens, fruit trees, self-maintaining ecosystems. Picture spaceships capable of interstellar travel and asteroid mining. Picture advanced personalized AI doctors who can detect disease just by scanning your face and can cure you using AI-discovered techniques.

There's a huge amount of coding (AI assisted) to bring us to that world. I understand the nostalgia and fondness for memorizing syntax, but there are more exciting problems to solve, and we're on the verge of solving them. If programmers are problem solvers, then this is shaping up to be our most exciting era yet.


You aren't too old. I've been out of school for ~10 years and AI makes me feel exactly the way you've described it. I'm trying to navigate it and find an exciting space to carve out for myself, but the future feels unclear in a way it never has before. I'm confident in my own ability to build great things, but to your many points, AI has the unique potential to do everything better than me.

I don't want to say this makes me feel hopeless, because it genuinely doesn't. I'm working on a new startup, I'm starting a family, and I'm doing well.

I just want to build things that matter, that people find useful, and I've been having a hard time figuring out how I'll do that in a world with sufficiently advanced AI. Hobbies have never been enough for me, and never will be. It needs to be more than fun to energize me - I need to feel like I'm moving a needle.

For now, all I know is that I don't know. It feels nearly impossible to guess what the future holds.


The current generation of AI can't do the truly difficult things better than you: finding a good niche, making the decisions about how you want to differentiate, building relationships with your customers, choosing which innovations to pursue and when to build on the features you already have.

I run my own startup too, and I wish it could help me with those things. So far the most I can get from it is things like generating some HTML, and even then it gets it wrong half the time.

That will change in the future, but it will be gradual, and it will help you execute on ideas faster and better.


IF (a big if!) all your forecasts come true, I don't think what people will mourn most at that time is losing the fun part of coding/developing software.

Like those laborers who went jobless after the waves of the industrial revolution, they should have been planning earlier for other jobs and skills, rather than focusing solely on the fulfillment of making goods.


I do think coders will ;-)

But yes, like the Waymo destruction, the people booing AI at SXSW, and the recent study saying the UK will lose at least 8 million jobs to AI, the changes are hard to imagine.

Like watching Terminator where Sarah is running around searching for a public telephone.


Yep. I don't know what's left for coders to do when AI is doing the coding, but I wouldn't be interested.

A painter gets into it for the painting, not painting management, or painting-business-strategy technical directorship, or touching up photographs taken by the new machine.

PS: I feel this way and I'm way younger than you. ;p I don't think it's an age thing as much as a "we were sold the idea that we could code" thing.


> But AI will take away all coding.

I feel like the time frame for this is 20+ years out. It feels premature to mourn the end of coding, although we can sense it coming in the medium-term future.

It’s not too soon to start planning how society should handle the radical changes in employment due to AI. It seems like that is where our focus should be.


Imagine the 2038 problem being solved by computers themselves!

"I've upgraded your database schema for you, Dave. It was about to create problems for you, but you're safe now."


Star Trek has no database schema. Neither does HAL. That is the point.


> But AI will take away all coding

If AI takes away all coding, who will program new AIs? And when you want to tell an AI what to make, you have to tell a computer in very precise language what you want it to do, otherwise you might have unintended effects, which sounds a lot like coding to me.


I don't think "AI will take all coding": maybe not ever, and certainly not in the next few decades.

I think programming languages will get much closer to natural language, but the interesting, important, and hard parts of coding will remain.


Before computers, there was no programming and you didn't worry about it. After computers, there may be something totally new and special, and you'll never think about "coding" again.

Maybe you’ll use AI to create entire new universes, simulated only perhaps but who knows how wild and interesting it can get.


Will people still derive pleasure from creating things if there is an AI that is much better at creating things than they are?

Take game development, for example: I enjoy writing games, but would I enjoy it as much if any random person could create a game much better than anything I could make myself? I'm not sure. It would be like performing to an empty room. Without any audience, is there pleasure in creation?


The answer is yes. I'm a crappy woodworker compared to a master craftsman; I still love it.

I love growing tomatoes, even though I'm not a pro and people are better at it than me.

I surf even though there are people 50x better at it than me. I know that.

You don't do something to be the best, you do it because it's enjoyable.


Kurt Vonnegut, in a letter responding to a high schooler asking for advice:

Here’s an assignment for tonight, and I hope Ms. Lockwood will flunk you if you don’t do it: Write a six line poem, about anything, but rhymed. Make it as good as you possibly can. But don’t tell anybody what you’re doing. Don’t show it or recite it to anybody, not even your girlfriend or parents or whatever, or Ms. Lockwood. OK? Tear it up into teeny-weeny pieces, and discard them... You will find that you have already been gloriously rewarded for your poem. You have experienced becoming, learned a lot more about what’s inside you, and you have made your soul grow.


Interestingly, that sounds like something that would work better for introverts. But either way, you've got to find some work to pay the bills, and that means providing something that other people perceive as valuable.


Doesn't the machine pay the bills in the future? Isn't that why we're building them?


Humans can't even agree on what it's best to create.

Try asking ChatGPT if you should become vegan, for example.

AI is a mediocre mashup of millions of ideas. It needs a driving force telling it what to be and how to react.

"Then there will be AIs trained to create certain things" -- and that's the point. People's ideals go deep, and achieving them is time-consuming and labor intensive, even with the help of AI. Someone, many someones, will need to be there guiding it every step of the way, through robotics, through politics, through religion. And that's a kind of programming.


I think you're completely wrong in almost every way possible.


> Now, imagine that AIs start replacing more and more of what I do, then I'll have more and more time to think of what I want built and not have to spend so much time on building it. Sounds brilliant to me.

Now imagine a person who just spent four years studying CS who's looking for a job after graduating college. This is what they're competing with.


I am close to that old in age, but I've been a software developer for only about 6 years.

My perspective on using LLMs for coding is that they make my job more enjoyable. I can quickly create good-enough boilerplate code to get me started, I can quickly create dummy data objects to test things on the frontend, and when I am stuck I can check some ideas or even get a correct code suggestion to unstick me.

It makes the mental load of coding much lighter by helping with the trivial stuff, without removing the joy of figuring out the hard stuff.

I am old, but coding with AI seems a much easier hill to climb, as it comes with an escalator.


There's a more fundamental issue at play here than AI replacing human skills and craft and the consequent aimlessness it can invoke.

When we build software, we have a number of diverse experiences. We get 'into the zone', our daily worries fade away, and we fall into a sweet mental harmony. We feel the joy of architecting and building castles in the sky; we bask in the carefully honed simplicity and balance of the system we have created. This all takes place within ourselves and needs no observer. But there is another force at play: we look forward to the admiration of other people's gazes on our work. This admiration is rooted in their recognition of special skills that few can demonstrate. Now, though, AI has diminished the uniqueness of that ability. The product of our work is no longer an admirable thing in the eyes of our imagined audience. If this was your main goal, then motivation drains away.


I see what you're saying, and to me it seems the most relevant aspect of the transition. This has happened before: vehicles go faster than people or horses, and machines play chess better than any human, but we still have foot races and chess is as popular as ever. People will always do things and compete and play; it just won't be for work or to pay for the things we need. The road there may be very bumpy, but once there, we will have a new currency for recognition and appreciation.

When it comes to art, machines won't replace humans, because communicating the human condition is at the core of art. Some things categorized as art, e.g. blockbuster movies, may be entirely machine-made and enjoyed for their novelty and spectacle, and that's fine too.


I'm 10 years older than the author, and I spent a lot of my career wishing that I was 10 years younger -- mostly because of the lack of available resources in the CS and electronics fields when I was in my 20s. IMHO, the most significant contribution of the Internet to humanity has been the increased availability of information to all. I envy my children because they grew up with access to incredible resources that weren't available to me.

On the other hand, I was blessed to experience first-hand, incredible advances in technology. During my career, I've witnessed a roughly 10^3 improvement in speed, storage capacity, and signal processing algorithm efficiency.


I think what has happened is that you have come to terms with your own mortality. It isn't something which comes early in life, or even at all for some people. As time progresses a bit more you will realise it changes nothing, because nothing has changed! There are things we will never see, or experience, or do. That's life.


There is more to the article than this; this is only one part of it.


Yes


I think of this as a question of what it means to be human and what craft we enjoy. Does knowing that chess engines can beat most of us take away the fun of playing chess with another person? Similarly, does knowing that a factory can mass-produce my furniture take away the pleasure of making a chair with my hands? There will be a place for both, and there will be new kinds of chairs to be made with AI.

Hitherto we have always thought of software as one-for-many; the work of adopting software is left to humans.

Maybe we will shift more of our skills to higher-level skills: how do we make more enjoyable products, rather than how do we learn the nuances of programming languages and software paradigms? And we may evolve software and technical paradigms around these.

I doubt this is the age of the death of building. We will just invent and build new things.

Generative media by AI, while dystopian, can also make us think about what media really means: will there be more value in in-person media rather than digital media?

Will it also mean we have a larger spectrum of artists who can now create - not just books and blogs but multi-media experiences that are immersive worlds/moments?


Did photography replace drawing? Of course not. Did instagram kill photography? No, but it made photography as a career choice a much more complicated decision.

It will probably be similar for AI and programming.


The problem is not pleasure or the meaning of life. It is production, competition, and economics. It always has been, throughout the past centuries.

> Maybe we will shift more of our skills to higher-level skills

Exactly; that's what happened with the second industrial revolution: in Europe and the USA, it took the form of tertiary work blooming into the largest employer. But we are facing the limits (or consequences, as we prefer to call them) of such growth: limits in energy, materials, and the environment. We'll face the same limits with AI.


Does mass production take away your pleasure in making a jet engine or smartphone by hand? A pleasure which never existed; you couldn't do it by hand if you wanted to. Maybe we're maxing out what teams of human-scale coders can build, and when factory-made code becomes a thing it will be so far ahead, so much bigger, that we will be left with the equivalent of the option of building a chair (yesteryear and basic) but not a bike or car or plane or anything modern.


> Does knowing that chess engines can beat most of us take away the fun of playing chess with another person?

It did for me: I tried it a few times, but now that my phone can beat me every time, it feels like a total waste of time. It's personal, but I definitely don't enjoy it anymore, neither against a human nor against a computer.


I just don't see it.

I haven't been able to faithfully replace any of my work flow with AI tools.

The need to fact check them always ends up taking more time than just doing it by hand.


This is my experience.

Another aspect is that I don't want software development to be too fast; the result is better if there's some friction and time to mull over optimisations in architecture and overall design.


    I wrote assembler 68000 during the Amiga demo scene days
I did that too.

And I couldn't be more enthusiastic about the current AI developments.

I have been dreaming about an AI system which reads websites and predicts the next word ever since I got my hands on the web, and trying to predict whether that could lead to strong AI.

I even built a somewhat related experiment as my first public web project: a system which asks users for their 3 favorite bands and then predicts that person's next favorite band. After talking to a lot of users, it turned out quite useful. That made me even more convinced that strong self-learning AI would become a thing.
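
Something in the spirit of a co-occurrence counter captures the idea (a toy sketch with invented data, not the actual implementation):

    from collections import Counter

    # Favorite-band sets collected from earlier users (made-up data).
    past_users = [
        {"Kraftwerk", "Depeche Mode", "New Order", "Front 242"},
        {"Kraftwerk", "New Order", "The Cure"},
        {"Depeche Mode", "The Cure", "New Order"},
    ]

    def predict(favorites):
        # Recommend the band that most often co-occurs with the user's picks.
        counts = Counter()
        for other in past_users:
            if favorites & other:  # any overlap in taste
                counts.update(other - favorites)
        return counts.most_common(1)[0][0]

    print(predict({"Kraftwerk", "Depeche Mode", "New Order"}))  # The Cure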

During the dotcom time, I was too inexperienced to fully grasp what was going on. I was doing my own thing, not enjoying the full picture as much as I could have.

But now, with the advent of LLMs, I enjoy experiencing this shift in technology so much! I'm watching this explosion from day 0, with a much broader horizon than last time.

I actually don't think that tinkering with LLMs themselves is the way to create the next Google.

Yahoo was putting content directly on the web, which seemed like the obvious way to create "the" web company: a media company made for this new medium.

But the much bigger business model turned out to be to just leverage all the existing content.

My feeling is that this time, it will be to leverage the existing LLMs and APIs out there and tie them together into something useful.

So the Google of the AI era might turn out to be a good old Python program that makes and answers http requests.
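
As a toy sketch of that idea (the LLM endpoint URL and JSON shape below are hypothetical placeholders, not a real API), a plain Python program that answers HTTP requests by making HTTP requests might look like:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib import request

    LLM_ENDPOINT = "https://llm.example/v1/complete"  # hypothetical API

    def ask_llm(prompt):
        # Forward the prompt to an upstream LLM over HTTP.
        body = json.dumps({"prompt": prompt}).encode()
        req = request.Request(LLM_ENDPOINT, data=body,
                              headers={"Content-Type": "application/json"})
        with request.urlopen(req) as resp:
            return json.load(resp).get("text", "")

    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read the user's question, answer it via the LLM.
            length = int(self.headers.get("Content-Length", 0))
            prompt = self.rfile.read(length).decode()
            answer = ask_llm(prompt).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(answer)

    HTTPServer(("", 8000), Handler).serve_forever()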


I also get a bit of the same feeling (only I think I still have the energy to climb that hill). But I feel really worried for the junior devs who haven't had the time to build their skills yet. They're going to be the first batch of jobs rendered obsolete by AI. Only harder problems will be left to solve, and only seniors will be able to do it. Until even those problems get solved by AI, of course.


What I find exceptionally frustrating is that whenever there is talk about artists, illustrators, and writers being replaced, the response from programmers is almost a universal "automation is good, everything will be cheaper, they weren't really creative anyway."

It's a psychedelic ride, watching humans construct AI. Unlike other automations, it automates faster and evolves faster than anything before, making it much harder for society to adapt. I often wonder if we've crossed the threshold of a "speed limit of adaptation".

Even if we haven't, it's an important question to ask, and yet no one, especially not the technophiles, is asking it except in private or in very superficial ways. Personally, I think there's an underlying fear to the whole thing: everyone fears that AI replacement may be total, so everyone is trying to get ahead of the curve so that they become the one doing the replacing, instead of the one being replaced.


I believe we crossed the speed limit a few thousand years ago. Human evolution is so much slower than our technology has been since the agricultural revolution.


It is not the comparison of human evolution and technology that I was making, but rather a comparison between societal evolution and technology.


I don't know what the future will look like, but for now LLMs just help me do the job quicker and get unstuck in the case of issues like "how do I write that regex". I believe developer performance will increase with these tools, but we will still be needed to properly get all the nuances of requirements and implementation.


I wrote CGI scripts in bash before it was hipster cool.

s/Old/Lazy/ because these are different properties.

There's no specific numeric "old"; it's all about health and fitness for "age", and adaptation for tech relevancy. What is a shame is when engineers get complacent. That can happen at any age.

AI is somewhat hyped, like all things that came before, but deep learning and LLMs are useful approximate-automation effort multipliers and nothing more. The offerings are not (yet) instant mastery nor (yet) a replacement for human experts. Long term, it's important to understand where and how to use AI as a legitimate assistant, but not as a substitute for learning, supervising, or thinking.


1) everything that’s already in the world when you’re born is just normal;

2) anything that gets invented between then and before you turn thirty is incredibly exciting and creative and with any luck you can make a career out of it;

3) anything that gets invented after you’re thirty is against the natural order of things and the beginning of the end of civilisation as we know it until it’s been around for about ten years when it gradually turns out to be alright really.

-- Douglas Adams ( https://douglasadams.com/dna/19990901-00-a.html )


Having started programming in my 30s, and having passed my half-life, I get the sentiment.

Even so, the current wave of AI makes me pay attention, but somehow does not faze me. If this is in fact a fire-like revolution in the means of production, what work looks like will change. And if work wants AI-made software, well we can have at it.

For now, I don't buy it. Maybe I'm too young, maybe I'm too wrong [1]. In any case, I know as long as computers are accessible to me, I will program for joy.

[1] https://www.evalapply.org/posts/halting-ai/


I am the same age, and I am less optimistic about the advancement of AI to that extent. It's a junior helper that doesn't get tired at all, and at some point in the future it may be more than that, but we are still far from it eradicating my job. I will probably be dead by then.

Whether or not it happens in my lifetime, I will always enjoy writing (Z80 and other) code/games.


The moment AI can fully replace programmers, the world is going to change so much that it’s probably not worth thinking about. Gather ye rosebuds until we unleash the rosebud optimising bot army.


I am not that old; I started coding in the AMD K5 days, and I have been coding between 8 and 16 hours a day for 25 years now, and I really love it. I think we have not even scratched the surface of what computing is.

We need so many programs, and so many systems; sometimes I get overwhelmed by the amount of code that has to be written and the amount of hardware that has to be made. And yet for the last 20 years, most of tech's brain budget was spent on digital heroin and surveillance.

We still don't have cheap self-cleaning public toilets (people literally shit themselves on the street in 2024), or AI mouse-killer robots (it costs $500-1000 to catch a mouse with an exterminator if you don't use inhumane traps), not to mention nanobots, biotech, cheap personal teachers, or climate-change/energy technology.

If you don't think you are too old, like the OP does, then I say buckle up and start building for the future you want. You can start working in any field, or dig into any subject. You don't understand control theory or deep neural nets? Ask the AI, watch some lectures, ask it again and again.

We are on an inflection point.


This should have been a tweet. Buy his consulting services.


This is a poorly written article that is used as a springboard just so HN users can lament that the good old days of tech have passed.


AI is a lever, a force multiplier... in a way it's like the transition from coding in machine language to assembly to higher-level languages.


It is also a force multiplier in terms of extracting wealth and concentrating it into the hands of an elite few. Every technological force multiplier is also a societal effect multiplier.


Or it becomes the best peer-study agent for young humans and raises their IQ high enough for them to begin handling the problem of extreme wealth concentration.


I think it's well established that IQ is not correlated with empathy, and those with the highest IQ are likely to take the most wealth. I mean, look at where the smartest people are now! Wisdom does not come with intelligence, and in fact the opposite may be true.

And if this discussion proves anything, it proves that we need to be cautious rather than flippant. We need to stop AI development, instead of waiting to see what happens.


We're going to make really damn good paintbrushes, not automate the creation of the art that people want.


Wow. So opposite to me.

I'm nearly 60, and have been in tech 37 years. I wake up even more enthused every day.


I feel old, but maybe it's because I'm dumb. I'm much younger than the author, and I haven't done anything with AI except some Copilot. In my mind, remote workers in lower-cost markets, or younger people, or non-disabled people are much bigger threats to my job.

I don't think we'll do away with devs until AGI is at least as good as humans. You have to understand the data and the problems, including asking questions about missing or incorrect requirements. Maybe we'll get there in a few more decades, but maybe not.


The AI fever feels a lot to me like all the other fevers, most recently the one around Bitcoin, whose proponents claimed all sorts of things: it was going to be worth millions, all stores would switch to bitcoin and the population would conduct business in bitcoin, it would take over the finance world and replace fiat currency, etc. etc. etc.

We had a few years of that insanity where, if you somehow dared to commit blasphemy and say otherwise, even something mundane like "yeah, the tech is cool and has some good uses, but I don't see any of your bitcoin singularity actually happening", you'd get downvoted to oblivion no matter where you were. Now, years later, when their singularity never happened, all those downvoters have somehow become quiet and disappeared, and we don't hear from them anymore.

It feels exactly the same with "AI" to me. Especially reading some of the comments in here... jeez. Yeah, it's cool tech and has lots of good uses, but do I think it's somehow going to stumble into inventing strong AI, a concept we don't even have a definition for, and start taking all our thinking jobs and bring about an AI singularity? No, no I do not.

I wonder if, in a few years, all the people writing in here about how they fear they're about to lose their jobs and how AI is going to take over will also fall silent and disappear when it doesn't happen, the same as the bitcoiners did. They'll just move on to the next thing, I guess.


This is going to sound darkly humorous, but one of the reasons we haven't been able to make much use of AI is that the natural-language content itself is nearly universally incorrect, incoherent, or just plain gibberish. It's true even of published documentation; it is full of garbage.

The garbage that gets tossed back in the AI team's face has a very similar flavor to what I saw when I first started doing doc analytics with older techniques (topic modelling, embeddings, ontologies, clustering, some XQuery/jQuery/Mongo crap, good ol' regex). We saw things like single-digit-percent commonality between configurations that were, on paper (and in the logistics system!), identical; landing gear classified as a fuel-system component; whole assemblies without fasteners.

What happened with that great revelation back in the day?

Absolutely nothing. There was no funding to fix any of it, because dollars and charge codes came from new projects. The directive remained: just shut your mouth and keep churning out XML for the latest gee-whiz kludge sold to some sorry USMC procurement department head. No one reads the damn thing, no one reviews it, and it'll take an officer dying because of your product before anyone notices.

The absolutely terrifying takeaway I have from this is that some bright spark upstairs is going to realize they can generate garbage all day long much more cheaply with generative AI, rather than taking a biiiiiiig step back and considering whether publishing garbage was a good idea in the first place. I'm going to put money on "Nope!", which I guess is the place to make money right now.


AI writes great code for people who memorized PIE while gaming a FAANG interview.

For those of us who aspire to cognition over memorization, I hope we can find a smarter pair programmer than AI.

Fortunately, I've paired with a few, all of whom were vastly smarter than I'll ever be.

Your aspiration may arrive in the form of a hybrid that takes shape by mixing:

* a page from the DSL playbook: small pieces, loosely joined by organizational context,

* a page from the "code as comment" playbook where the best DSL is a real programming language that signifies not metaprogramming, but pseudocode, or the lack of comment

* a page from Knuth's literate programming book

I'm not sure what comes out the other end of the machine when you put those ideas together, but it's likely beyond AI and Python: perhaps Lua written by Knuth.

What's really missing is a shorthand for the volley of human pair programming, a gateway drug for human-computer sensemaking collaboration that lets us say "volley: what you've got is good, but great would be more concise". Essentially a tiny language of bidirectional interaction, where most AI conversations go awry given their need for verbosity: exactly the thing wrong with PIE's dominance of the FAANG interview's emergent industry, which is becoming TurboTax's more subtle sister.

I'm not sure if there's an HCI lab studying pair programming, but I live only a few miles from one of the best.

Take Szasz's second sin and rewrite it for software.


> This hill is for others to climb. Perhaps it is their first hill.

> This feeling is new to me. I was eager for every challenge and every change in our industry.

You should not stop at this initial analysis. There is likely something deeper here: AI is a fundamentally different sort of innovation compared to other innovations like object-oriented programming. And we can see this on multiple levels:

Level 1, The Personal: AI is the first innovation to target creative tasks directly, thereby taking away moments of discovery, of figuring things out for yourself. It takes away the moment where all you've got is some basic documentation and you're trying to figure out how things work.

Level 2, Business and societal: AI was invented in a very different time than the transition from assembly to C, etc. It was invented in a time of global capitalism, where the goal is to push wealth and resource extraction to its logical extreme. The motivations of those who invented it are those of people with a God complex who want to replace all human labour with as much technology as possible. It is more business-motivated than human- or curiosity-motivated.

In short, AI is not a good thing. In this "intermediate" stage, programmers are giddy with AI because it is a fascinating tool no doubt, but the endgame is not to foster a greater environment of sharing. The endgame is to get programmers to make it so good that communities of people doing things together are destroyed completely for the sake of total control of society by big tech companies.

You might think you are old... but I am in my 30s, and I see the exact same thing. AI is the perfect example of the law of diminishing returns that marks the intersection of two curves: the descending curve of human creativity and the ascending curve of technology as a tool for total subjugation and destruction of the community you once loved so much.


Don't get discouraged too soon. There's still a long way to go until we reach 'models do everything and make everyone irrelevant'.

Yes, there will eventually be just one application which can 'do everything', but somebody has to build that.

Humans must still review the generated code and figure out the 'business logic', e.g. what they (the humans) want from the app, even though the code typing will be largely automated.

Also, for AI to develop to those levels, society and the economy as a whole need to remain relatively stable, which is not guaranteed given challenges like political instability, the possibility of war, the climate...

But yeah, things are changing fast for us devs (and many other professions out there). AI makes solving problems easier and cheaper, so demand for problem solvers will inevitably go down. On the bright side, those same people who are no longer in demand also have access to these tools, so they can use them to create their own thing with less effort.

As for getting old, yeah, I think about this more than I used to.

I dedicate a big amount of time to staying in shape and doing physical activities, going into nature, using my body.

I feel like I've wasted decades of beauty while typing all that code and not being in nature.





