It honestly borders on psychopathic the way engineers are treating humans in this context.
People who talk like this also like to think, in the back of their minds, that they'll be OK. They're smart enough to still be needed. They're human, but they'll be OK even while working to make genAI outperform them at their own work.
I wonder how they'll feel about their own hubris when they struggle to feed their family.
The US can barely make healthcare work without disgusting consequences for the sick. I wonder what mass unemployment looks like.
For the moment the displacement is asymmetrical: AI is replacing employees, but not consumers. If AI causes mass unemployment, the pool of consumers (and with it, companies' profits) will shrink. I wonder what the ripple effects of that will be.
It honestly borders on midwit to constantly introduce a false dichotomy of AI vs humans. It's just stupid base animal logic.
There is absolutely no reason a programmer should expect to write code as they do now forever, just as ASM experts had to move on. And there's no reason (no precedent and no indicators) to expect that a well-educated, even-moderately-experienced technologist will suddenly find themselves without a way to feed their family - unless they stubbornly refuse to reskill or change their workflows.
I do believe the days of "everyone makes 100k+" are nearly over, and that we're headed towards a severely bimodal distribution, but I don't see why, for the next 10-15 years at least, we can't all stay productive building the tools that will obviate our own jobs while we still have them - and reach a comfortable retirement in the meantime.
I don't see it. Don't you have a 401k or EU style pension? Aren't you saving some money? If not, why are you in software? I don't make as much as I thought I might, but I make enough to consider the possibility of surviving a career change.
Even if one refuses to move on from software dev to something like AI deployer, AI validator, or AI steerer, there might still be a need for them.
If innovation ceases, then AI is king - push existing knowledge into your dataset, train, and exploit.
If innovation continues, there's always a gap. It takes time for a new thing to be made public "enough" for it to be ingested and synthesized. Who does this? Who finds the new knowledge?
Who creates the direction and asks the questions? Who determines what to build in the first place? Who synthesizes the daily experience of everyone around them to decide what tool needs to exist to make our lives easier? Maybe I'm grasping at straws here, but the world in which all scientific discovery, synthesis, direction and vision setting, etc, is determined by AI seems really far away when we talk about code generation and symbolic math manipulation.
These tools are self-driving cars, and we're the drivers of the software fleet. We need to embrace the fact that we might end up watching 10 cars operate themselves rather than driving one car, or maybe we're just setting destinations, but there simply isn't an absolutist zero-sum game here unless all one thinks about is keeping the car on the road.
AND even if there were, repeating doom and feeling helpless is the last thing you want. Maybe it's not strictly true that we can all adapt and should try, but it's certainly good policy.
> Maybe it's not strictly true that we can all adapt and should try, but it's certainly good policy.
Are you a politician? That's fantastic neoliberal policy, "alternativlos" (without alternative) even; you can pretend that everybody can adapt, the same way you told the victims of your globalization policies to "learn to code". We still need at least a few people for this "direction and vision setting", so it would just be naive doomerism to feel pessimistic about AGI. General intelligence has nothing to say about jobs in general, what an absurd idea!
Making people feel hopeless is the last thing you want, especially when it's true, especially if you don't want them to fight for the dignity you will otherwise deny them once they become economically unviable human beings.
I think you jumped way past the information I shared. I don't think it's productive to lament; I think it's productive to find a way to change or take advantage of changes rather than fighting them. And that has nothing to do with globalization or economics or whatever; I'm thinking only about my own career.
I'm not sure I understand the point about learning. But wouldn't any job that is largely text-based be at increased risk? I don't think software development will be anywhere near the last occupation to be severely impacted by AI.