This is a nice way of putting it. It just shows how people have a tendency to deny a reality that may replace everything they stand for.
What you will see as AI continues to improve along the obvious trendline is that these "denial" arguments become more and more specific. First AI will "never" replace jobs; then it will "never" replace specific jobs involving "programming skills" or "skills related to what I do"; until eventually, when the reality of it all is too all-encompassing, the arguments will evolve into attacks on AI for being "low quality" or something along those lines.
But if AI progresses to the point where the quality of its output becomes undeniably superior to human output, the "denial" arguments will inevitably shift to the REAL argument. The heart of it all. What is the purpose of being alive if AI is doing everything? Should we ban it for the sake of the economy, for the sake of purpose?
These arguments, of course, only occur under the assumption that the technology will progress so quickly that the repercussions hit everyone like a freight train. If the progression slows down enough, then there won't be much opposition. Only acceptance, as it slowly assimilates into our society without people noticing the change.
If, in your heart, you are one of these people who has no worry about AI taking over jobs because AI is simply too "stupid", perhaps consider that my description above fits you. Are your arguments evolving along a similar trendline? If so, consider shifting your perspective a bit.
Comments like this always read as "we can already extrapolate that everything we do will be done better by a machine soon". Pushing back against this argument isn't just ignorance or avoidance of change. It just asks the relevant question of whether we can be so sure that AI does everything "better". But how dare we challenge the hubris of tech bros, right?
Naw there are many possible futures. There is nothing saying the trendline is absolute. However...
The future predicted by extrapolating a trendline is, unfortunately, more probable and more realistic than a future predicted out of mistrust of AI.
Artists have already filed lawsuits against the companies that own LLMs; the one in the article already involves an artist complaining about his job being more or less replaced.
You have to be next level delusional not to consider the extrapolation to programming.
Oh, I do believe programming as a profession is at risk and will change a lot, if not rendered obsolete. What I'm talking about is this idea of "just get used to the fact that there is no human skill that won't be replicable by AI in 2-10 years". It's a very bleak view of the future and our own biological complexity. We need to remember that we are the ones inventing the AI in the first place. We are limited by our imperfect ability to understand ourselves. It will get better, sure, there will be emergent properties, but there's no need to reject the inherent value of humanity even if it happens to produce less economically viable output.
Not everything will be replaced, but you can extrapolate that much of what we do will be replaced.
The thing that is harder to replace is the versatility of the human form. Manual labor can't fully be replaced because robotics have yet to catch up.
>there's no need to reject the inherent value of humanity
There's no fundamental rejection here. Capitalism simply selects the most efficient methodology. If humans aren't the most efficient methodology for a given task, then capitalism eliminates that methodology. That's the logical extrapolation. Your subjective opinion of humanity's worth is irrelevant to the most likely outcome.
Capitalism and its value system is subjective, too. It's not set in stone. I believe we can still steer away from profit as the sole driver of, well, everything, if we want to.
Historically speaking, from crypto to AI, the market constantly evolves towards the next most profitable thing.
Only regulatory systems like the government have a tendency to temper such things (see the Fed and rising interest rates). However, do note that capitalist entities have infiltrated the government and hold huge sway over its regulatory policies, meaning that anti-business regulatory policies are unlikely to occur.
All of this just means that my conclusions are most likely going to play out. Barring some event that will cause intense negative public reaction.
> However, do note that capitalist entities have infiltrated the government and hold huge sway over its regulatory policies, meaning that anti-business regulatory policies are unlikely to occur.
Maybe this is the case in the US, but the EU is known for being much stricter in its regulations, which is often ridiculed by the rest of the world. Those same people are going to hope for regulations once we see the effects of the current Wild West that is AI.
Agreed. The EU is ridiculed but it's also one of the happiest places to live. The relationship between the US and EU is almost parasitic with the EU simply feeding off most of the business innovation coming from the US.
I don't think the world needs this much constant innovation. Additionally, the innovation itself can be disruptive. The US will doggedly pursue profits even if those profits involve technology that can cause the US to eat itself. If AI replaces all jobs and nobody has any money to buy stuff, who will the companies sell shit to?