As is evident from most, if not all, of the "AI-hiring platforms," it's not about solving a problem successfully, but about using the latest moniker/term/sticker to appear as if you solve the problem successfully.
In reality, neither the client nor the user base has access to the ground truth of these "AI systems" to determine their actual reliability and efficiency.
That's not to say there aren't some genuine ML/AGI companies like DeepMind (which solve specific narrow problems with quite high confidence), but most of the "AI" companies feel like they came from crypto and are now selling little more than vaporware in the AI gold rush.
I always find this to be a false dichotomy. I'm not sure what use cases are a good fit for generative AI models to tackle without human supervision. But there are clearly many tasks where the combination of generative AI with human direction is a big productivity boon.
"Making fewer mistakes" implies that there's a framework within which the agent operates and its performance can be quickly judged as correct or incorrect. But computers have already automated many tasks and roles in companies where that description applies. Competitive companies now remain capitalistically competitive not because they have stronger automation of boolean jobs, but because they're better configured to leverage human creativity in tasks and roles whose performance cannot be quickly judged as correct or incorrect.
Apple is the world's most valuable company, and many would attribute a strong part of its success to Jobs' legacy of high-quality decision-making. But anyone who has worked in a large company understands that there's no way Apple can so consistently produce its wide range of highly integrated, high-quality products from a top-down mandate by one person alone; especially a dead one. It takes thousands of people, the right people, given the right level of authority, making high-quality, high-creativity decisions. It also, obviously, takes the daily process, an awe-inspiring global supply chain, and automation systems, and these are areas where computers, and now AI, can have a high impact. But that automation is a commodity now. Samsung has access to that same automation, and they make fridges and TVs; so why aren't they worth almost four trillion dollars?
AI doesn't replace humans; like computers more generally before it, it brings the process cost of the inhuman things it can automate to zero. When that cost is zero, AI cannot be a differentiating factor between two businesses. The differentiating factors, instead, become the capital the businesses already have to deploy (favoring established players), and the humans who interact with the AI, interpreting and, when necessary, executing on its decisions.
Those business people still can't quantify what their skilled workers do for them, though, so they hastily conclude the AI is a suitable or even improved replacement.
“A computer can never be held accountable. Therefore, a computer must never make a management decision.”
There are lots of bullshit jobs that we could automate away, AI or no. This is far from a new problem. Our current "AI" solutions promise to do it cheaper, but detecting and dealing with "hallucinations" is turning out to be more expensive than anticipated and it's not at all clear to me that this will be the silver bullet that the likes of Sam Altman claims it will be.
Even if the AI solution makes fewer mistakes, the magnitude of those mistakes matters. The human might make transcription errors with patient data or other annoying but fixable clerical errors, while the AI may be perfect with transcription but make a completely sensible-sounding yet ultimately nonsense diagnosis, with dangerous consequences.
In 1953, IBM also thought that "there is a world market for maybe five computers," so I am not sure their management views are relevant this many decades later.
Philosophically, the point still stands. If you delegate your management decisions to a computer and someone dies, you can't put the computer in jail for murder. Ultimately a person must be responsible, and that means you can't fully automate the decision unless you have perfect trust in the machine.
It is only irrelevant to the degree to which companies have been able to skirt laws and literally get away with murder.
"If an artificial person can do a job and make fewer mistakes than a real person, why not?"
Is the question everyone in business is asking.