This is a pretty cynical take, but I would think that having AI management would be highly undesirable for companies, and not because it would be bad at managing.
Even in good, reputable companies, there is a certain amount of legally/ethically dubious behavior that is nonetheless desirable.
An H-1B candidate for a position has been found, but it must be demonstrated that there is no local candidate for that position. Every local candidate must fail the interview, whether or not that is fair.
You have a small team. You've hired someone good at their job, but over lunch, they've mentioned they plan to have 10 children, so they will be on parental and FMLA leave for 3+ months a year indefinitely. You need to find a problem with this person's performance.
You have a team of developers. One of them has done a great job this past year, but the project they are working on and their specialization are no longer needed. It would not be fair to give them a middling performance review, but it's in the company's interest that the limited compensation budget goes towards retaining someone with skills aligned to the future direction.
An AI would have any unethical or illegal prompting exposed for any court to examine. Likewise, there would be little reason not to maintain a complete record of everything the management AI is told or does. One could design an AI that leadership talks to off the record, which then manifests its instructions in its state and could later lie about (or be unable to prove) what those instructions were. That would make it similar to a human manager.
But I don't think any court would accept such an off-the-record, lying AI. So an AI probably can't keep any secrets, can't lie for the company's benefit in depositions or court, and can't take the fall for leadership.
You know… all the things you mention are actually bad. I want them to stop, for the sake of our society. If the price for that is getting rid of human managers with a broken moral compass such as yours, I’m all for it.
Here's the thing. You assert confidently that GP is acting on a "broken moral compass". But you can also make the case that it is moral to act in the interest of the company: after all, if the company fails, a potentially large number of people are at risk of losing their household income (and, in broken economic systems, also things like health insurance).
That's just the slippery slope of neoliberalism. The ends do not justify the means, no matter how you spin them: a company will not fail because it continues to employ parents of many children, hires a local candidate, or writes fair performance reviews regardless of strategic goals. If any of these cases appears to lead to the downfall of a corporation, there were much bigger problems looming that actually caused the failure in the first place.
A company is literally a group of people working towards the same goal. The people are just as important as the goal itself; it's not a company otherwise.
Why are you switching between corporations and companies as if they're the same?
I actually do know of a small company that was quite badly screwed over by a vindictive employee who hated her boss. She deliberately did not quit because she knew she was about to have another child; she got pregnant and then disappeared for a year. Local law makes her unfireable almost regardless of reason (including not actually doing any work), and then gives her three months' maternity leave too. So she basically just didn't work for a year. She said specifically that she did it to get back at her boss; she didn't care about the company or its employees at all.
For a company of that size, something like that can mean serious financial jeopardy: it's now responsible for paying a year's salary to someone who isn't there, and it can't hire a replacement because the law also guarantees the job is still there after the maternity leave ends.
> If any of these cases appears to lead to the downfall of a corporation, there were much bigger problems looming that actually caused the failure in the first place.
This kind of thinking has caused ruin throughout history. Companies, regardless of size, aren't actually piñatas you can whack for unlimited cash. Every such law pushes a few more small companies over the edge every year, and then not only does everyone who depends on that company lose, but the company never gets the chance to grow into a big corporation at all.
Where did this happen? Typically the government covers some or all of the parental leave costs where it is mandated, and while a company can't fire her, it is allowed to hire someone to do the job in the meantime with the money it would have paid her. It's obviously not ideal, but it's hard to imagine it screwing the company over all THAT badly.
No, it's desirable for them to become profitable and successful again, especially if the only reason they're unprofitable is people abusing the rules to extract capital from them unsustainably.
Sure they do. Unions, abuse of other worker-rights laws, and voting in socialist parties that raise corporate tax rates to unsustainable levels are all exactly that, and they have a long history of extracting so much that companies, or even entire economies, fail. Argentina over the past 100 years is an extreme example, but obviously there are many others.
You think AIs can't be trained to lie? Odd, given that a major research area right now is preventing AI from lying. They already do it so confidently that nobody can tell.
I don't think that an AI would be interrogated in court.
I think it would be hard to hide all the inputs and outputs of the AI from scrutiny by the court, and the company or senior leadership would then be held accountable for them.
Even if you had a retention policy that destroyed the inputs and outputs, the AI itself would be made available to the plaintiff, and the company would be asked to provide inputs that reproduce the AI's observed actions. If it can't do that without telling the AI to do illegal things, that would probably result in a negative finding.
----
Having thought a bit more, I think the model that we'd actually see in practice at first is that the AI assists management with certain tasks, and the tasks themselves are not morally charged.
So the manager might ask the AI to produce performance reviews for all employees, based on observable performance metrics, and additionally, for each employee, to come up with a rationale both for promoting them and for dismissing them.
The morally dubious choices are then made by a human, who reviews the AI's output and keeps or discards it as the situation requires.
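To make that concrete, here's a minimal sketch of what that assist might look like, assuming the OpenAI Python client; the model name, metric fields, and prompt wording are all made up for illustration:

    # Illustrative sketch only: assumes the openai package (v1.x)
    # and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    def draft_review(name, metrics):
        # Ask for a review grounded only in observable metrics, plus
        # one rationale in each direction; a human decides what to keep.
        prompt = (
            f"Write a performance review for {name} based only on these "
            f"metrics: {metrics}. Then give one rationale for promoting "
            f"them and one rationale for dismissing them."
        )
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Hypothetical metrics; in practice these would come from an HR system.
    print(draft_review("Alice", {"prs_merged": 142, "oncall_pages_resolved": 31}))

Nothing in that pipeline is incriminating on its own; the selective use of the output is where the human judgment, and the liability, comes back in.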
They're probably the only ones it makes sense to keep on. You have a couple of grunts code-reviewing the equivalent of 10 devs' worth of output from the AI, and a manager to keep them going.
If they're replacing all of their staff with AI, why do they need so many middle managers to manage staff that no longer exist at the company?
It's often said that AI 'will replace middle managers', though it's more likely that middle managers would simply be made redundant, given the lack of people left to 'manage'.
Because they have a lower say-do ratio than the employees below them. There's a sign or exponent error somewhere in the reward system of modern societies.