This is a pretty cynical take, but I would think that having AI management would be highly undesirable for companies, and not because it would be bad at managing.
Even in good, reputable companies, there is a certain amount of legally/ethically dubious behavior that is nonetheless desirable.
An H-1B candidate for a position has been found, but the company must demonstrate that there is no local candidate for it. Every local candidate must therefore fail the interview, whether or not that is fair.
You have a small team. You've hired someone good at their job, but over lunch, they've mentioned they plan to have 10 children, so they will be on parental and FMLA leave for 3+ months a year indefinitely. You need to find a problem with this person's performance.
You have a team of developers. One of them has done a great job this past year, but the project they are working on and their specialization is no longer needed. It would not be fair to them to give them a middling performance review, but it's in the company's interest that the limited compensation budget goes towards retaining someone with skills aligned to the future direction.
An AI manager, by contrast, would have any unethical or illegal prompting exposed for a court to examine. Likewise, there would be little reason not to maintain a complete record of everything the management AI is told or does. One could design an AI that leadership talks to off the record, so that its instructions exist only in its internal state; it could then lie about (or simply be unable to prove) what it was told. That would make it similar to a human manager.
But I doubt any court would accept such an off-the-record, lying AI. So an AI probably can't keep secrets, can't lie for the company's benefit in depositions or in court, and can't take the fall for leadership.
You know… all the things you mention are actually bad. I want them to stop, for the sake of our society. If the price for that is getting rid of human managers with a broken moral compass such as yours, I’m all for it.
You think AIs can't be trained to lie? Odd, given that a major research area right now is preventing AI from lying. They already lie so confidently that nobody can tell.
They’re probably the only ones it makes sense to keep on. You have a couple of grunts code-reviewing the equivalent of ten devs' worth of work from AI, and a manager to keep them going.
If they're replacing all of their staff with AI, why do they need so many middle managers to manage staff that no longer exist at the company?
It is often claimed that AI 'will replace middle managers', though it would be more accurate to say that middle managers would be made redundant, given the lack of people left to 'manage'.
Because they have a lower say-do ratio than the employees below them. There's a sign or exponent error somewhere in the reward system of modern societies.
avidiax|1 year ago
9dev|1 year ago
rstuart4133|1 year ago
dyauspitr|1 year ago
misswaterfairy|1 year ago
numpad0|1 year ago