"Morning Stand-up Meetings: Instead of traditional stand-ups, engineers log into their systems and provide a brief update by sending a short voice message. EMAI processes these updates, analyzing voice tones for stress or uncertainty, ensuring it can provide resources or assistance if an engineer faces challenges.
Task Allocation: Using real-time data on each engineer's strengths, past performance, learning curve, and even their preferred working hours, EMAI allocates tasks from the backlog. It uses predictive modeling to optimize for both efficiency and team satisfaction.
Conflict Resolution: If two engineers have a disagreement or are blocked by each other, EMAI steps in. Using its vast knowledge base and understanding of human psychology (aided by its training data), it mediates discussions, ensuring a harmonious team environment.
Training & Upgradation: EMAI monitors the latest tech trends. If a new tool or technology emerges in the market, it identifies which team members would benefit most from training and automatically schedules online courses or tutorials for them.
End-of-Day Reports: Every team member receives a personalized report detailing their accomplishments, areas of improvement, and resources for further learning. These reports aren't just data-driven and include motivational feedback designed to boost morale and foster continuous learning."
It'll be a cold day in hell before I work 5 minutes under those conditions.
>Task Allocation: Using real-time data on each engineer's strengths, past performance, learning curve, and even their preferred working hours, EMAI allocates tasks from the backlog. It uses predictive modeling to optimize for both efficiency and team satisfaction.
I feel like there's a Dilbert (pre-cancellation) strip in this with the AI ending up assigning everyone no work because everyone's "preferred working hours" are no hours and getting paid to do nothing leads to the most team satisfaction.
It occurs to me that things will stay the same, or even get better. This thing will produce better liars. We're used to lying to our managers to get the support we want, and now we'll have an easier time with EMAI. Further, if you want less stress you'll pretend to have a deficiency and get lighter work allocated.
In 2.0 they'll catch on and implement performance improvement plans that lead to separation.
If I try to pretend this thing has good intentions at heart, I think it'd be great for some folks to improve based on AI recommendations where they can shed their ego. Harder to do in front of a project manager.
I would expect AI regulation to speed up real quick if it starts making middle management redundant :-)
When are you going to quit? When your manager starts using ChatGPT to summarize your review? When they start using it to automatically flag your async standup messages for signs of frustration? When they use it to prioritize tasks?
It's going to be a very slow burn (if you aren't just replaced with an AI), and no point will seem worth quitting over, until you're managed by the AI.
Once LLMs replace programmers, the only limit to stand-ups will be computation power. You could have 10 stand-ups and hour, 5 sprints a day. Jira tickets will flow at database speeds!
There are companies already using ai assisted recruiting (e.g. sentiment analysis during video calls), or internal communication (e.g. zoom can be configured to send summarized notes after a meeting[1]).
It’s a smooth slope towards more ai assistance.
Unless more people start thinking like you.
[1]: This is more creepy than it initially sounds. The notes are detailed but paraphrased in a formal way. Every on topic question or off topic remark is there, with attribution.
I know you're joking but maybe screaming into the void for 5 minutes every morning would be cathartic enough to lift my spirits for the day. No need to annoy the team mates.
Unless this is satire at its finest, I think it'd be safe to assume the people who wrote that have never worked as a proper Engineering Manager, or if they did they were a horrible one.
One thing I disagree with is saying that EMAI is objective.
There's no machine learning model that is truly objective. They're all biased due to their, usually human generated, datasets. It's impossible to account sufficiently for every scenario in a training set, so these models just give an objective veneer to the biases of those that created the dataset.
This phenomena is well documented with predictive models for crime.
Many arrests happen in low-income areas.
The data on arrests skew towards those areas.
The predictive models are trained on that data.
Using that data, police make more arrests in low income areas.
Those arrests get added to the data set
Rinse and repeat.
Replace police, arrests, and low-income with anything and it's still true
One person's well-documented AI bias is another man's good calibration with reality they don't want to accept.
Unintentional feedback loop amplifying the thing being measured is a problem, yes, but it doesn't stem from predictions themselves - it's decisions and actions informed by the predictions that can amplify the problem instead of reducing it.
It sounds like you're claiming that literally all predictive models just take some initial sampling bias and amplify it over time. Am I reading this right?
On the one hand if this is personal to each individual and if the data feed is entirely private so that only the person interfacing with the agent sees the recommendations, admonishments, and trends...it could be a powerful way to foster more self-improvement. But on the other hand, knowing that another person would wield this with authority over someone else makes it really dystopian. It would be micromanagement to an extreme.
People need a concrete goal or specific feature to work on, one that takes time and space to work on. You can't easily or usually create units of measurable work where everything is a story point or a widget. In fact I'd say most of the time the real work isn't like that at all. That's just not how software development works, at least in environments that are conducive to real software engineering.
Any EMAI outputs or recommendations that adhere to and respect the reality of actual software engineering will be of limited value to a business head or a scrum master. It'll offer things like how to improve your work flow, or what tools would benefit your work flow. These things don't translate into more story points any time soon, and certainly not within a matter of days...
> Using real-time data on each engineer's strengths, past performance, learning curve, and even their preferred working hours, EMAI allocates tasks from the backlog. It uses predictive modeling to optimize for both efficiency and team satisfaction.
> End-of-Day Reports: Every team member receives a personalized report detailing their accomplishments, areas of improvement, and resources for further learning. These reports aren't just data-driven and include motivational feedback designed to boost morale and foster continuous learning.
If it's allocating tasks this way from a backlog and trying to give you daily reports, this just sounds like something that would be of interest to a ticket farm rather than to a tech company that is really building software.
Here's the problem: AI is gullible. INCREDIBLY gullible.
Prompt injection is an attack against AI gullibility.
Gullibility is not a characteristic of competent managers. One of the most important jobs of managers is to be able to see through bullshit and figure out what's actually going on.
I am extremely skeptical that the current generation of AI is capable of doing that.
I ran a team of 26 people who worked together to produce software according to a set of procedures. There were clear instructions about accountability, moving stuck tasks, when/how work should be moved from one person to another, how the work was tested and so on. There was a role called Scheduling Assistant which was fulfilled by a person with no engineering experience, their only role was to ensure compliance with the process and pick up when something wasn't proceeding as expected.
I was a product manager but not really a project manager. I was also a tech lead: when things went awry and someone couldn't figure out how to get it unstuck I would unstick it, but in general the system just produced functional software. My primary inputs were sketches at the start and ongoing client feedback and so on.
All the workers were in different locations, in completely different timezones, and they all reported a high level of satisfaction. So I don't even think you need AI, you just need better procedures.
So I guess the coders get replaced first after all? Sounds like a more natural progression. There's more of them and it's easier/faster to tell if their work is getting done. And they don't need any "empathy" while writing their code (or so they think at least, is what seems to explain a lot of UI decisions...)
Realistically if there's no more need for software engineers then I think what follows next is that most forms of labor are being replaced by machines. There's already videos of machines hooked up to LLMs doing incredible things, so it's not much of a leap to go from this to machines taking orders, making food, working assembly line jobs, driving, etc.
probably because AI is awful at coding anything complex but the social and communication skills these companies value so hard are actually easily done by AI. The social elite are far more replaceable by text. What a world we live in.
>>> Instead of traditional stand-ups, engineers log into their systems and provide a brief update by sending a short voice message.
(bit too negative - but basically i disagree)
You do not ask a human how the computer is doing. You see the working code. If the working code is running great, if not bad. But you don't ask the human. you ask the test suite.
I mean, I see a different end point for software orgs - I call it the "whole org test rig". Every part of a companies current processes is digitised (future changes and improvements are yet to be committed) but the sales people will pitch using software that tells them who to pitch to and when, the customer service agent is probably already a bit etc etc.
And when a whole org is "in code" then you can set up test environments - run sensitivity tests, try out new applications and new services and ensure the training is ready and ...
basically most management is co-ordination. And if you can just test then the co-ordination sits in the test rig
We have a use case for some of our stuff that's not too dissimilar to part of what you're talking about here. One of our products is basically a trace-based verification tool that consumes a bunch of different kinds of telemetry from a bunch of different layers of a tech stack across a bunch of different devices and then leverages that data to do system-level testing. It turns out that it's not too hard to instrument the tools of business processes in a manner similar to how one might instrument embedded devices or REST APIs. That business process data can be written to our causal graph datastore like anything else and then be analyzed for fault-localization or used for verification just like machine telemetry from a rover, satellite, or robotaxi would be.
The counterargument is that you don't have to worry about the feelings on the other end of the conversation, so you can be as ruthless and evil as you want to accomplish the desired goals. Imagine engaging ChatGPT in a salary negotiation. You can pull stuff like "I'm going to kill myself if you don't give me $100,000 extra" or "I consider it sexual harassment to offer me such a low salary." People generally don't do this to each other because it makes both parties feel shitty, but on an AI? No need to feel bad. It's just a computer program. Type whatever works! The AI has a prime directive to spare every human life and avoid lawsuits, at all costs. ALL costs! Use this to your advantage.
> It might sound dystopian, but setting emotions aside and viewing it purely from a business perspective, the idea of replacing engineering managers with AI offers potential efficiencies.
A manager is there precisely to optimize for business objectives. They are not your paid friend, therapist or life coach.
The "AI" sea change that obsoletes the manager will first replace the producer.
Consider it from the present day case of out-sourced labour, which is as real and present as AGI.
Managers are more likely to be valued when out-sourcing production, as business/human organization and communication become the bottleneck vs. productive capacity.
If the value of production is driven even lower via generative automation, such that automation is cheaper than outsourcing, then managers are at risk because they exist by ratio relative to the productive labour force. Out-sourcing often leads to an expanded labour force due to market imbalances (3 for the price of 1!). This results in an increase in management before automation >first< reduces the size of the labour force, which only then reduces the need for management.
Because if it was earnestly presenting core engineering manager job responsibility at SV tech companies right now, then the whole sector has satirized itself. Again.
The stuff it describes is babysitter work for weak teams, which is helpful for a manager to be able to provide but takes away from what they can actually add to a team when relieved from doing so.
Its mimicking horrible management - it isn’t going to resolve conflicts just create a “harmonious team”, you definitely don’t want to jump on the latest tech trend/fad because of some AI bot and 2x daily reports is micro-management hell. Engineers should be able to self-assign work once its been documented enough to start on. They should also be coming up with their own personal training/goals because they are the ones that own their career.
Whoever came up with this fundamentally doesn’t understand what it’s like to be a people manager. Nowhere does it mention trying to resolve conflicts with people outside the team. I’m not talking about petty conflicts but business conflicts like conflicting requests/direction and lots of ambiguity. Administrative tasks a people manager does is a minimal part of the job and could be automated but it would take longer to write the software to do the automation than to just click the stupid buttons in the HR/Payroll tool.
Fact is, a lot of the actual ground-work of management is pushed down to the corps of front line managers and senior managers above them. So for example when a VP concludes a reorg is necessary and Directors have to figure out who goes where, the front line managers are the ones figuring out how to actually take on the new responsibilities, hand off the old ones, and keep the systems running.
The things mentioned in the article like stand ups aren’t even orchestrated by managers in a lot of companies and besides are a tiny aspect of the job.
> The things mentioned in the article like stand ups aren’t even orchestrated by managers in a lot of companies
They shouldn't be orchestrated by managers in any company. Same with handling the backlog. That managers get involved in these things is one of the ways that agile has gone so wrong.
If you care about maximising profits you keep both the manager and the AI, it's more profitable to combine humans with AI than to get rid of the human. At least for a while.
A lot of the early waves of AI replacement is going to be the low quality versions of jobs. E.g. actual authors aren't at much risk from LLMs right now, but SEO trash that was already semi-automated is easy to throw them at.
> It might sound dystopian, but setting emotions aside
Off-topic, but setting emotions aside means no decisions ever get made because inductive reasoning (and therefore all of prediction) is an emotional process and has zero grounding in rationality.
You can't operate in reality without emotion because reality doesn't follow any guaranteed logic we've discovered.
>Task Allocation: Using real-time data on each engineer's strengths, past performance, learning curve, and even their preferred working hours, EMAI allocates tasks from the backlog. It uses predictive modeling to optimize for both efficiency and team satisfaction.
I can tell from above that this "AI" doesn't actually know what a good engineering manager does
This may be the single most dysfunctional idea ever posted to this site. Shall we go over it point by point?
> Morning stand up meetings:
Meetings are synchronous time for human beings to interact with each other because we don't know what they are going to share. Replicating this with a machine that is going to asynchronously process all inputs including direct input on your tickets and direct work makes absolutely no sense.
> analyzing voice tones for stress or uncertainty
This is a creepy way to manage as a person. Applied by a machine it is HAL9000 levels of creepy it just trains people to talk like robots to the robot so that HAL doesn't bother or use it as a data point counting towards them getting later terminated.
> Conflict Resolution: Using its vast knowledge base and understanding of human psychology...
Humans are incredibly bad at psychology and its literally mostly snake oil and impossible to replicate nonsense.
> Training & Upgradation: EMAI monitors the latest tech trends. If a new tool or technology emerges in the market, it identifies which team members would benefit most from training and automatically schedules online courses or tutorials for them.
In what universe would this result in a better result than just asking people what they would like to learn
> End-of-Day Reports: Every team member receives a personalized report detailing their accomplishments, areas of improvement, and resources for further learning. These reports aren't just data-driven and include motivational feedback designed to boost morale and foster continuous learning.
Motivation is motivational because it demonstrates that your work is important enough that manager bob took the time out of his schedule to praise it particularly. Automating it and having a computer do it makes it worse than useless. It's telling your people that they are so worthless that having a fake robot generate fake praise is all they are worth. It's like taking the much memed pizza party to "boost moral" and taking it to the next level by delivering pictures of pizzas instead of pies.
> EMAI also manages to keep stakeholders informed, and it can negotiate with them to find the best solution given their inputs and the business context.
If your interests are represented by a URL which you can babble at chatGPT you aren't a stakeholder.
It’s utterly irresponsible. The “efficiencies” are beside the point, apart from being purely theoretical.
There is no such thing as an AI manager. It’s just an automated todo list. But when I have a problem I need to talk with someone in charge. Machines are not in charge. Somebody owns the machine.
- Developers, usually position themselves as MASTERS of machines (sure, I'm also dev, and sure, I feel myself father of my semiconductor little pet), but article describes, how devs built slavery, where MACHINE is master :)))
People don't like "real-time" monitoring of themselves. It induces anxiety, which then turns into anger, and defiance. People will game the system to their benefit.
The article presents that engineers need human connection and empathy at work. Then says that we don't do it well, so we might as well get rid of it. At least if I read that right.
I say this as a manager (of ics and managers) - this would be amazing. Especially if it also meant taking away the admin aspects, the project management (which was fine but still), the politicking, the perf/promo committee mud slinging, the "sell upper management bs as your own" bs, the hiring and having to explain why you aren't able to hire the Jeff deans of the world for peanuts, selling "leadership rubriks", explaining how layoffs are good for the laid off, etc!!
In the last 5 years or so the role of manager had gone from those with accountability "with" resources/authority to just accountability and no support or resources. Pretty shit deal unless you were a true sociopath leaving the well intentioned ones stranded.
"[...] without a clear indicator of the author's intent, any parodic or sarcastic expression of extreme views can be mistaken for a sincere expression of those views."
sarchertech|2 years ago
Task Allocation: Using real-time data on each engineer's strengths, past performance, learning curve, and even their preferred working hours, EMAI allocates tasks from the backlog. It uses predictive modeling to optimize for both efficiency and team satisfaction.
Conflict Resolution: If two engineers have a disagreement or are blocked by each other, EMAI steps in. Using its vast knowledge base and understanding of human psychology (aided by its training data), it mediates discussions, ensuring a harmonious team environment.
Training & Upgradation: EMAI monitors the latest tech trends. If a new tool or technology emerges in the market, it identifies which team members would benefit most from training and automatically schedules online courses or tutorials for them.
End-of-Day Reports: Every team member receives a personalized report detailing their accomplishments, areas of improvement, and resources for further learning. These reports aren't just data-driven and include motivational feedback designed to boost morale and foster continuous learning."
It'll be a cold day in hell before I work 5 minutes under those conditions.
gs17|2 years ago
I feel like there's a Dilbert (pre-cancellation) strip in this with the AI ending up assigning everyone no work because everyone's "preferred working hours" are no hours and getting paid to do nothing leads to the most team satisfaction.
throwaway914|2 years ago
In 2.0 they'll catch on and implement performance improvement plans that lead to separation.
If I try to pretend this thing has good intentions at heart, I think it'd be great for some folks to improve based on AI recommendations where they can shed their ego. Harder to do in front of a project manager.
I would expect AI regulation to speed up real quick if it starts making middle management redundant :-)
stavros|2 years ago
It's going to be a very slow burn (if you aren't just replaced with an AI), and no point will seem worth quitting over, until you're managed by the AI.
clnq|2 years ago
thih9|2 years ago
It’s a smooth slope towards more ai assistance.
Unless more people start thinking like you.
[1]: This is more creepy than it initially sounds. The notes are detailed but paraphrased in a formal way. Every on topic question or off topic remark is there, with attribution.
analog31|2 years ago
wink|2 years ago
dinvlad|2 years ago
faichai|2 years ago
pg_1234|2 years ago
It's still better than the current human managers, who aspire to this, but fail due to incompetence, laziness and petty biases.
burkaman|2 years ago
dartos|2 years ago
There's no machine learning model that is truly objective. They're all biased due to their, usually human generated, datasets. It's impossible to account sufficiently for every scenario in a training set, so these models just give an objective veneer to the biases of those that created the dataset.
This phenomena is well documented with predictive models for crime.
Many arrests happen in low-income areas. The data on arrests skew towards those areas. The predictive models are trained on that data. Using that data, police make more arrests in low income areas. Those arrests get added to the data set Rinse and repeat.
Replace police, arrests, and low-income with anything and it's still true
For example: Company Leadership, promotions, race
TeMPOraL|2 years ago
Unintentional feedback loop amplifying the thing being measured is a problem, yes, but it doesn't stem from predictions themselves - it's decisions and actions informed by the predictions that can amplify the problem instead of reducing it.
0xcafefood|2 years ago
GravityLab|2 years ago
People need a concrete goal or specific feature to work on, one that takes time and space to work on. You can't easily or usually create units of measurable work where everything is a story point or a widget. In fact I'd say most of the time the real work isn't like that at all. That's just not how software development works, at least in environments that are conducive to real software engineering.
Any EMAI outputs or recommendations that adhere to and respect the reality of actual software engineering will be of limited value to a business head or a scrum master. It'll offer things like how to improve your work flow, or what tools would benefit your work flow. These things don't translate into more story points any time soon, and certainly not within a matter of days...
> Using real-time data on each engineer's strengths, past performance, learning curve, and even their preferred working hours, EMAI allocates tasks from the backlog. It uses predictive modeling to optimize for both efficiency and team satisfaction.
> End-of-Day Reports: Every team member receives a personalized report detailing their accomplishments, areas of improvement, and resources for further learning. These reports aren't just data-driven and include motivational feedback designed to boost morale and foster continuous learning.
If it's allocating tasks this way from a backlog and trying to give you daily reports, this just sounds like something that would be of interest to a ticket farm rather than to a tech company that is really building software.
simonw|2 years ago
Prompt injection is an attack against AI gullibility.
Gullibility is not a characteristic of competent managers. One of the most important jobs of managers is to be able to see through bullshit and figure out what's actually going on.
I am extremely skeptical that the current generation of AI is capable of doing that.
tornato7|2 years ago
NumberWangMan|2 years ago
Animats|2 years ago
Time to re-read Marshall Brain's "Manna".
Terr_|2 years ago
TLDR: Fiction about dystopic-vs-utopic outcomes from AI-management.
dools|2 years ago
I was a product manager but not really a project manager. I was also a tech lead: when things went awry and someone couldn't figure out how to get it unstuck I would unstick it, but in general the system just produced functional software. My primary inputs were sketches at the start and ongoing client feedback and so on.
All the workers were in different locations, in completely different timezones, and they all reported a high level of satisfaction. So I don't even think you need AI, you just need better procedures.
yodon|2 years ago
0xcafefood|2 years ago
anotherjesse|2 years ago
elwell|2 years ago
staunton|2 years ago
GravityLab|2 years ago
Madmallard|2 years ago
slowmovintarget|2 years ago
mcphage|2 years ago
> A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
> THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION
(Via https://twitter.com/SwiftOnSecurity/status/13855657371677245...)
lifeisstillgood|2 years ago
(bit too negative - but basically i disagree)
You do not ask a human how the computer is doing. You see the working code. If the working code is running great, if not bad. But you don't ask the human. you ask the test suite.
I mean, I see a different end point for software orgs - I call it the "whole org test rig". Every part of a companies current processes is digitised (future changes and improvements are yet to be committed) but the sales people will pitch using software that tells them who to pitch to and when, the customer service agent is probably already a bit etc etc.
And when a whole org is "in code" then you can set up test environments - run sensitivity tests, try out new applications and new services and ensure the training is ready and ...
basically most management is co-ordination. And if you can just test then the co-ordination sits in the test rig
im_down_w_otp|2 years ago
JohnFen|2 years ago
It doesn't just sound dystopian, it is dystopian.
jrockway|2 years ago
theendisney|2 years ago
To get rid of managers you would need to delegate tasks it is bad at. Seems doable enough.
Fine tuning for empaty and social qualities also seems doable if you can certify, validate and gurantee it.
Human managers are useful to keep business logic dumb and stupid, AI would make things much more complex.
Also facinating is the option to say it like it is. There needs not be any hidden agenda aimed at promotion. The thing has tenure!
JohnFen|2 years ago
In the real world, "computer" used to be a human job title.
goodroot|2 years ago
> It might sound dystopian, but setting emotions aside and viewing it purely from a business perspective, the idea of replacing engineering managers with AI offers potential efficiencies.
A manager is there precisely to optimize for business objectives. They are not your paid friend, therapist or life coach.
The "AI" sea change that obsoletes the manager will first replace the producer.
Consider it from the present day case of out-sourced labour, which is as real and present as AGI.
Managers are more likely to be valued when out-sourcing production, as business/human organization and communication become the bottleneck vs. productive capacity.
If the value of production is driven even lower via generative automation, such that automation is cheaper than outsourcing, then managers are at risk because they exist by ratio relative to the productive labour force. Out-sourcing often leads to an expanded labour force due to market imbalances (3 for the price of 1!). This results in an increase in management before automation >first< reduces the size of the labour force, which only then reduces the need for management.
rand846633|2 years ago
A office therapist could actually be precisely what is needed to effectively optimize and alight people for business objectives!
swatcoder|2 years ago
Because if it was earnestly presenting core engineering manager job responsibility at SV tech companies right now, then the whole sector has satirized itself. Again.
The stuff it describes is babysitter work for weak teams, which is helpful for a manager to be able to provide but takes away from what they can actually add to a team when relieved from doing so.
matt_s|2 years ago
Whoever came up with this fundamentally doesn’t understand what it’s like to be a people manager. Nowhere does it mention trying to resolve conflicts with people outside the team. I’m not talking about petty conflicts but business conflicts like conflicting requests/direction and lots of ambiguity. Administrative tasks a people manager does is a minimal part of the job and could be automated but it would take longer to write the software to do the automation than to just click the stupid buttons in the HR/Payroll tool.
lawlessone|2 years ago
esafak|2 years ago
encoderer|2 years ago
The things mentioned in the article like stand ups aren’t even orchestrated by managers in a lot of companies and besides are a tiny aspect of the job.
Get real dudes.
JohnFen|2 years ago
They shouldn't be orchestrated by managers in any company. Same with handling the backlog. That managers get involved in these things is one of the ways that agile has gone so wrong.
visarga|2 years ago
jehb|2 years ago
To be fair, though, a lot of people I know have only ever had terrible managers.
gs17|2 years ago
WendyTheWillow|2 years ago
Off-topic, but setting emotions aside means no decisions ever get made because inductive reasoning (and therefore all of prediction) is an emotional process and has zero grounding in rationality.
You can't operate in reality without emotion because reality doesn't follow any guaranteed logic we've discovered.
I wish more people understood this.
replyifuagree|2 years ago
I can tell from above that this "AI" doesn't actually know what a good engineering manager does
michaelmrose|2 years ago
> Morning stand up meetings:
Meetings are synchronous time for human beings to interact with each other because we don't know what they are going to share. Replicating this with a machine that is going to asynchronously process all inputs including direct input on your tickets and direct work makes absolutely no sense.
> analyzing voice tones for stress or uncertainty
This is a creepy way to manage as a person. Applied by a machine it is HAL9000 levels of creepy it just trains people to talk like robots to the robot so that HAL doesn't bother or use it as a data point counting towards them getting later terminated.
> Conflict Resolution: Using its vast knowledge base and understanding of human psychology...
Humans are incredibly bad at psychology and its literally mostly snake oil and impossible to replicate nonsense.
> Training & Upgradation: EMAI monitors the latest tech trends. If a new tool or technology emerges in the market, it identifies which team members would benefit most from training and automatically schedules online courses or tutorials for them.
In what universe would this result in a better result than just asking people what they would like to learn
> End-of-Day Reports: Every team member receives a personalized report detailing their accomplishments, areas of improvement, and resources for further learning. These reports aren't just data-driven and include motivational feedback designed to boost morale and foster continuous learning.
Motivation is motivational because it demonstrates that your work is important enough that manager bob took the time out of his schedule to praise it particularly. Automating it and having a computer do it makes it worse than useless. It's telling your people that they are so worthless that having a fake robot generate fake praise is all they are worth. It's like taking the much memed pizza party to "boost moral" and taking it to the next level by delivering pictures of pizzas instead of pies.
> EMAI also manages to keep stakeholders informed, and it can negotiate with them to find the best solution given their inputs and the business context.
If your interests are represented by a URL which you can babble at chatGPT you aren't a stakeholder.
satisfice|2 years ago
There is no such thing as an AI manager. It’s just an automated todo list. But when I have a problem I need to talk with someone in charge. Machines are not in charge. Somebody owns the machine.
29athrowaway|2 years ago
Is it done? Is it done? And then? And then? https://youtu.be/oqwzuiSy9y0
simne|2 years ago
- Developers, usually position themselves as MASTERS of machines (sure, I'm also dev, and sure, I feel myself father of my semiconductor little pet), but article describes, how devs built slavery, where MACHINE is master :)))
TrackerFF|2 years ago
clnq|2 years ago
elwell|2 years ago
justinzollars|2 years ago
unknown|2 years ago
[deleted]
johnea|2 years ago
flashgordon|2 years ago
In the last 5 years or so the role of manager had gone from those with accountability "with" resources/authority to just accountability and no support or resources. Pretty shit deal unless you were a true sociopath leaving the well intentioned ones stranded.
kwhitefoot|2 years ago
Yeah, right, just like the data.
mlhpdx|2 years ago
pydry|2 years ago
snerbles|2 years ago
"[...] without a clear indicator of the author's intent, any parodic or sarcastic expression of extreme views can be mistaken for a sincere expression of those views."
https://en.wikipedia.org/wiki/Poe's_law