My management chain has recently mandated the use of AI during day-to-day work, but also went the extra step to mandate that it make us more productive, too. Come annual review time, we need to write down all the ways AI made our work better. That positive outcome is presupposed: there doesn't seem to be any affordance for the case where AI actually makes your work worse or slower. I guess we're supposed to ignore those cases and only mention the times it worked.
It's kind of a mirror image of the global AI marketing hype-factory: Always pump/promote the ways it works well, and ignore/downplay when it works poorly.
Just ask an AI to write how it made you more productive in daily work. It's really good at that. You can pad it out to a million words by asking it to expand on each section with subsections.
I was in a lovely meeting where a senior "leader" was looking at effort estimates and said "Do these factor in AI-tools? Seems like it should be at least 30% lower if it did."
Like I use AI tools, I even like using them, but saying "this tool is so good it will cut our dev time by 30%" should be coming from the developers themselves or their direct manager. Otherwise they are just making figures up and forcing them onto their teams.
« AI has made me productive by writing most of the answer to this question. You may ignore everything after this sentence, it is auto-generated purely from the question, without any intersection with reality. »
> My management chain has recently mandated the use of AI during day-to-day work, but also went the extra step to mandate that it make us more productive, too. Come annual review time, we need to write down all the ways AI made our work better.
Bloody hell. That feels like getting into borderline religious territory.
Fascinating example of corporate double-speak here!
> My management chain has recently mandated the use of AI during day-to-day work, but also went the extra step to mandate that it make us more productive, too.
Now they're on record as pro-AI while the zeitgeist is all about it, but simultaneously also having plausible deniability if the whole AI thing crumbles to ashes: "we only said to use it if it helped productivity!"
This must be how conspiracy theorists feel. How could a whole class of people (the professional managerial class) all decide at once that AI was a wonderful tool we all must adopt now, and that it's going to make all of us more productive, and be 100% certain about it? It boggles the mind. I'm sure it's just social contagion, hype, and profit motive, but it definitely feels like a conspiracy sometimes.
It’s kind of a good way to make your business collapse, though, because figuring out the kinds of problems where LLMs are useful and where they’ll destroy your productivity is extremely important.
I wonder how much this has to do with the LinkedIn world where everyone is making "I made us 100% more efficient last week with AI!" type stuff.
I'm not normally on LinkedIn but recently was and with the AI stuff the "look at me" spam around AI seems like an order of magnitude more absurd than usual.
I suspect a lot of companies that go that route are pushing a marketing effort since they themselves have a stake in AI.
But I'd love to hear from truly customer-only businesses, where AI is pure cost with no upside unless it truly pays for itself in business impact. Are they, too, stuck in a loop of justifying their added cost to make their decision seem like a good one no matter what, or are they being more careful?
Look, for most corporate jobs, there's honestly no way that you truly cannot find any kind or level of usage of AI tools to make you at least a bit more productive -- even if it's as simple as helping draft emails, cleaning up a couple lines of code here and there, writing a SQL query faster because you're rusty with it, learning a new framework or library faster than you would have otherwise, learning a new concept to work with a cross-functional peer, etc. It does not pass the smell test that you could find absolutely nothing for most corporate jobs. I'd hazard a guess that this attitude, which borders on outright refusal to engage in a good-faith manner, is what they're trying to combat or make unacceptable.
We call these workers “pilots,” as opposed to “passengers.” Pilots use gen AI 75% more often at work than passengers, and 95% more often outside of work.
Identify a real issue with the technology, then shift the blame to a made-up group of people who (supposedly) aren't trying hard enough to embrace the technology.
Embody a pilot mindset, with high agency and optimism
Ridiculous, I have it on good authority that embracing the 'hacker ethos' by becoming a 'coding ninja' with a 'wizard' mindset will propel you to next-level synergisms within transformative paradigms like AI and blockchain.
This isn't wrong though. There are obviously two types of people using AI: one is "explain to me how X works", and the other is "do X for me". Same pattern with every technology.
The AI use mandates are odd. My guess is that the C-level execs have very little practical technical skill at this point; they probably haven't written a line of code in 20 years. And they believe ALL the AI hype. They think LLMs can do anything, so any employees not using them are clearly wasting time.
Not odd under the theory that they're being done to buy wiggle room to reduce the workforce later on: they announce the firing and layoff of those who haven't made their forecasted numbers.
Today I had a discussion with a product manager who insists on attaching AI-generated prototypes to PRDs without any design sessions for exploration or refinement (I’m a UX designer). These prototypes contain many design issues that I must review and address each time.
Worse still, they look polished and create the illusion that the work is nearly complete. So instead of moving faster, we end up with more back and forth about the AI's misinterpretations.
My CEO sent an ai generated blog today. I've never felt more frustrated reading something in my life. "x happened, here's what it means", "groundbreaking", "game-changer", "significant", "forefront of a technological shift"
I refuse to read anything that seems to be obviously AI generated. If they can't be bothered to write down what they think then I don't have any reason to bother with reading what they've posted either.
I've never yet accepted an AI-written answer when responding to my emails, although I try it routinely. Mostly it just doesn't capture my style. But even when it does, there's some kind of essential spark missing.
I think a lot about the concept that the AI output is still 99% regression to a mean of some kind. In that sense, the part it can generate for you is all the boring stuff - what doesn't add value. And to be sure, if you're writing an email etc, a huge amount of that is boring filler, most of the time. But the part it specifically cannot do is the only part that matters - the original, creative part.
The filler was never important anyway. Physically typing text was never the barrier. It's finding time and space to have the creative thought necessary to put into the communication that is the challenge. And the AI really doesn't help at all with that.
My friend's job of late has basically become reviewing AI-generated slop that his non-technical boss is producing, slop that mostly seems to work, and proving why it's not production-ready.
Last week he was telling me about a PR he'd received. It should have been a simple additional CRUD endpoint, but instead it was a 2,000+ LOC rat's nest of hooks that manually manipulated their cache system to make it appear to work without actually working.
He spent most of his day explaining why this shouldn't be merged.
More and more, I think Brandolini's law applies directly to AI-generated code:
> The amount of [mental] energy needed to refute ~bullshit~ [AI slop] is an order of magnitude bigger than that needed to produce it.
He wants to build a website that will turn him into a bazillionaire.
He asks AI how to solve problem X.
AI provides direction, but he doesn't quite know how to ask the right questions.
Still, the AI manages to give him a 70% solution.
He will go to his grave before he learns enough programming to do the remaining 30% himself, or, understand the first 70%.
Delegating to AI isn't the same as delegating to a human. If you mistrust the human, you can find another one. If you mistrust the AI, there aren't many others to turn to, and each comes with an uncomfortable learning curve.
> The amount of [mental] energy needed to refute ~bullshit~ [AI slop] is an order of magnitude bigger than that needed to produce it
I see this in code reviews, where AI tools like CodeRabbit and Greptile are producing workslop in enormous quantities. It sucks up an enormous amount of human energy just to read the nicely formatted BS put out by these tools, all for the occasional nugget that turns out to be useful.
I largely agree. As a counterpoint, today I delivered a significant PR that was accepted easily by the lead dev with the following approach:
1. Create a branch and vibe code a solution until it works (I'm using codex cli)
2. Open new PR and slowly write the real PR myself using the vibe code as a reference, but cross referencing against existing code.
This involved a fair few concepts that were new to me, but had precedent in the existing code. Overall I think my solution was delivered faster and of at least the same quality as if I'd written it all by hand.
I think it's disrespectful to PR a solution you don't understand yourself. But this process feels similar to my previous non-AI-assisted approach, where I would often code spaghetti until the feature worked, and then start again and do it 'properly' once I knew the rough shape of the solution.
The problem with most corporate work that these managerial idiots want replaced with AI is that it is all so utterly useless. Reports written that no one will ever read, presentations made for the sake of the busy-ness of "making a deck", notes and minutes of meetings that should never have taken place in the first place. Summaries written by AI of longer-form work that are then shoved into AI to make sense of the AI-written summary.
I like the quote in the middle of the article: "creating a mentally lazy, slow-thinking society that will become wholly dependant [sic] upon outside forces". I believe that orgs that fall back on the AI lie, who insist on schlepping slop from one side to the other, will be devoured by orgs that see through the noise.
It's like code. The most bug-free code is the code that is never written. The most productive workplace is the one that never bothers with that BS in the first place. But promotions and titles and egos are on the line, so...
AI in its current form, like the swirling vortex of corporate bilge that people are forced to swim through day after day after day, can't die fast enough.
> Summaries written by AI of longer-form work that are then shoved into AI to make sense of the AI-written summary.
Also the problem where someone has bullet-points, they fluff them up in an LLM, send the prose, and then the receiver tries to use an LLM to summarize it back down to bullet-points.
I may be over-optimistic in predicting that eventually everyone involved will rage-flip the metaphorical table, and start demanding/sending the short version all the time, since there's no longer anything to be gained by prettying it up.
So many times you write a document for someone to review, only for them to question whether it could have been written better. Yet the reader will now just ask the AI for bullet points. I was hoping people would go: right, let's just start writing bullet points from the get-go and not bother with a full document.
Once you allow AI to replace the process, you kind of reveal that the process never mattered to you. If you want a faster pace at the expense of other things, you don't need to pay for AI; just drop the unnecessary process.
I feel AI is now just a weird excuse. It's like you're pretending you haven't lowered the quality, haven't stopped writing proper documents, professional emails, and full test suites, or properly reviewing each other's code. No, you still do all this, just not you personally; it's "automated".
It's like cutting corners but being able to pretend like the corner isn't cut, because AI still fully takes the corner :p
So true. We used to appoint someone in the group to take notes. These notes were always correct, to the point, short and easy to read.
Now our manager(s) are heavily experimenting with recording all meetings and desperately trying to produce useful reports using all sorts of AI tools. The output is always lengthy and makes the manager super happy. Look, amazing reports! But on closer inspection they're consistently incomplete one way or another, sometimes confidently incorrect and full of happy corpo mumbo jumbo. More slop to wade through, when looking for factual information later on.
Our manager is so happy to report that he's using AI for everything. Even in cases where I think completeness and correctness is important. I honestly think it's scary how quickly that desire for correctness is gone and replaced with "haha this is cool tech".
Us devs are much more reluctant. We don't want to fall behind, but in the end when it comes to correctness and accountability, we're the ones responsible. So I won't brainlessly dump my work into an LLM and take its word for granted.
The problem with corporate work is that it exists - that corporations exist.
You do have the option to spend your time elsewhere - if you can handle every NPC friend and family member thinking you've lost your mind when you quit that cushy corporate gig and go work a low status, low pay job in peace and quiet - something like a night time security guard.
I've spent 16 years and I won't exactly be cheering if we hit a wall with AI.
I love programming, but I also love building things. When I imagine what having an army of mid-level engineers, ones that genuinely only need high-level instruction to reliably complete tasks and don't require raising hundreds of millions while becoming beholden to some third party, would let me build... I get very excited.
The “workslop” idea captures something deeper than just bad AI output. It shows how the optimization trap rewards speed and polish even when substance is missing, which then widens the authenticity gap and erodes trust. It can be seen as part of a larger pattern called reality drift, alongside things like synthetic realness and filter fatigue.
For anyone curious, there’s a working paper circulating that goes into this in more detail:
https://figshare.com/articles/preprint/Workslop_and_the_Opti...
The article as I see it is just one paragraph that ends with "So much activity, so much enthusiasm, so little return. Why?" Is there more if you're a subscriber to Harvard Business Review?
My manager has Claude leave a two-page worthless comment on every single PR we submit, uses Copilot to write 20-line "summaries" of 5-line changesets, and once sent my coworker and me a Claude-generated document that was so unclear in its purpose that we both replied "what even is this", after which the manager just ghosted the Slack thread and pretended it never happened.
I won't trust any AI productivity research if the test subjects don't have access to unlimited tokens/GPUs. I've seen first-hand how much productivity you gain when you can burn $20 of tokens to save 30 minutes of human work, or the miracle that happens when you're burning several hundred dollars of tokens a day.
Even the OpenAI or Claude $200 plan doesn't give you enough tokens to make you truly productive. The true ROI measurement should be the token cost divided by (your hourly wage times the time you saved versus doing it by hand).
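[Editor's note: the ratio this commenter describes can be sketched as a quick back-of-envelope calculation. The function name and all figures below are illustrative, not taken from any real study.]

```python
def ai_cost_ratio(token_cost_usd: float, hourly_wage_usd: float, hours_saved: float) -> float:
    """Token spend relative to the value of the human time it saved.

    Below 1.0 the tokens paid for themselves; above 1.0, doing the
    work by hand would have been cheaper.
    """
    return token_cost_usd / (hourly_wage_usd * hours_saved)

# The commenter's example: $20 of tokens to save 30 minutes,
# assuming a $100/hr wage for illustration:
print(ai_cost_ratio(20.0, 100.0, 0.5))  # 0.4
```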
I'm dealing with this now. It is the bane of my existence.
Paragraphs of content spread out over pages and pages of nothing. Laziness at enterprise scale.
Deep research reports are even worse. They will cite AI-generated content in their work, which makes verifying their sources an O(n^2) task since I now have to find the sources _those_ sources cited to find the truth.
This generation of AI is the worst thing to happen to humanity since social media.
It'll be very funny if any AI productivity gains are balanced by productivity loss due to slop - all the while using massive amounts of electricity to achieve nothing.
I don’t think AI slop is inherently mandatory, but I worry that the narrative around AI will devalue engineering work enough that it becomes impossible to avoid.
Is it the “workslop” that is causing the problem, or the slop that companies demand and that passes for work in the first place? Really wanna summon the ghost of David Graeber (“Bullshit Jobs”) here: if you’re a manager who demands that your employees produce PowerPoints about the TPS reports, you probably shouldn’t be surprised when you get meaningless LLM argle-bargle in return.
The thing about companies asking for slop is that a middle manager maintaining the usual stream of vacuous text is a proxy for that person paying attention to a given set of problems and alerting others. AI becomes a problem because someone can now maintain a vacuous text stream without that attention.
The AI revolution 2022-2030 is a speed run of the IT revolution of 1970-2000. In other words, how to 1000x management overhead while reducing real productivity, meanwhile skyrocketing nominal productivity.
AI is functionally equivalent to disinformation as it automates the dark matter of communication/language, transfers the status back to the recipient, it teaches receivers that units contents are no longer valid in general and demands a tapeworm format to replace what is being trained on.
If you were around for the heyday of Markov chain email and Usenet spam, this whole thing is familiar. Sure, AI slop generation is not directly comparable to a Markov process, and the generated texts are infinitely smoother, yet it has a similar mental signature. I believe this similarity puts me squarely in the offended 22%.
mattgreenrocks|5 months ago
Do you see? They cannot be wrong.
gdulli|5 months ago
Before you make any decision, ask yourself: "Is this good for the company?"
pkaye|5 months ago
That is where the AI comes into full use.
obezyian|5 months ago
Everything sounded very mandatory, but a couple of months later nobody was asking about reports anymore.
romaniv|5 months ago
Thanks for the career advice.
jjk166|5 months ago
Fly away from here at high speed
vkou|5 months ago
If you're a low-level office drone, you are not a pilot.
meindnoch|5 months ago
This question applies whether it's written by an AI or not.
matheusmoreira|5 months ago
"Explain to me in detail exactly how and why this works, or I'm not merging."
This should suffice as a response to any code the developer did not actively think about before submitting, AI generated or not.
oblio|5 months ago
https://www.joelonsoftware.com/2000/04/06/things-you-should-... (read the bold text in the middle of the article)
These articles are 25 years old.
tonymet|5 months ago
New flow: please run the RCA through ChatGPT and forward it to your manager, who will run it through ChatGPT and send it to the customer.
RCA is now 10x longer, only 10% accurate, and took 3x longer to get to the customer.
queenkjuul|5 months ago
I'm looking for a new job.
joe_the_user|5 months ago
So it's likely to become an arms-race.
mensetmanusman|5 months ago
Now that AI makes my programming 10x more efficient, I will work 5x less, destroying “half of my” productivity.