> Bob needs a new computer for his job.... In order to obtain a new work computer he has to create a 4 paragraph business case explaining why the new computer will improve his productivity.
> Bob’s manager receives 4 paragraphs of dense prose and realises from the first line that he’s going to have to read the whole thing carefully to work out what he’s being asked for and why. Instead, he copies the email into the LLM.... The 4 paragraphs are summarised as “The sender needs a new computer as his current one is old and slow and makes him unproductive.” The manager approves the request.
"LLM inflation" as a "bad" thing often reflects a "bad" system.
In the case described, the bad system is the expectation that one has to write, or is more likely to obtain a favorable result from writing, a 4-paragraph business case. Since Bob inflates his words to fill 4 paragraphs and the manager deflates them to summarise, it's clear that the 4-paragraph expectation/incentive is the "bad" thing here.
This phenomenon of assigning the cause of "bad" things to LLMs is pretty rife.
In fact, one could say that the LLM is optimizing within the system's requirements: it makes it a lot easier to get around this bad framework.
The 4-paragraph business case was useful for creating friction, which meant that if you couldn't be bothered to write 4 paragraphs you very likely didn't need the computer upgrade in the first place.
This might have been a genuinely useful system, something which broke down with the existence of LLMs.
I love this article for how it gets you thinking, and I love your response.
I've been aware of a similar dynamic in politics, where the collective action/intelligence of the internet destroyed all the old signals politicians used to rely on. Emails don't mean anything like letters used to mean. Even phone calls are automated now. Your words and experience matter more in a statistical, big-data sense rather than individually.
---
This puts me in sci-fi world-building mode, wondering what the absurd extension is... maybe it's just proving burned time investment. So maybe in an imagined world where LLMs are available to all as extensions of thought via neural implant, you can't be taken seriously for even the simplest direct statements unless you prove your mind sat and did nothing (aka wasted its time) for some arbitrary period. So if you sat in the corner and registered inactive boredom for 2h, and attached a non-renewable proof of that to your written words, then people would take your perspective seriously, because you expended (though not "gave") your limited attention/time on the request for some significant amount of time.
The important part is that GDP is now increased because of the cost of the energy and additional hardware needed to expand and then compress the original data. Think of the economic growth all these new hassles provide!
Engaging with why we might actually want inflation of text:
1) For pedagogical or explanatory purposes. For example, if I were to write:
> ∀x∈R,x^2≥0
I've used 10 characters to say
> For every real number x, its square is greater than or equal to zero
For a mathematician, the first is sufficient. For someone learning, the second might be better (and perhaps an expansion of 'real number', or of 'square' as 'multiplying it by itself').
2) To make sure everything is stated and explicit. "He finally did x" implies that something has been anticipated/worked on for a while, but "after a period of anticipation he did x" makes that clearer. This also raises the question of who was anticipating, which could be made explicit too.
As someone who spends a lot of time converting specifications to code (and explaining technical problems to non-technical people), unstated assumptions are very prevalent. And sometimes people have different conceptions of the unstated assumption (e.g. some people might think that nobody was anticipating; it just took longer than you'd otherwise expect).
So longer text might seem like a simple expansion, but then it ends up adding detail.
I definitely agree with the author's point; I just want to argue that having a text-expander tool isn't quite as useless as 'generate garbage for me'.
The older I get, the more concise I find myself (which is not to say I'm actually concise, as my comment history will demonstrate), but LLMs have really driven home just how much noise day-to-day communication involves. So much filler text.
It still surprises me when I see non-technical enthusiasts get excited about LLMs drafting almost useless copy or email or whatever. So much garbage text no one reads but that has to be written for some reason. It's weird.
"I wrote this mail slightly longer because I didn't have time to make it short" - someone famous
When writing something I want people to read, I always take time at the end to make it shorter - remove distracting sentences, unnecessary adjectives and other noise. Really works wonders for team communication.
I write guides for a living, and my audience is largely comprised of non-native speakers. I write simply and unambiguously, and I've been told multiple times that my style seeps through my blog posts, my comments and my text messages.
You are so right! LLMs produce so much noise. If you ask them to be concise, they struggle to cut just the fat, and the output is often vague or misleading. I see that again and again when I ask it to produce different versions of a sentence.
I imagine it's how artists feel about AI art. It seems right at first glance, but you can tell that no thought or craftsmanship went into it.
On one side you have people using LLMs to fluff a sentence into an essay. And on the receiving side they are hitting a button to AI-summarise it back to a sentence.

What incredible technology.
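The round trip is easy to caricature in code. A minimal sketch, where `llm()` is a hypothetical stand-in for whatever chat-completion call you have handy, not any real library's API:

```python
# Caricature of the inflate/deflate round trip described above.
# `llm` is a hypothetical helper; wire up the model of your choice.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real model here")

def inflate(point: str) -> str:
    # Sender's side: one sentence in, four paragraphs of polite prose out.
    return llm(f"Write a 4-paragraph business case arguing: {point}")

def deflate(email: str) -> str:
    # Receiver's side: four paragraphs in, one sentence out.
    return llm(f"Summarise this email in one sentence: {email}")

if __name__ == "__main__":
    point = "My computer is old and slow, and it makes me unproductive."
    # Two inference bills later, we recover an approximation of `point`.
    print(deflate(inflate(point)))
```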
An LLM is effectively a compressed model of its input data.
Inference is then the decompression stage where it generates text from the input prompt and the compressed model.
Now that compressing and decompressing texts is trivial with LLMs, we humans should focus - in business at least - on communicating only the core of what we want to say.
If the argument to get a new keyboard is: "i like it", then this should suffice, for inflated versions of this argument can be trivially generated.

What a horrible technology.
What I hate about this is that often a novel and interesting idea truly needs extra space to define and illustrate itself, and by virtue of its novelty LLMs will have substantially more difficulty summarizing it correctly. But it sounds like we are heading into a medium term where people cynically assume any long email must be LLM-generated fluff, and hence that nothing is lost by asking for an LLM summary.
> If the argument to get a new keyboard is: "i like it", then this should suffice
This seems like exactly what LLMs are supposed to be good at, according to you, so why don't they just near-losslessly compress the data first, and then train on that?
Also, if they're so good at this, then why are their answers often long-winded and require so much skimming to get what I want?
I'm skeptical LLMs are accurately described as "near lossless de/compression engines".
If you change the temperature settings, they can get quite creative.
They are their algorithm, run on their inputs, which can be roughly described as a form of compression, but it's unlike the main forms of compression we think of - and it at least appears to have emergent decompression properties we aren't used to.
If you up the lossiness on a JPEG, you don't really end up with creative outputs. Maybe you do by coincidence, and maybe that's all you get with LLMs too - just at much higher rates.
Whatever is happening does not seem to be what I think people typically associate with simple de/compression.
Theoretically, you can train an LLM on all of Physics, except a few things, and it could discover the missing pieces through reasoning.
Yeah, maybe a JPEG could, too, but the odds of that seem astronomically lower.
The inverse of this is "AI Loopidity" where we burn cycles inflating then deflating information (in emails, say, or in AI code that blows up then gets reduced or summarized). This often also leads to weird comms outcomes, like saving a jpg at 85% a dozen times.
Long documents in business contexts that get summarized and go mostly unread are the byproduct of a specific and common level of trust and accountability in those contexts: people don't believe someone has done enough critical thinking or has a strong enough justification for a proposal unless they've put it on the page, but if it is on the page, it's assumed that it does in fact represent critical thinking and legitimate justification.
If trust was higher, shorter documents would be more desirable. If trust was lower, or accountability higher, summarization would be used a lot more carefully.
LLMs haven't changed anything in this regard except that they've made it extremely easy to abuse trust at that specific level. The long-term result will be that trust will fall in the general case, and people will eventually become more careful about using summarization. I don't think it will be long before productized AI used in business contexts will be pretrained/fine-tuned to perform a basic level of AI content detection or include a qualitative measure of information density by default when performing summarization.
I consider inflation a double insult. (https://ruudvanasseldonk.com/2025/llm-interactions) It says "I couldn't be bothered to spend time writing this myself, but I'm expecting you to read all the fluff."
> That we are using LLMs for inflation should not be taken as a criticism of these wonderful tools. It might, however, make us consider why we find ourselves inflating content. At best we’re implicitly rewarding obfuscation and time wasting; at worst we’re allowing a lack of clear thinking to be covered up. I think we’ve all known this to be true, but LLMs allow us to see the full extent of this with our own eyes. Perhaps it will encourage us to change!
Yeah, this is the problem. Wealth distribution stopped working sometime in the late 20th century and we're fighting each other for competitive advantages. That's the core of this phenomenon.
No one needs containers full of baby-sized left shoes, but proof of work must be shown. So the leather must be cut and the shoes must be sewn, only to be left in the ever-growing pile in the backyard. That's kind of wrong.
Huh, this post is not what I thought it would be! Even after the first two paragraphs!
There's a line of thought which states that intelligence rhymes with compression: identifying patterns allows better prediction, which enables better compression of the data.
However, internally, LLMs typically do the opposite: Tokenization and vectorization multiply the bit rate of the input signal. Chain of thought techniques add a lot of extra text, further increasing the bit rate.
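To put rough numbers on that expansion (the vocabulary size, embedding width, and precision here are representative assumptions, not any particular model's):

```python
import math

vocab_size = 100_000  # assumed vocabulary size
d_model = 4_096       # assumed embedding width
bits_per_float = 16   # assumed fp16 activations

# A token id carries about log2(vocab) bits of information...
bits_per_token_id = math.log2(vocab_size)      # ~16.6 bits
# ...but is represented internally as a d_model-wide vector.
bits_per_embedding = d_model * bits_per_float  # 65,536 bits

print(f"token id:  ~{bits_per_token_id:.1f} bits")
print(f"embedding: {bits_per_embedding} bits "
      f"(~{bits_per_embedding / bits_per_token_id:,.0f}x the bit rate)")
```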
My PM said they’d written a bunch of tickets for a project yesterday morning that we hadn’t fully scoped yet. I was pleasantly surprised because I can’t complain if they are going to get ahead of things and start scaffolding tickets.
Of course, when I went to read them they were 100% slop. The funniest requirement was progress bars for actions that don’t have progress. The tickets were, even if you assume the requirements weren’t slop, at least 15 points apiece.
But OK, maybe with all of these new tools we can respond by implementing these insane requirements. The real problem is what this article is discussing. Each ticket was also 500-700 words. Requirements that boil down to a single if statement were described in prose. While this is hilarious, the problem is that it makes them harder to understand.
I tried to explain this and they just said “ok fine rewrite them then”. Which I did in maybe 15min because there wasn’t actually much to write.
At this point I’m at a loss for how to even work with people that are so convinced these things will save time because they look at the volume of the output.
Ask an LLM for a project plan and it’ll happily throw dates around for each step, when it can’t possibly know how long each will take.
But project plan dates have always been fiction. Getting there faster is an efficiency win.
That said, I’ve found that LLMs are good as interrogators. When I use one to guide a conversation and research background information, and then explicitly tell it to tersely outline the steps in something, I’ve had very good results.
The software requirements phase is becoming increasingly critical to the development lifecycle, and that trend will continue. I have started writing very short tickets and having Claude Code inflate them, then polishing the results. I often include negative prompts at this point: Claude may have included "add a progress bar for xyz", and I simply add "do not" in front of the things that do not make sense. The results have been excellent.
> At this point I’m at a loss for how to even work with people that are so convinced these things will save time because they look at the volume of the output.
The same way, presumably, that one used to work with people who would say things like "just look how much code this template system generates for us!" unironically.
The only acceptable response to obvious AI slop - unless it's clear it's been heavily reviewed and updated - is to put it back into the AI, ask it for a one-paragraph summary, and work off of that.

Isn't this the opposite? Enabling compression will INCREASE the load on your server, as you need more CPU to compress/decompress the data.
Perhaps we should judge the performance of an LLM by how well it can compress arbitrary information. A higher IQ would mean more compression, after all.
Which LLMs perform better or worse will be determined entirely by the scoring formula used and how it penalizes errors. It is not in the nature of an LLM to be capable of lossless compression.

Prediction is formally equivalent to compression, so loss is just a measure of how well you can compress the training dataset.
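A minimal sketch of that equivalence, with a toy adaptive bigram model standing in for the LLM: an entropy coder can store each symbol in about -log2 p(symbol | context) bits, so summing those code lengths gives the compressed size a predictor of that quality achieves. (This computes the Shannon code length directly rather than running a real arithmetic coder.)

```python
import math
from collections import defaultdict

def compressed_bits(text: str) -> float:
    """Ideal compressed size of `text` under an adaptive bigram model.

    An arithmetic coder gets within a couple of bits of
    sum(-log2 p(ch | prev)); a better predictor (an LLM, say)
    would drive this total lower.
    """
    counts = defaultdict(lambda: defaultdict(int))
    total = 0.0
    prev = ""
    for ch in text:
        seen = counts[prev]
        n = sum(seen.values())
        p = (seen[ch] + 1) / (n + 256)  # Laplace smoothing over a byte alphabet
        total += -math.log2(p)
        seen[ch] += 1  # update after coding, so a decoder can mirror the model
        prev = ch
    return total

text = "the cat sat on the mat " * 40
print(f"raw:        {len(text) * 8} bits")
print(f"compressed: {compressed_bits(text):.0f} bits")
```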
LLM Inflation is an interesting choice of terminology. As with many things in contemporary society, there is a temptation to assign absolute, quantitative value to everyday concepts, but realistically we should know this to be a fallacy. Many concepts actually have no "on paper" value but still manifest significant social value. Marketing is the typical example, and yet we don't refer to advertising as inflation (though maybe we should).
This concept probably applies to lots of work in the "AI" space right now. The idea of using huge amounts of compute to generate lifelike voices for LLMs comes to mind as being recently maligned (something many users may not want). Or people upset about getting AI summaries in search that they didn't ask for. And yet swaths of capital have been invested in these ideas, and perhaps it's a worthwhile use of resources. I am not sure, personally. Time will tell. But I suspect it's more complicated than the author is implying here.
I think the usage of LLMs will push for a societal change in how we communicate.
Instead of elongated sentences, we might start seeing an increase in communicating through just the minimum points needed to construct whatever meaning we hope to convey, leaving the presentation work to the LLM on the receiving side.
> Bob needs a new computer for his job. In order to obtain a new work computer he has to create a 4 paragraph business case explaining why the new computer will improve his productivity.
Is this situation in any way a realistic one? Because the way companies work in my neck of the woods, no one wants your 4-paragraph business-case essay about a computer. Like, it is a funny anecdote.
But in the real world, at least in my experience, pretty much everyone preferred short emails and messages. They would skim the long ones at best, especially in situations that can be boiled down to "Tom wants a new computer and is verbose about it".
You give the concise version to the person who is going to authorise your request. The four-paragraph version goes on record for the people that person needs to justify the decision to; they'll likely declare “I don't see a problem here” without actually reading it, which is the intention: they might be more inclined to question the shorter version.
No, it's much worse than that. In real life you're talking pages and pages of documents, and PowerPoints, and meeting after meeting, if you happen to need a computer/server/configuration that's not on the pre-approved list. (I really wish I was exaggerating. And of course no, not all employers are like this, to state the obligatory obvious.)
I saw an interesting argument recently that the reason you get this type of verbose language in corporate settings is that English lacks a formal register. Apparently it's much less common in languages that have one. But in corporate English, verbosity is used as a signal that you took time to produce the text out of respect for the person you're communicating with.
This of course now gets weird with LLMs because I doubt it can last as a signal of respect for very long when it just means you fed some bullet points to ChatGPT.
>Creating the necessary prose is torturous for most of us, so Bob fires up the LLM du jour, types in “Please create a 4 paragraph long business case for my manager, explaining why I need to replace my old, slow computer” and copies the result into his email.
>Bob’s manager receives 4 paragraphs of dense prose and realises from the first line that he’s going to have to read the whole thing carefully to work out what he’s being asked for and why. Instead, he copies the email into the LLM du jour and types at the start “Please summarise this email for me in one sentence”. The 4 paragraphs are summarised as “The sender needs a new computer as his current one is old and slow and makes him unproductive.”

Sam Altman actually had a concise tweet about this blog's topic (https://x.com/sama/status/1631394688384270336)
>something very strange about people writing bullet points, having ChatGPT expand it to a polite email, sending it, and the sender using ChatGPT to condense it into the key bullet points
The 4-paragraph requirement was not introduced 'because LLM'. It was there all along for what should have just been 'gimme 2-3 bullet points'. They wanted Bob to hold back on requesting the new machine he needed, not by denying his request openly, but by making the process convoluted. Now that Bob can cut through the BS, they want to blame the LLM for wasting their time and resources? BS!
> At best we’re implicitly rewarding obfuscation and time wasting; at worst we’re allowing a lack of clear thinking to be covered up.
Most people don't think very clearly. That's why rhetoric is effective. That's why most communication is fluffy social signaling. You can give people great advice and their eyes glaze over because the words didn't fill them with emotion, or something, and they do the exact opposite.
No wonder LLMs get put to work playing that stupid game.