This is a good statement of what I suspect many of us have found when rejecting the rewriting advice of AIs. The "pointiness" of prose gets worn away, until it doesn't say much. Everything is softened. The distinctiveness of the human voice is converted into blandness. The AI even says its preferred rephrasing is "polished" - a term which specifically means the jaggedness has been removed.
But it's the jagged edges, the unorthodox and surprising prickly bits, that tear open a hole in the inattention of your reader, that actually get your ideas into their heads.
I think that mostly depends on how good a writer you are. A lot of people aren't, and the AI legitimately writes better. As in, the prose is easier to understand, free of obvious errors or ambiguities.
But then, the writing is also never great. I've tried a couple of times to get it to write in the style of a famous author, sometimes pasting in some example text to model the output on, but it never sounds right.
I think it’s essential to realize that AI is a tool for mainstream tasks like composing a standard email and not for the edges.
The edges are where interesting stuff happens. The boring part can be made more efficient. I don’t need to type boring emails, and people who can’t articulate well will be elevated.
It’s the efficient popularization of the boring stuff. Not much else.
> But it's the jagged edges, the unorthodox and surprising prickly bits, that tear open a hole in the inattention of your reader, that actually gets your ideas into their heads.
This brings to mind what I think is a great description of the process LLMs exert on prose: sanding.
It's an algorithmic trend towards the median, thus they are sanding down your words until they're a smooth average of their approximate neighbors.
No, but it's bad writing. It repeats information, it adds superfluous stuff, and it doesn't produce more specific ways of saying things. You're making it sound like it's "too perfect" when it's bland, because it's artificial dumbness, not artificial intelligence.
Well said. In music, it's very similar. The jarring, often out of key tones are the ones that are the most memorable, the signatures that give a musical piece its uniqueness and sometimes even its emotional points. I don't think it's possible for AI to ever figure this out, because there's something about being human that is necessary to experiencing or even describing it. You cannot "algorithmize" the unspoken.
I see it on recent blog posts, on news articles, obituaries, YT channels. Sometimes mixed with voice impersonation of famous physicists like Feynman or Susskind.
I find it genuinely soul-crushing and even depressing, but I may be oversensitive to it, as most readers don't seem to notice.
I find it extremely difficult to focus on any piece of writing the moment I see the patterns. Can’t tell if it’s an attitude problem I need to get over or if it’s just that all AI writing really is that bad.
same. it is showing how many people are not trying to participate - just appear to. I want to read from and write for my peers, but it seems we are just awash with fakers
It's almost disgusting to me, tbh. For the first time I find it actually easy to unplug and go do offline things; whatever I want to explore online is hidden behind a forest of synth slop I can't even bother looking at anymore.
I personally think “generative AI” is a misnomer. The more I understand the mathematics behind machine learning, the more I am convinced that it should not be used to generate text, images or anything that is meant for people to consume, even the blandest of emails. Sometimes you might get lucky, but most of the time you only get what the most boring person at the most boring cocktail party would say if forced to be creative with a gun pointed at his head. It can help in a multitude of other ways, helping the human in the creative process itself, but generating anything even mildly creative by itself… I’ll pass.
Precisely. If companies would just focus on what it could be good at - deductive search, coding boilerplate with assistance, etc. - then it would be a great tool. Instead you have Dario, Altman, and co. trying to pump stock and give us more spaghetti agents.
He ended a critical commentary by suggesting that the author he was responding to should think more critically about the topic rather than repeating falsehoods because "they set off the tuning fork in the loins of your own dogmatism."
> "they set off the tuning fork in the loins of your own dogmatism."
Eh... I don't know. To me, that sounds very AI-ish.
Claude is very good -- at times -- coming up with flowery metaphoric language... if you tell it to. That one is so over-the-top that I'd edit it out.
Put something like this in your prompt and have it revise something:
"Make this read like Jim Thompson crossed with Thomas Harris, filtered through a paperback rack at a truck stop circa 1967. Make it gritty, efficient, and darkly comedic. Don't shy away from suggesting more elegant words or syntax. (For instance, Robert Howard -- Conan -- and H.P. Lovecraft were definitely pulp, but they had a sophisticated vocabulary.) I really want some purple prose and overwrought metaphors."
Occasionally you'll get some gems. Claude is much better than ChatGPT at this kinda stuff. The BEST ones are the ever-growing NSFW models populating huggingface.
In short, do the posts on OpenClawForum all sound alike? Of course.
Just like all the webpages circa 2000 looked alike. The uniformity wasn't because of HTML... rather it was because few people were using HTML to its full potential.
Yes, I noticed this as well. I was recently writing a landing page for our new studio. Emotion-filled. Telling a story. I sent it through Grok to improve it. It removed all of the character despite whatever prompt I gave. I'm not a great writer, but I think those rough edges are necessary to convey the soul of the concept. I think AI writing is better used for ideation and "what have I missed?", and then you write out the changes yourself.
I've found LLMs to be terrible with ideation. I've been using GPT 5.x to come up with ideas and plot lines for a Dungeon World campaign I've been running.
I'm no fantasy author, and my prose leaves much to be desired. The stuff the LLM comes up with is so mind numbingly bland. I've given up on having it write descriptions of any characters or locations. I just use it for very general ideas and plot lines, and then come up with the rest of the details on the fly myself. The plot lines and ideas it comes up with are very generic and bland. I mainly do it just to save time, but I throw away 50% of the "ideas" because they make no sense or are really lame.
What I have found LLMs to be helpful with is writing up fun post-session recaps I share with the adventurers.
I recap in my own words what happened during the session, then have the LLM structure it into a "fun to read" narrative style. ChatGPT seems to prefer a Sanderson jokey tone, but I could probably tailor this.
Then I go through it, and tweak some of the boring / bland bits. The end result is really fun to read, and took 1/20th the time it would have taken me to write it all out myself. The LLM would have never been able to come up with the unique and fun story lines, but it is good at giving an existing story some narrative flair in a short amount of time.
YES, this hits the nail on something I've been trying to express for some time now. Semantic ablation: love it, going to use that a lot from now on when arguing why someone's ChatGPT-washed email sucks.
Semantic ablation is also why I'm doubtful of everyone proclaiming that Opus 4 would be AGI if we just gave it the right agent harness and let all the agents run free on the web. In reality they would distill it to a meaningless homogeneous stew.
> We are witnessing a civilizational "race to the middle," where the complexity of human thought is sacrificed on the altar of algorithmic smoothness.
This has long been the case in the area of "business English", which has become highly simplified to fulfill several concurrent, yet conflicting requirements:
- Generally understandable to a wide audience due to its lingua franca status
- "Media-trained" to not let internal details slip or admit fault to the public
- "Executive Summary"-fied to provide the coveted "30k ft view" to detail-allergic senior leadership
Considering how heavily language models' training data is weighted towards corporate press releases, general-audience news media and SEO-optimized blogspam, AI English is quickly going to become an even more blurry photocopy of business English.
This isn't new to AI. The same kind of thing happens in movie test screenings, or with autotune. If something is intended for a large audience, there's always an incentive to remove the weird stuff.
> What began as a jagged, precise Romanesque structure of stone is eroded into a polished, Baroque plastic shell
Not to detract from the overall message, but I think the author doesn't really understand Romanesque and Baroque.
(as an aside, I'd most likely associate Post-Modernism as an architectural style with the output of LLMs - bland, regurgitative, and somewhat incongruous)
For example the anthropic Frontend Design skill instructs:
"Typography: Choose fonts that are beautiful, unique, and interesting. Avoid generic fonts like Arial and Inter; opt instead for distinctive choices that elevate the frontend's aesthetics; unexpected, characterful font choices. Pair a distinctive display font with a refined body font."
Or
"NEVER use generic AI-generated aesthetics like overused font families (Inter, Roboto, Arial, system fonts), cliched color schemes (particularly purple gradients on white backgrounds), predictable layouts and component patterns, and cookie-cutter design that lacks context-specific character." 1
Maybe something similar would be possible for writing nuances.
1 https://github.com/anthropics/skills/blob/main/skills/fronte...
> "NEVER use generic AI-generated aesthetics like overused font families (Inter, Roboto, Arial, system fonts), cliched color schemes (particularly purple gradients on white backgrounds), ...
Now, imagine what happens when this prompt becomes popular?
Keep in mind that LLMs are trying to predict the most likely token. If your prompt prohibits the most likely token, they output the next most likely token. So, attempts to force creativity by prohibiting cliches just create another cliche.
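A toy sketch of that mechanism, with invented logits (a real model ranks tens of thousands of tokens, but the arithmetic is the same):

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

# Invented next-token scores; "the" plays the role of the cliche.
logits = {"the": 5.0, "a": 4.7, "this": 3.1, "loins": -2.0}

probs = softmax(logits)
top = max(probs, key=probs.get)  # the most likely token

# A "never write X" instruction effectively masks the top choice...
masked = {t: p for t, p in probs.items() if t != top}
runner_up = max(masked, key=masked.get)  # ...which promotes the runner-up

print(top, "->", runner_up)  # the -> a
```

The banned cliche is simply replaced by the next-nearest cliche; nothing about the shape of the distribution changes.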
Several days ago, someone researched Moltbook and pointed out how similar all the posts are. Something like 10% of them say "my human", etc.
This article on AI writing being boring seems to be written by AI. The em dashes and the sentence structure all seem to be AI output. Or have humans started adopting this style too?
If the “amount of semantic ablation” in a generated phrase/sentence/paragraph can be measured and compared, then a looped process (an agent) could be built that tries to decrease it.
It might come up with something original - I mean there has to be tons of interesting connections in the training data that no one’s seen before.
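A crude sketch of what that loop's scoring step might look like. The word frequencies below are invented purely for illustration (a real agent would need an actual corpus model), reusing the "shun"/"avoid" and "divulge"/"disclose" substitutions mentioned elsewhere in this thread:

```python
import math

# Invented corpus frequencies, purely for illustration.
TOY_FREQ = {"shun": 1e-6, "avoid": 1e-3, "divulge": 2e-6, "disclose": 5e-4}

def surprisal(word, default_freq=1e-4):
    """Bits of information carried by a word choice under the toy corpus."""
    return -math.log2(TOY_FREQ.get(word, default_freq))

def mean_surprisal(words):
    return sum(surprisal(w) for w in words) / len(words)

draft = ["shun", "divulge"]      # rare, pointed word choices
rewrite = ["avoid", "disclose"]  # their common synonyms

# The agent's gate: reject any revision that ablates surprisal.
accept = mean_surprisal(rewrite) >= mean_surprisal(draft)
print(accept)  # False -- this "polish" costs roughly 9 bits per word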
I'd like to see some concrete examples that illustrate this - as it stands this feels like an opinion piece that doesn't attempt to back up its claims.
(Not necessarily disagreeing with those claims, but I'd like to see a more robust exploration of them.)
Have you not seen it any time you put any substantial bit of your own writing through an LLM, for advice?
I disagree pretty strongly with most of what an LLM suggests by way of rewriting. They're absolutely appalling writers. If you're looking for something beyond corporate safespeak or stylistic pastiche, they drain the blood out of everything.
The skin of their prose lacks the luminous translucency, the subsurface scattering, that separates the dead from the living.
Look through my comment history at all the posts where I complain that the author might have had something interesting to say, but it's been erased by the LLM and you can no longer tell what the author cared about, because the entire post is an oversold monotone advertising voice.
I just sent TFA to a colleague of mine who was experimenting with LLMs for auto-correcting human-written text, since she noticed the same phenomenon where it would correct not only mistakes, but slightly nudge words towards more common synonyms. It would often lose important nuances, so "shun" would be corrected to "avoid", and "divulge" would become "disclose", etc.
It is an opinion piece. By a dude working as a "Professor of Pharmaceutical Technology and Biomaterials at the University of Ferrara".
It has all the hallmarks of not understanding the underlying mechanisms while repeating the common tropes. Quite ironic, considering what the author's intended "message" is. Jpeg -> jpeg -> jpeg bad. So llm -> llm -> llm must be bad, right?
It reminds me of the media reception of that paper on model collapse. "Training on llm generated data leads to collapse". That was in 23 or 24? Yet we're not seeing any collapse, despite models being trained mainly on synthetic data for the past 2 years. That's not how any of it works. Yet everyone has an opinion on how bad it works. Jesus.
It's insane how these kinds of opinion pieces get so upvoted here, while worth-while research, cool positive examples and so on linger in new with one or two upvotes. This has ceased to be a technical subject, and has moved to muh identity.
A lot of times, this entropy decay is found in semantic or stylistic space, which would be hard to detect (you couldn't use, e.g., Shannon Entropy). You'd have to ask questions like "is this point uninteresting?" or "is this trope overused?"--bad (human) writers are often guilty of this too, so that's why AI can be hard to detect.
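A quick demonstration of the Shannon-entropy point: a sentence and a scrambled copy of it have identical word-frequency entropy, so the measure is completely blind to style and meaning.

```python
from collections import Counter
import math
import random

def word_entropy(text):
    """Shannon entropy (bits) of the word-frequency distribution.
    Summing over sorted counts keeps the result order-independent."""
    words = text.lower().split()
    counts = Counter(words)
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in sorted(counts.values()))

original = "the tuning fork in the loins of your own dogmatism"
words = original.split()
random.seed(0)
random.shuffle(words)
scrambled = " ".join(words)

# Same bag of words, same entropy -- wildly different prose.
print(word_entropy(original) == word_entropy(scrambled))  # True
```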
Isn't this more to do with how LLMs are trained for general purpose use? Are LLMs with a specific use and dataset in mind better? Like if the dataset was fiction novels, would it sound more booky? If it was social-media, would it sound more click-baity and engaging?
I've had AI be boring, but I've also seen things like original jokes that were legitimately funny. Maybe it's the prompts people use, it doesn't give it enough of a semantic and dialectic direction to not be generic. IRL, we look at a person and get a feel for them and the situation to determine those things.
I wonder why AI labs have not worked on improving the quality of the text outputs. Is this, as the author claims, a property of the LLMs themselves? Or is there simply not much incentive to create the best writing LLM?
I remember an article a few weeks back[1] which mentioned the current focus is improving the technical abilities of LLMs. I can imagine many (if not most) of their current subscribers are paying for the technical ability as opposed to creative writing.
This also reminded me that on OpenRouter, you can sort models by category. The ones tagged "Roleplay" and "Marketing" are probably going to have better writing compared to models like Opus 4 or ChatGPT 5.2.
That's like asking why McDonald's doesn't improve the quality of their hamburger. They can, but only within the bounds of mass produced cheap crap that maximizes profit. Otherwise they'd be a fundamentally different kind of company.
I mean, there are tons of better-writing tools that use AI, like Grammarly etc. For actual general-purpose LLMs, I don't think there's much incentive in making them write "better" in the artistic sense of the word... if the idea is to make the model good at tasks in general and communicate via language, that language should sound generic and boring. If it's too artistic or poetic or novel-like, the communication would appear a bit unhinged.
"Update the dependencies in this repo"
"Of course, I will. It will be an honor, and may I say, a beautiful privilege for me to do so. Oh how I wonder if..." vs. "Okay, I'll be updating dependencies..."
As a writer who has been published many times and edited many other writers for publication... It seems like AI can't make stylistic determinations. It is generally good with spelling and grammar, but the text it generates is very homogeneous across formats. It's readable but it's not good, and always full of fluff like an online recipe harvesting clicks. It's kind of crap really. If you just need filler it's ok, but if you want something pleasant you definitely still need a human.
All these forced metaphors and clumsy linguistic flourishes made me cringe. Just add some typos and grammar mistakes like the rest of us to prove that your human.
Great article and exactly why I use AI less and less. I basically find it to be rotting my brain towards the middle of the distribution. It's like all the nuance and critical thinking that actually goes into things gets stripped out.
Once a company perfects an agent that essentially performs condensed search and coding boilerplate making, that is probably where LLMs end for me. Perplexity and Claude are on the right track but not at all close.
> The AI identifies unconventional metaphors or visceral imagery as "noise" because they deviate from the training set's mean.
That's certainly a take. In the translation industry (the primogenitor and driver for much of the architecture and theory of LLMs) they're known for making extremely unconventional choices to such a degree that it actively degrades the quality of translation.
Isn't image generation basically doing "anti semantic ablation", starting with a blank canvas and iteratively refining it into a meaningful collection of pixels?
Is it possible to do the same thing with word generation, such that it sharpens into an opinionated version (even if it would do something different each time?)
Because you simply can't engineer creativity. Maybe you can describe where it comes from, in a circuitous, abstract way with mathematics (and ultimately run face first into ħ and then run in circles for eternity). But to engineer it, you'd have to start over from the first principles of the stuff of the cosmos. One's a map and the other the territory.
The article itself reads as AI-generated output, complete with the classic "Not Just X … Y" hallmarks from forever ago, and scores 100% on Pangram's low-false-positive detector. I'm not sure if it's some experiment on their readerbase or what.
pangram result: https://www.pangram.com/history/02bead1c-c36e-461b-8fa7-8699...
So many AI generated AI bashing articles lately. I wrote a post complaining about running into these, and asking people who've sent me these AI articles multiple of them came from HN. https://lunnova.dev/articles/ai-bashing-ai-slop/
I'm not the only one who noticed that. I do creative writing with AI, so I spend hours reading AI-generated text, and it feels AI-generated, or at least heavily AI-assisted, to me too. Ironic. Thanks for the link, glad there's something here other than vibes.
> The model performs a statistical substitution, replacing a 1-of-10,000 token with a 1-of-100 synonym
Do we see this in programming too? I don't think so? Unique, rarely used API methods aren't substituted the same way when refactoring. Perhaps that could give us a clue on how to fix that?
I kind of think of that as just increasing the standard deviation. It's been a while since I experimented with this, but I remember trying a temp of 1 and the output was gibberish, like base64 gibberish. So something like 0.5 doesn't necessarily seem to solve this problem; it just flattens the distribution and makes the output less coherent, with rarer tokens, but still the same underlying distribution.
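That matches how temperature actually works: it divides the logits before the softmax, which flattens or sharpens the distribution but never reorders it (toy numbers below, invented for illustration):

```python
import math

def dist(logits, temp):
    """Softmax over temperature-scaled logits."""
    scaled = [v / temp for v in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

logits = [4.0, 2.0, 0.0]  # invented scores: common, less common, rare token

for temp in (0.5, 1.0, 2.0):
    print(temp, [round(p, 3) for p in dist(logits, temp)])

# Higher temperature gives rare tokens more mass, but the ranking never
# flips -- you trade coherence for noise, not blandness for voice.
```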
you have to know that your "simply" is carrying too much weight. here are some examples of why temperature alone is not enough, you need to run active world models: https://www.latent.space/p/adversarial-reasoning
I think they can fix all that but they can't fix the fact that the computer has no intention to communicate. They could imbue it with agency to fix that too, but I much prefer it the way things are.
Those transformations happen to mirror what happens to human intelligence when you take antipsychotics. Please know the risks before taking them. They are innumerable and generally irreversible.
Yea. It is pretty sad. Even if we think about the code we’re writing. It’s so… average. No clever solutions, no funny mistakes, no character. Just average.
In Essays in the Art of Writing (1), Robert Louis Stevenson says:
"And perhaps there is no subject on which a man should speak so gravely as that industry, whatever it may be, which is the occupation or delight of his life; which is his tool to earn or serve with; and which, if it be unworthy, stamps himself as a mere incubus of dumb and greedy bowels on the shoulders of labouring humanity. On that subject alone even to force the note might lean to virtue’s side. It is to be hoped that a numerous and enterprising generation of writers will follow and surpass the present one; but it would be better if the stream were stayed, and the roll of our old, honest English books were closed, than that esurient book-makers should continue and debase a brave tradition, and lower, in their own eyes, a famous race. Better that our serene temples were deserted than filled with trafficking and juggling priests."
And in the first essay, speaking on matters of style:
"The conjurer juggles with two oranges, and our pleasure in beholding him springs from this, that neither is for an instant overlooked or sacrificed. So with the writer. His pattern, which is to please the supersensual ear, is yet addressed, throughout and first of all, to the demands of logic. Whatever be the obscurities, whatever the intricacies of the argument, the neatness of the fabric must not suffer, or the artist has been proved unequal to his design. And, on the other hand, no form of words must be selected, no knot must be tied among the phrases, unless knot and word be precisely what is wanted to forward and illuminate the argument; for to fail in this is to swindle in the game. The genius of prose rejects the cheville no less emphatically than the laws of verse; and the cheville, I should perhaps explain to some of my readers, is any meaningless or very watered phrase employed to strike a balance in the sound. Pattern and argument live in each other; and it is by the brevity, clearness, charm, or emphasis of the second, that we judge the strength and fitness of the first."
AI doesn't "write" in the sense used above. It has no ear, no wit, no soul. "A reflection of a mind is not a mind", as Phillip Ball writes in "AI Is the Black Mirror" (2).
As someone long involved in software development, can we call this "best practices" instead of something like "semantic ablation" that nobody understands?
Going off search results, it seems to be a new coinage. I found mostly references to TFA, along with an (ironically, obviously AI-written) guide with suggestions for getting LLMs to avoid the issue (just generic "traditional" advice for tuning their output, really). The guide was apparently published today, and I imagine that it's a deliberate response to TFA. But FWIW, the term "semantic ablation" does seem to me like something that newer models could invent.
At any rate, it seems to me like a reasonable label for what's described:
> Semantic ablation is the algorithmic erosion of high-entropy information. Technically, it is not a "bug" but a structural byproduct of greedy decoding and RLHF (reinforcement learning from human feedback).
> ...
> When an author uses AI for "polishing" a draft, they are not seeing improvement; they are witnessing semantic ablation.
The metaphor is very apt. Literal polishing is removal of outer layers. Compared to the near-synonym "erosion", "ablation" connotes a deliberate act (ordinarily I would say "conscious", but we are talking about LLMs here). Often, that which is removed is the nuance of near-synonyms (there is no pause to consider whether the author intended that nuance). I don't know if the "character" imparted by broader grammatical or structural choices can be called "semantic", but that also seems like a big part of what goes missing in the "LLM house style".
Bluntly: getting AI to "improve" writing, as a fully generic instruction, is naturally going to pull that writing towards how the AI writes by default. Because of course the AI's model of "writing quality" considers that style to be "the best"; that's why it uses it. (Even "consider" feels like anthropomorphizing too much; I feel like I'm hitting the limits of English expressiveness here.)
Nonsense. I’ve written bland prose for a story and AI made it much better by revising it with a prompt such as this: “Make the vocabulary and grammar more sophisticated and add in interesting metaphors. Rewrite it in the style of a successful literary author.”
Meh. Semantic Ablation - but toward a directed goal. If I say "How would Hemingway have said this, provided he had the same mindset he did post-war while writing for Collier's?"
Then the model will look for clusters that don't fit what the model considers to be Hemingway/Colliers/Post-War and suggest in that fashion.
"edit this" -> blah
"imagine Tom Wolfe took a bunch of cocaine and was getting paid by the word to publish this after his first night with Aline Bernstein" -> probably less blah
These kinds of prompts don’t really improve the writing IME. It still gets riddled with the same tropes and phrases, or it veers off into textual vomit.
Another common LLMism is elegant variation, i.e., avoiding repetition of words by using synonyms. (I assume they're RLed to do this religiously.) Of course, human writers do this too, but not nearly to the same extent in my experience. There's nothing wrong with repeating a word, especially in a formal text.
the word choice here is so obtuse as to trigger my radar for "is this some kind of parody where this itself was AI generated". it appears to be entirely serious, which is disappointing, it could have been high art.
I wonder if you can use lower quality models (or some other non-llm related process) to inject more "noise" into the text in between stages. Of course it wouldn't help retain uniqueness from the original source text, just add more in between.
I’m not convinced removing RLHF would really make the probabilities generator give us distributions that can diverge from the mean while remaining useful.
In other words, this might not be a problem that can be overcome in LLMs alone.
The vast majority of people who write don't have a voice worth preserving. The rest can build out a voice document to make sure the AI doesn't strip it out.
folbec|13 days ago
He lacks (or lost thru disuse) technical expertise on the subject, so he uses more and more fuzzy words, leaky analogies, buzzwords.
This may be why AI-generated content has so much success among leaders and politicians.
matusp|13 days ago
Maybe I'm going crazy but I can smell it in the OP as well.
doomslayer999|12 days ago
And the worst part is no one will ever make a new internet, because of the founder effect. We are basically in the worst timeline.
ses1984|13 days ago
I would rather read the prompt than the generative output, even if it’s just disjointed words and sentence fragments.
Terretta|13 days ago
don't be mean, it's median AI à la mode
tasty_freeze|13 days ago
https://youtu.be/605MhQdS7NE?si=IKMNuSU1c1uaVCDB&t=730
> "they set off the tuning fork in the loins of your own dogmatism."
Yeah, AI could not come up with that phrase.
raincole|12 days ago
Sounds like word salad. Of course if you write like GPT-2 it would not sound like current models.
lurquer|12 days ago
Eh... I don't know. To me, that sounds very AI-ish.
Claude is very good -- at times -- coming up with flowery metaphoric language... if you tell it to. That one is so over-the-top that I'd edit it out.
Put something like this in your prompt and have it revise something:
"Make this read like Jim Thompson crossed with Thomas Harris, filtered through a paperback rack at a truck stop circa 1967. Make it gritty, efficient, and darkly comedic. Don't shy away from suggesting more elegant words or syntax. (For instance, Robert Howard -- Conan -- and H.P. Lovecraft were definitely pulp, but they had a sophisticated vocabulary.) I really want some purple prose and overwrought metaphors."
Occasionally you'll get some gems. Claude is much better than ChatGPT at this kinda stuff. The BEST ones are the ever-growing NSFW models populating huggingface.
In short, do the posts on OpenClawForum all sound alike? Of course.
Just like all the webpages circa 2000 looked alike. The uniformity wasn't because of HTML... rather it was because few people were using HTML to its full potential.
IncreasePosts|13 days ago
co_king_5|13 days ago
[deleted]
rorylaitila|13 days ago
gnutrino|13 days ago
I'm no fantasy author, and my prose leaves much to be desired. The stuff the LLM comes up with is so mind numbingly bland. I've given up on having it write descriptions of any characters or locations. I just use it for very general ideas and plot lines, and then come up with the rest of the details on the fly myself. The plot lines and ideas it comes up with are very generic and bland. I mainly do it just to save time, but I throw away 50% of the "ideas" because they make no sense or are really lame.
What i have found LLMs to be helpful with is writing up fun post-session recaps I share with the adventurers.
I recap in my own words what happened during the session, then have the LLM structure it into a "fun to read" narrative style. ChatGPT seems to prefer a Sanderson jokey tone, but I could probably tailor this.
Then I go through it, and tweak some of the boring / bland bits. The end result is really fun to read, and took 1/20th the time it would have taken me to write it all out myself. The LLM would have never been able to come up with the unique and fun story lines, but it is good at making an existing story have some narrative flare in a short amount of time.
co_king_5|13 days ago
[deleted]
causal|13 days ago
Semantic ablation is also why I'm doubtful of everyone proclaiming that Opus 4 would be AGI if we just gave it the right agent harness and let all the agents run free on the web. In reality they would distill it to a meaningless homogeneous stew.
co_king_5|13 days ago
[deleted]
rchaud|12 days ago
This has long been the case in the area of "business English", which has become highly simplified to fulfill several concurrent, yet conflicting requirements:
- Generally understandable to a wide audience due to its lingua franca status
- "Media-trained" to not let internal details slip or admit fault to the public
- "Executive Summary"-fied to provide the coveted "30k ft view" to detail-allergic senior leadership
Considering how heavily weighted language training models are towards corporate press releases, general-audience news media and SEO-optimized blogspam, AI English is quickly going to become an even more blurry photocopy of business English.
Espressosaurus|13 days ago
It wanted to replace all the little bits of me that were in there.
crabmusket|12 days ago
https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
While the page's purpose is to help editors detect AI contributions, you can also detect yourself doing these same things sometimes, and fix them.
ranprieur|13 days ago
somewhereoutth|13 days ago
Not to detract from the overall message, but I think the author doesn't really understand Romanesque and Baroque.
(as an aside, I'd most likely associate Post-Modernism as an architectural style with the output of LLMs - bland, regurgitative, and somewhat incongruous)
morgengold|13 days ago
For example the anthropic Frontend Design skill instructs:
"Typography: Choose fonts that are beautiful, unique, and interesting. Avoid generic fonts like Arial and Inter; opt instead for distinctive choices that elevate the frontend's aesthetics; unexpected, characterful font choices. Pair a distinctive display font with a refined body font."
Or
"NEVER use generic AI-generated aesthetics like overused font families (Inter, Roboto, Arial, system fonts), cliched color schemes (particularly purple gradients on white backgrounds), predictable layouts and component patterns, and cookie-cutter design that lacks context-specific character." 1
Maybe something similar would be possible for writing nuances.
1 https://github.com/anthropics/skills/blob/main/skills/fronte...
lich_king|13 days ago
Now, imagine what happens when this prompt becomes popular.
Keep in mind that LLMs are trying to predict the most likely token. If your prompt prohibits the most likely token, they output the next most likely token. So, attempts to force creativity by prohibiting cliches just create another cliche.
Several days ago, someone researched Moltbook and pointed out how similar all the posts are. Something like 10% of them say "my human", etc.
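The mechanism is easy to see with a toy sketch. The distribution below is entirely made up (real models rank tens of thousands of tokens), but it shows why banning the most likely token just promotes the runner-up, which then becomes the new cliche:

```python
# Toy next-token distribution (invented for illustration only).
next_token_probs = {
    "delve": 0.30,      # the cliche the prompt bans
    "explore": 0.25,    # the next cliche in line
    "unpack": 0.20,
    "examine": 0.15,
    "dissect": 0.10,
}

def greedy_pick(probs, banned=()):
    """Greedy decoding: take the highest-probability token not in the ban list."""
    allowed = {t: p for t, p in probs.items() if t not in banned}
    return max(allowed, key=allowed.get)

print(greedy_pick(next_token_probs))                    # -> delve
print(greedy_pick(next_token_probs, banned={"delve"}))  # -> explore
```

Once everyone's prompt bans "delve", the mass just shifts one rank down; the output is still the mode of the distribution, just a different mode.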
causal|12 days ago
conartist6|13 days ago
tayo42|13 days ago
poszlem|13 days ago
co_king_5|13 days ago
[deleted]
lakhotiaharshit|12 days ago
haffi112|12 days ago
meowface|11 days ago
cadamsdotcom|12 days ago
It might come up with something original - I mean there has to be tons of interesting connections in the training data that no one’s seen before.
But maybe it’d just end up shouting at you.
simonw|13 days ago
(Not necessarily disagreeing with those claims, but I'd like to see a more robust exploration of them.)
barrkel|13 days ago
I disagree pretty strongly with most of what an LLM suggests by way of rewriting. They're absolutely appalling writers. If you're looking for something beyond corporate safespeak or stylistic pastiche, they drain the blood out of everything.
The skin of their prose lacks the luminous translucency, the subsurface scattering, that separates the dead from the living.
furyofantares|13 days ago
https://news.ycombinator.com/item?id=46583410#46584336
https://news.ycombinator.com/item?id=46605716#46609480
https://news.ycombinator.com/item?id=46617456#46619136
https://news.ycombinator.com/item?id=46658345#46662218
https://news.ycombinator.com/item?id=46630869#46663276
https://news.ycombinator.com/item?id=46656759#46663322
https://news.ycombinator.com/item?id=46661936#46663362
https://news.ycombinator.com/item?id=46748077#46749699
internet_points|12 days ago
gdulli|13 days ago
Cpl. Barnes: Well, Lt. Kaffee, that's not in the book, sir.
Kaffee: You mean to say in all your time at Gitmo, you've never had a meal?
Cpl. Barnes: No, sir. Three squares a day, sir.
Kaffee: I don't understand. How did you know where the mess hall was if it's not in this book?
Cpl. Barnes: Well, I guess I just followed the crowd at chow time, sir.
Kaffee: No more questions.
NitpickLawyer|13 days ago
It has all the tropes of not understanding the underlying mechanisms, but repeating the common tropes. Quite ironic, considering what the author's intended "message" is. Jpeg -> jpeg -> jpeg bad. So llm -> llm -> llm must be bad, right?
It reminds me of the media reception of that paper on model collapse. "Training on llm generated data leads to collapse". That was in 23 or 24? Yet we're not seeing any collapse, despite models being trained mainly on synthetic data for the past 2 years. That's not how any of it works. Yet everyone has an opinion on how bad it works. Jesus.
It's insane how these kinds of opinion pieces get so upvoted here, while worthwhile research, cool positive examples and so on linger in new with one or two upvotes. This has ceased to be a technical subject, and has moved to muh identity.
tpoacher|13 days ago
Is there an easy way to get / compare the entropy of two passages? (e.g. to see if it has indeed dropped after gen ai manipulation).
And could this be used to flag AI-gen text (or at least, boring, soulless-sounding text)?
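A crude back-of-the-envelope version is easy: compute the Shannon entropy of each passage's word-frequency distribution and compare. This is only a sketch (token- or n-gram-level measures under a real language model would be far more faithful, and short passages give noisy estimates), but it captures the intuition that "sanded" prose reuses a smaller, flatter vocabulary:

```python
import math
from collections import Counter

def word_entropy(text: str) -> float:
    """Shannon entropy (in bits) of the word-frequency distribution of a passage."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

before = "the jagged edges, the unorthodox and surprising prickly bits, tear open a hole"
after = "the interesting parts are the parts that readers find the most interesting"

# A lower value suggests a more repetitive, less varied word distribution.
print(word_entropy(before), word_entropy(after))
```

Whether a single scalar like this could reliably flag AI-generated text is a much bigger question; in practice, detectors use per-token log-probabilities under a reference model rather than raw frequency entropy.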
dvt|13 days ago
notepad0x90|13 days ago
I've had AI be boring, but I've also seen things like original jokes that were legitimately funny. Maybe it's the prompts people use, it doesn't give it enough of a semantic and dialectic direction to not be generic. IRL, we look at a person and get a feel for them and the situation to determine those things.
resiros|13 days ago
mjamesaustin|13 days ago
zanehelton|13 days ago
This also reminded me that on OpenRouter, you can sort models by category. The ones tagged "Roleplay" and "Marketing" are probably going to have better writing compared to models like Opus 4 or ChatGPT 5.2.
[1]: https://www.techradar.com/ai-platforms-assistants/sam-altman...
add-sub-mul-div|13 days ago
altmanaltman|13 days ago
"Update the dependencies in this repo"
"Of course, I will. It will be an honor, and may I say, a beautiful privilege for me to do so. Oh how I wonder if..." vs. "Okay, I'll be updating dependencies..."
josefritzishere|13 days ago
sieste|12 days ago
doomslayer999|12 days ago
Once a company perfects an agent that essentially performs condensed search and coding boilerplate making, that is probably where LLMs end for me. Perplexity and Claude are on the right track but not at all close.
ux266478|13 days ago
That's certainly a take. In the translation industry (the primogenitor and driver for much of the architecture and theory of LLMs) they're known for making extremely unconventional choices to such a degree that it actively degrades the quality of translation.
mizzao|12 days ago
Is it possible to do the same thing with word generation, such that it sharpens into an opinionated version (even if it would do something different each time?)
52-6F-62|13 days ago
nalllar|13 days ago
So many AI generated AI bashing articles lately. I wrote a post complaining about running into these, and asking people who've sent me these AI articles multiple of them came from HN. https://lunnova.dev/articles/ai-bashing-ai-slop/
Fitik|12 days ago
andai|13 days ago
(Obviously a different question from "is an AI lab willing to release that publicly" ;)
bananaflag|13 days ago
https://nostalgebraist.tumblr.com/post/778041178124926976/hy...
https://nostalgebraist.tumblr.com/post/792464928029163520/th...
lyu07282|13 days ago
Do we see this in programming too? I don't think so? Unique, rarely used API methods aren't substituted the same way when refactoring. Perhaps that could give us a clue on how to fix that?
prerok|12 days ago
When not given a clear guideline to "just" refactor, I have had problems with LLMs hallucinating functions that don't exist.
aleph_minus_one|13 days ago
lbrito|13 days ago
swyx|13 days ago
mannykannot|13 days ago
mwcampbell|12 days ago
esafak|13 days ago
reilly3000|13 days ago
hknceykbx|11 days ago
0x38B|12 days ago
"And perhaps there is no subject on which a man should speak so gravely as that industry, whatever it may be, which is the occupation or delight of his life; which is his tool to earn or serve with; and which, if it be unworthy, stamps himself as a mere incubus of dumb and greedy bowels on the shoulders of labouring humanity. On that subject alone even to force the note might lean to virtue’s side. It is to be hoped that a numerous and enterprising generation of writers will follow and surpass the present one; but it would be better if the stream were stayed, and the roll of our old, honest English books were closed, than that esurient book-makers should continue and debase a brave tradition, and lower, in their own eyes, a famous race. Better that our serene temples were deserted than filled with trafficking and juggling priests."
And in the first essay, speaking on matters of style:
"The conjurer juggles with two oranges, and our pleasure in beholding him springs from this, that neither is for an instant overlooked or sacrificed. So with the writer. His pattern, which is to please the supersensual ear, is yet addressed, throughout and first of all, to the demands of logic. Whatever be the obscurities, whatever the intricacies of the argument, the neatness of the fabric must not suffer, or the artist has been proved unequal to his design. And, on the other hand, no form of words must be selected, no knot must be tied among the phrases, unless knot and word be precisely what is wanted to forward and illuminate the argument; for to fail in this is to swindle in the game. The genius of prose rejects the cheville no less emphatically than the laws of verse; and the cheville, I should perhaps explain to some of my readers, is any meaningless or very watered phrase employed to strike a balance in the sound. Pattern and argument live in each other; and it is by the brevity, clearness, charm, or emphasis of the second, that we judge the strength and fitness of the first."
AI doesn't "write" in the sense used above. It has no ear, no wit, no soul. "A reflection of a mind is not a mind", as Phillip Ball writes in "AI Is the Black Mirror" (2).
1: https://www.gutenberg.org/cache/epub/492/pg492-images.html#p...
2: https://nautil.us/ai-is-the-black-mirror-1169121/
spwa4|13 days ago
co_king_5|13 days ago
[deleted]
AreShoesFeet000|13 days ago
ZoomZoomZoom|12 days ago
kaycey2022|12 days ago
The entire article sounds like AI generated opinion.
book_mike|13 days ago
zahlman|13 days ago
At any rate, it seems to me like a reasonable label for what's described:
> Semantic ablation is the algorithmic erosion of high-entropy information. Technically, it is not a "bug" but a structural byproduct of greedy decoding and RLHF (reinforcement learning from human feedback).
> ...
> When an author uses AI for "polishing" a draft, they are not seeing improvement; they are witnessing semantic ablation.
The metaphor is very apt. Literal polishing is removal of outer layers. Compared to the near-synonym "erosion", "ablation" connotes a deliberate act (ordinarily I would say "conscious", but we are talking about LLMs here). Often, that which is removed is the nuance of near-synonyms (there is no pause to consider whether the author intended that nuance). I don't know if the "character" imparted by broader grammatical or structural choices can be called "semantic", but that also seems like a big part of what goes missing in the "LLM house style".
Bluntly: getting AI to "improve" writing, as a fully generic instruction, is naturally going to pull that writing towards how the AI writes by default. Because of course the AI's model of "writing quality" considers that style to be "the best"; that's why it uses it. (Even "consider" feels like anthropomorphizing too much; I feel like I'm hitting the limits of English expressiveness here.)
lurquer|13 days ago
Etc.
matternous|13 days ago
co_king_5|13 days ago
[deleted]
vessenes|13 days ago
Then the model will look for clusters that don't fit what the model considers to be Hemingway/Colliers/Post-War and suggest in that fashion.
"edit this" -> blah
"imagine Tom Wolfe took a bunch of cocaine and was getting paid by the word to publish this after his first night with Aline Bernstein" -> probably less blah
aabhay|13 days ago
marquisdepolis|11 days ago
meowface|11 days ago
anematode|12 days ago
kirykl|13 days ago
swyx|13 days ago
the words TFA is looking for is mode collapse https://www.lesswrong.com/posts/t9svvNPNmFf5Qa3TA/mysteries-... and the author could herself learn to write more clearly.
nalllar|12 days ago
co_king_5|13 days ago
[deleted]
SignalStackDev|13 days ago
[deleted]
anon-3988|12 days ago
OT: I do agree with your assessment.
writeslowly|12 days ago
causal|12 days ago
In other words, this might not be a problem that can be overcome in LLMs alone.
adrian-vega|3 days ago
[deleted]
co_king_5|13 days ago
[deleted]
CoastalCoder|13 days ago
I'm not sure what's driving this. It reminds me of SEO.
Arifcodes|12 days ago
[deleted]
JamesBarney|12 days ago
inquirerGeneral|12 days ago
[deleted]
black_13|13 days ago
[deleted]
co_king_5|13 days ago
[deleted]
dsf2d|13 days ago
If you thought Google's degradation of search quality was strategic manipulation, wait till you see what they do with tokens.