Just don't use LLMs to generate text you want other humans to read. Think and then write. If it isn't worth your effort, it certainly isn't worth your audience's.
It comes down to this for me as well. Just as I never open auto-generated emails, I see no reason to read text that other people have had an LLM write for them.
What is nice is that sometimes you can write down what you want to say very roughly, a scenario or a few badly written sentences, and ask the LLM to reformulate it into properly written text.
But in that case, there is a good chance that the stylistic issues described in the article will still be present, despite you having carefully crafted the content.
> The elephant in the room is that we’re all using AI to write but none of us wants to feel like we’re reading AI generated content.
My initial reaction to the first half of this sentence was "Uhh, no?", but then I realized it's on Substack, so it's probably more typical for that particular type of writer (writing to post, not writing to be read). I don't even let it write documentation or other technical things anymore, because it kept getting small details wrong or subtly injecting meaning that isn't there.
The main problem for me isn't even the eye-roll-inducing phrases from the article (though they don't help); it's that LLMs tend to subtly but meaningfully alter content, so that the effect of the text is (at best slightly) misaligned with the effect I intended. It's sort of an uncanny valley for text.
Along with the problems above, manual writing also serves as a sort of "proof of work" that establishes the credibility and meaning of an article: if you didn't bother taking the time to write it, why should I spend my time reading it?
Had the same thought reading this. I haven't found a place for LLMs in my writing and I'm sure many people have the same experience.
I'm sure it's great for pumping out SEO corporate blogposts. How many articles are already out there on the "hidden costs of micromanagement", to take an example from this post, and how many people actually read them? For original writing, if you don't have enough to say, or can't be bothered to put your thoughts into coherent language, that's not something AI can truly help with, in my experience. The result will be vague, wordy, and inconsistent. No amount of patching over, the kind of "deslopification" this post proposes, will salvage something that minimal work has been put into.
Indeed. I have never used an LLM to write. And coding agents are terrible at writing documentation: the output is just bullet points with no context and unnecessary icons, impossible to understand. There's no flow to the text, no actual reasoning (only confusing comments about changes made during development that are irrelevant to the final work), and yet it's somehow too long.
The elephant in the room is that AI is allowing developers who previously half-assed their work to now quarter-ass it.
Please try and follow this advice, because there's nothing more annoying than some comic book guy wannabe moaning about AI tells while I'm trying to enjoy the discussion.
You just need to use this list as a prompt and instruct the LLM to avoid this kind of slop. If you want to be serious about it, you can even run the result through some of the available slop detectors and iterate in a loop until the top three detectors rate your text as "very likely human."
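A toy sketch of that loop, for what it's worth. `rewrite()` and `human_score()` are made-up stand-ins for the actual LLM call and slop-detector service (both names and the phrase list are illustrative, not real APIs), but they make the iterate-until-it-passes shape runnable:

```python
# Phrases we prompt the model to avoid (illustrative subset).
SLOP_PHRASES = [
    "delve into",
    "rich tapestry",
    "in today's fast-paced world",
    "game-changer",
]

def human_score(text: str) -> float:
    """Stand-in detector: fraction of slop phrases NOT found in the text.
    A real detector would return something like P(human-written)."""
    hits = sum(p in text.lower() for p in SLOP_PHRASES)
    return 1.0 - hits / len(SLOP_PHRASES)

def rewrite(text: str) -> str:
    """Stand-in for an LLM call prompted to avoid SLOP_PHRASES.
    Here it just strips them; a real call would rephrase."""
    for p in SLOP_PHRASES:
        text = text.replace(p, "")
    return text

def deslop(draft: str, threshold: float = 0.9, max_rounds: int = 5) -> str:
    """Iterate rewrite passes until the detector is satisfied."""
    text = draft
    for _ in range(max_rounds):
        if human_score(text) >= threshold:
            break
        text = rewrite(text)
    return text
```

With real detectors you would average the top few scores instead of using one stand-in, but the loop is the same.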
There’s a really cool technique Andrew Ng nicknamed reflection, where you take the AI output and feed it back in, asking the model to look at it (reflect on it) in light of some other information.
Getting the writing from your model then following up with “here’s what you wrote, here’re some samples of how I wrote, can you redo that to match?” makes its writing much less slop-y.
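That follow-up can be sketched as a single prompt builder. `call_llm` here is a placeholder for whatever model API you actually use; only the prompt structure is the point:

```python
def match_my_voice(call_llm, draft: str, samples: list[str]) -> str:
    """Reflection pass: feed the model its own draft plus samples of
    the author's writing, and ask it to redo the draft in that voice."""
    prompt = (
        "Here is a draft you wrote:\n\n" + draft
        + "\n\nHere are some samples of how I write:\n\n"
        + "\n---\n".join(samples)
        + "\n\nRedo the draft to match my voice."
    )
    return call_llm(prompt)
```

The same pattern works with any other reference material (style guides, the slop list above) in place of the writing samples.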
It will definitely help, but some people, especially in marketing/sales, were writing like that before LLMs. So you should not only write the thing yourself, but also learn some good writing style.
varjag|11 days ago
https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
eddyg|10 days ago
https://github.com/blader/humanizer
rrherr|10 days ago
Reminds me of a quote from St. Augustine's autobiography, "Confessions":
"I have known many men who wished to deceive, but none who wished to be deceived."
tahigichigi|11 days ago
What would you say are the top 2 red flags missing from the piece? Would love to know
happytoexplain|10 days ago
This makes me sick.
svilen_dobrev|10 days ago
And even if the style is (or isn't) LLM-ish, that tells you nothing about whether the (even filtered) content makes sense, is correct, or is BS.
Style does matter, sure.
https://hbr.org/1982/05/what-do-you-mean-you-dont-like-my-st...
tahigichigi|11 days ago
AI can copy 90% of your tone of voice but still use em dashes and corrective antithesis.
Ideally you'll have both /deslop and /soundlikeme (coming soon)