> Was this physically difficult to write? If it flowed out effortlessly in one go, it's usually fluff.
Probably my best and most insightful stuff has been produced more or less effortlessly, since I spent enough time/effort _beforehand_ getting to know the domain and issue I was interested in from different angles.
When I try to write fluff or to be impressive without putting in the work first, I usually bump up against all the stuff I don't have a clear picture of yet, and it becomes a never-ending slog. YMMV.
My most successful blog post was written about something I felt strongly about, backed by knowledge and a lot of prior thought. It was written with passion.
People asked for permission to repost it, it got shared on social media, and it ended up ranking higher in Google than a Time magazine (I think) interview with Bill Gates that had the same title.
> If I subconsciously detect that you spent 12 seconds creating this, why should I invest five minutes reading it?
The problem is that it isn't easy to detect, and I'm sure the people who work on generated content will work hard to make detection even harder.
I have difficulty detecting even fake videos. How can I possibly detect generated text accurately from plain text alone? I will make plenty of false-positive mistakes, accusing people of using generated text when they wrote it themselves. That will cause unnecessary friction, and I don't know how to prevent it.
First thought: in my experience, this is a muscle we build over time. Humans are pretty great at pattern detection, but we need some time to get there with new input. Remember 3D graphics in movies ~15 years ago? They looked mind-blowingly realistic. Watching those movies now, I find they look painfully fake. YMMV, of course.
Second thought: Does it _really_ matter? You find it interesting, you continue reading. You don't like it, you stop reading. That's how I do it. If I read something from a human, I expect it to be their thoughts. I don't know if I should expect it to be their hand typing. Ghost writers were a thing long before LLMs. That said, it wouldn't even _occur_ to me to generate anything I want to say. I don't even spell check. But that's me. I can understand that others do it differently.
Fair point. This is more mindset than case study; the proof is still being built across client work. Though I'd say the same was true for SEO in the early days: people speculating on what made Google rank certain sites higher, what made pages index faster, etc. The frameworks came before the proven playbooks.
This points to a general theory / concept I've been working on for the past few weeks: once software development and decent-enough copywriting can be done by the LLM, you are indeed only bottlenecked by your own knowledge / creativity. The LLM, for all the hype out there around it, is still just a latent tool and will do absolutely nothing if you don't interact with it, of course!
I see LLMs more and more like a mirror: if YOU can orchestrate high-level knowledge, have a brutally clear vision of what you want, and prompt accordingly, things will go well for you. (I suppose this all comes back to 'context engineering', just with higher specificity about what you are actually prompting.) It turns out that domain knowledge, wisdom built over time and experience, and expertise in niches, whatever they may be, will always be valuable!
I see the same point when it comes to fiction writing. I tested this a while ago (via duck.ai), asking for fiction stories in under 500 characters, and it came up with generic, repetitive output that even went over the limit.
I tried again just now with 5o mini, and although it waxed poetic, there were cracks and gaps; it still felt rather generic, and it certainly failed at twists and humour.
It can write about a spark, but the content has no spark.
I had really great results with DeepSeek V3.2, but in a carefully set-up environment (scenario, main characters, certain points for the storyline).
What it came up with was engaging, refreshing, and in some parts punched really hard.
Mostly, though, it's not as good, indeed.
The central idea, that we all have the same tools, which now represent an infrastructure baseline, and that we therefore need to look harder to establish our moats (not just in knowing things, although that's one), is sound and well put. Thanks.
I think the most valuable intellectual skill remaining is contrarian thinking which happens to be correct.
LLMs are naive and have a very mainstream view on things; this often leads them down suboptimal paths. If you can see through some of the mainstream BS on a number of topics, you can help LLMs avoid mistakes. It helps if you can think from first principles.
I love using LLMs but I wouldn't trust one to write code unsupervised for some of my prized projects. They work incredibly well with supervision though.
Important qualifier there. There's a massive oversupply of contrarian thinking; it's cheap, popular (populist), and incorrect. All you have to do is take some piece of conventional wisdom and write the opposite. You don't have to supply evidence, or if you do then a single cherry-picked piece will suffice.
I'd say something more like "Chesterton's Fence Inspection Company": there are reasons why things are the way they are, but if you dig into them, maybe you will find that the assumptions are no longer true? Or they turn out to be still true and important.
Nothing new in the article imho. But it's a nice overview of what content creators are facing, and what to look for when carving out a niche.
The #1 point really: have access to data / experiences / expert knowledge that's unique & can't be distilled from public sources and/or scraped from the internet. This has always been the case. It just holds more weight when AI agents are everywhere.
I don't disagree with the main thesis, but I do think it's relatively easy for skilled writers to outperform LLMs in terms of clarity and impact. Whether or not that advantage makes any business sense is another question.
I disagree. The moat now is being able to understand, and then communicate that understanding to others, even when they resist understanding. Crack that, and you'll save this civilization from all the immature shortsighted thinkers.
Agreed. You may know so many things, but ultimately it's useless if the other party does not care about understanding them. And I have no clue what the right way is, besides letting people and their models fail and then being there with an answer ...
> If you're worried about producing "content", the completion bots have caught up with you.
> Now if you have some information you want to share, that's another topic...
Content is not meant to imply fungibility by being nonspecific. It is supposed to represent an acknowledgement of diversity across a wide range of activities.
The term content creator represents inclusivity, not genericity.
You have used the term information as a candidate for an alternative. What if someone is sharing an experience, an artwork, or simply something they found to be beautiful? There may be an information component to some of those things but the primary reason that they were offered isn't to be informative.
You don't seek content any more than you seek words. You may read books made of words but it is what the book is about that you look for. The same goes for content, only with a broader spectrum. You seek things that you like, things that you value. Content, being nonspecific, means your horizons can be broad.
> The data backs this up. 54% of LinkedIn posts are now likely AI-written (Originality.ai). 15% of Reddit posts too, up 146% since 2021. Every competitor has the same capability to generate keyword-optimized, structurally correct, grammatically polished content. In about twelve seconds.
I don't know anything about marketing, given that the first paragraph of the blog post makes it clear it's from a marketing context.
But as a user, or literally just a bystander: using AI isn't really a good look. I mean for LinkedIn posts, I guess. Isn't the whole point to stand out by not using AI on LinkedIn?
Like, I can see a post ending with:
Written with love & passion by a fellow human. Peace.
And it would be better / different than this.
Listen man, I am from a third-world country too, and I had real issues with my grammar. Unironically, this was the first advice I got from people on HN; I suddenly became conscious of it & tried to improve.
Now I get called AI slop for writing the way I write. So to me, it's painful to see that my improvement just gets thrown out of the window if someone calls a comment I write here, or anywhere else, AI slop.
I guess I have used AI, and I have pasted my messages into it to find that it can write like me, but I really don't use that (on HN, I only used it once, for testing purposes, on one Discord user IIRC). My point is, I will write things myself, and if people call my writing AI slop, I can really back up the claim that it was written by a human: just ask me anything about it.
I don't really think that people who use AI themselves can say anything back if someone critiques their work as AI slop.
I was talking to a friend once, and we started debating philosophy. He gave me his Medium article; I was impressed, but I noticed the em-dashes. I asked him if it was written by AI, and he said the ideas were his but that he wrote/condensed it with AI (once again, third-world country, and honestly the same response as the original person).
And he was my friend, but I still left thinking: hmm, if you are unable to take the time with your project to write something, then that really lessens my capacity to read it. I even said to him that I would be more interested in reading his prompts, & just started discussing the philosophy itself with him.
And honestly, the same point goes for AI-generated code projects. I have vibe-coded many, but I am often unable to read them, or to find the will to read them, if they're too verbose or not to my liking. Usually, though, in that context it's more just prototypes for personal use, but I still end up open-sourcing them if someone might be interested, I guess, given that it costs me nothing to open-source them.