Slop seems like a good term for unwanted AI-generated content.
But I wonder how much of this is AI and how much we'd already curated a slop pattern even before AI:
- Video game tips web pages with massive chunks of text / ads before you get to the inevitable answer of "hit A when X happens".
- The horrendous mess Quora became, with answers to historical questions that are technically correct in some ways but also misleading.
- Medium articles about coding that are filled with irrelevant pics and blocks of text that are "not wrong" but also "not right", followed by weirdly specific code...
We had all that before AI.
Agree. Content was the OG slop. Buzzfeed with monkeys on typewriters.
The problem is that dopamine addicts generate outsized engagement. I know a literal crack mom who spends a solid 90+ hours a week watching accident videos to keep her brain triggered. The algorithm caters to her: send promotional emails daily or more often, push constant notifications, recommend the same few videos over and over. Gotta get in there before she clicks another car crash video.
IMHO: Marketing is a top societal evil right now. If the media machine wasn't so desperate for content, AI wouldn't be a fraction of the problem it is. But with everyone obsessing over the next piece of content, fake AI presentations are mandatory.
I think you're right. Since LLMs went mainstream, I've seen a lot of my colleagues' presentations and thought "was this written by ChatGPT?" But I've come to wonder if LLMs have just given me the frame of mind to identify low-effort slop that lacks any original insight but uses all the right sorts of words and phrases, regardless of whether it was authored by a human or not.
Yes, AI isn't entirely to blame for this - it's low quality, irrelevant and misleading content in general.
Also, we have to look at the incentives: advertising. Somehow, this is acceptable to consumers, profitable for companies, and profitable for publishers. How that works is absolutely beyond me... and it won't change so long as Google has a majority of the search space, as they are directly profiting from this.
What AI gives us is vastly cheaper slop, so now it can be produced at a scale unimaginable to prior generations. No more paying some schmuck a penny a word to bang out "private label articles" that were only practical as SEO fodder. Now you can have unique slop for every spam email, every search query!
Truly, we are making the world a better place.
I can see the use to describe AI spam, but I'm starting to see people using it to describe anything they don't like; basically a replacement for "mid", which was heavily used the last couple of years.
I've noticed that when some people learn a new "trendy" word, they want to use it at every possible opportunity until it loses meaning.
The signal-to-noise ratio being so bad on the web today is AI's most compelling use for me: it's better at getting a pretty-close-to-right answer than searching the web, with much less crap I have to block along the way.
But, consider that all that crap ended up on the web for a reason, and wonder how long before AI just injects it itself into its own results.
That's a result of natural selection forced by search engines. I think that's why I like ChatGPT so much. You can ask it very specific things and it will tell you exactly what you need. It does also output verbose answers by default, but you can control that by prompting for a short answer.
It's true... the quality of content on the Internet has a bunch of problems, and AI is just one of them. The economic incentives to trick people into staying on your page and scrolling as much as possible are a fundamental part of the problem. Politically-motivated ragebait and lies are a separate huge problem. AI-generated slop is also a problem for content quality and UX, but I'm far more concerned about the impact of AI on the value of human labor and intellectual property than I am about the UX of my search result pages.
Literally all of these are just symptoms of the general public's declining ability to think critically.
The content/spam/slop is simply being tailored to be effective with its intended audience.
But that's not the scary part.
The difference with AI slop is just the enormity of the scale and speed at which we can produce it. Finally, a couple of data centers can produce more slop than the entirety of humanity, combined.
When I was a kid and was told to write an essay on "what is slop", teachers would give lots of extra points for dumping useless and only vaguely related information just to raise the word count. Answers along the lines of "slop is useless shit created only to serve as filler content to make money on stupid people" would get zero points. I was expected to write the history of slop, the etymology of the word, the cultural context, the projected future, blah blah blah, and don't forget at least ten citations, even if they were even more useless than the essay I was writing and 100% pure unadulterated slop.
My master's thesis was on a topic that nobody else had researched (it wasn't revolutionary, just a fun novel gimmick), so I had to write filler just to have a chapter on a topic it was possible to find references for, in order to hit the citation count, even though the chapter wasn't relevant to the actual topic of the thesis.
So yes, I think the push to create slop was there even before computers became a thing; we just didn't recognize it.
As with everything, I think it's scope and scale: Quora was always a cesspool, but now every single question has a machine-generated response that's frequently incorrect or misleading (sometimes in legally concerning ways, as one was for me recently).
I don’t think they are saying that the internet hasn’t been shit. It is. I think what they are saying is that it is about to get a whole lot shittier thanks to AI.
Anything produced in response to classic "SEO manipulation", written to rank higher on a search engine results page by creating the appearance that the content is higher value, more comprehensive, or took more effort to produce, creates net slop. And that's been going on for 10+ years.
I guess the problem is that, for the lazy, the ability to generate slop has accelerated significantly with the advent of AI. Slop creators have been disproportionately empowered by AI tools. People who create quality content still benefit from AI, but not to the same extent.
It's not a horrendous mess for me. It works very well. Everything depends on what content you interact with, as the algorithm heavily shapes your feed based on what you engage with. It's no different from any other social network.
Yeah, slop isn't new; AI just makes it easier to produce.
Other examples include those books where each chapter has, generously estimated, a tweet's worth of thought padded out with 35 pages of meandering anecdotes that just paraphrase the same idea. It's very clearly a sort of scam: the padding is there to make it seem like the book has more information than it does when you look at it in a digital bookstore.
Yeah, and most of the reason for that can basically be summed up as "it's what Google incentivises".
They look for detailed pages, so pages are bloated with irrelevant information. They look for pages people spend a lot of time on, so the same thing occurs. Plus, the hellscape that is modern advertising means that rushing content out quickly and cheaply is encouraged over anything else.
AI will probably accelerate the process even more, but it's already been a huge issue for years now.
Both HN itself and prolific HN contributor simonw get shoutouts in the article:
“The term [‘slop’] has sprung up in 4chan, Hacker News and YouTube comments, where anonymous posters sometimes project their proficiency in complex subject matter by using in-group language.”
“Some have identified Simon Willison, a developer, as an early adopter of the term — but Mr. Willison, who has pushed for the phrase’s adoption, said it was in use long before he found it. ‘I think I might actually have been quite late to the party!’ he said in an email.”
The first substantive discussion of the word here seems to be this: https://news.ycombinator.com/item?id=40301490
I'm a huge Neal Stephenson fan. Cryptonomicon is to this day one of my all-time favorite books. Years ago now I read Anathem. It wasn't as good but it had some really interesting ideas.
One such idea was how the Internet was filled with garbage by all these agents (which were implied or stated to be AI, I can't recall which). They would subtly change things to be wrong. Why? Essentially to sell you a solution that filters out all the crap.
Currently we rely a lot on altruism for much of the information on the Internet (eg Wikipedia). AI agents will get harder and harder to differentiate from actual humans making Wikipedia edits. I don't think we're that far away from human-vs-AI Wikipedia edit wars.
I really wonder how much human knowledge will be destroyed by (intentional or otherwise) AI vandalism in the future.
I see no alternative if people are unwilling to actually pay for content. It's just going to be individualized slop feeds on every advertising based media app until they get tired of that (zero sign of that coming).
Maybe the algorithms will be so good, and enough creative people will use these tools to generate truly exciting content they couldn't have made otherwise, but right now it just looks totally dire for creatives.
The LLM/generative-AI genie is out of the lamp. I'm just some random midwit, but some predictions:
- Slop will continue to become cheaper to generate, and people will only notice the obvious stuff
- Hyperpersonalized content will abound, yet authenticity will run dry
- The lack of authenticity in electronic channels will drive a small segment of people offline into less fakeable (for now) social contexts
- Humans online will walk a treadmill of increasingly convoluted shibboleths / Gnirut tests (reverse Turing tests ;)) to self-identify as likely not AI-generated, e.g. subtly run-on sentences that are intelligible but slightly non-conformist to prevailing AI model outputs, and usage of old-school emoticons and other quirks
- Humans will walk on similar "Gnirut treadmills" for visual art, speech, video, and music
- AI models will gladly chase humans along these Gnirut treadmills, filling in canyons and sections of the Uncanny Valley with fractally sophisticated humanlike content
Fifteen years ago (I remember the apartment where I had this thought), it occurred to me that time was running out to write an authentic novel. Soon, computers would generate whole stories in an endless variety of styles, and even if future authors would hand write a book from start to finish, they would likely have been influenced by other artificial writing at some point. Readers would be unable to emotionally connect with authors due to the nagging awareness that the text might have been fully or partially generated by an unthinking, unfeeling machine.
Though I try, I fail to think of a comparable scenario in our past, at least as relates to language. You can look around whatever room you are in and try to identify an object that was made by human hands rather than a factory process. That's a fact that always makes me a bit sad. I think we're headed in a similar direction with the language we consume. Craftsmanship falls by the wayside, and our world loses even more of the human touch that connects us with one another.
> The lack of authenticity in electronic channels will drive a small segment of people offline into less fakeable (for now) social contexts
I think this segment might start small, but it will grow rapidly if the utility of the internet is swamped by low-quality crap. The belief that non-technical people won't catch on to the shenanigans and simply look elsewhere is a bad bet some are making. I think living on the internet during covid gave non-technical people an intuitive feel for all the manipulation, and for how tenuous the quality of the internet is as a tool or public utility they can trust in any form.
Once we get to the point where models are continuously learning, or nearly so, and getting data streamed from thousands of sources, I feel it may be very hard to figure out whether humans are leading the Gnirut treadmill or following it.
I weirdly think the shibbolethization of human culture will be a good thing because it will encourage everyone to be creative, lest they be accused of being a bot and ignored.
Slop has been around a while. I was researching a topic, and noticed that most of the top search results had the same misunderstanding of some of the definitions. The writers were clearly not familiar with the topic, and I'm sure they were just copying each other. All of the articles pre-dated GPT-3.5.
The kicker is that if you ask GPT-4 about it, it spits out the same incorrect information, meaning that GPT-4 was likely trained on this bad data. FWIW, GPT-4o gives a much more accurate response.
And beyond slop, there will be AI models that do product placement. OpenAI's "publisher partnerships" deck explains https://news.ycombinator.com/item?id=40310228
So soon you'll go to a news website and get the political filter bubble that reinforces - or outrages - your prejudices to maximize your engagement. And in the middle of it, the AI will slip in that the brand of grill that caused the fire was rumoured to be ${insert_name_of_competitor_here} etc?
The big future for AI is to move slop beyond outrage and into intimacy territory. If rage was the engagement of the last ten years, then ending up only talking to AIs who pretend to care will be the even more addictive engagement of the next ten :(
Is slop new, or is it just a continuation of SEO, blogspam, and "content"? I love that we have a new, better word that captures the nuance, but it doesn't feel like a new phenomenon.
Slop is not a new term. It refers to content people like (pejoratively, similar to how fast food is something people like) but that for whatever reason (low-brow? bad for you? clickbait? lowest common denominator?) shouldn't be classified as good-quality content. This slang is older than AI, possibly by decades.
NYT writer discovers the term 'slop' days after NYT source leaks on 4chan. Specifically on a board where this is a common phrase. Cites 4chan. I'm connecting the dots
All this talk and jargon is indicative of a mass existential crisis as humanity is faced with the reality that many of its shared cultural artifacts are essentially frivolous.
People should call it what it is. I tried to find some answers on Google earlier in the day, and the first result pages were 100% generated slop. Funnily enough, any AI summary of the slop would be slop squared.
It's everywhere, and I hate it. What ways do people have to keep it out of their day?
Like with all terms originating from 4chan, some people will try to reject it and make it the new n-word like they're paid to do this. The irony is they do it for free on a Friday afternoon. At the end of the day normal people who don't larp as Internet hall monitors don't care and adopt the term anyway. Many such cases. And people haven't learned in the 20 years this kept happening over and over again.
There’s going to be a lot of garbage content out there—but isn’t there already? People have been writing junk to try to get search engine placement for 20+ years.
I’m not necessarily seeing the slop problem. People should always have been skeptical of content on untrusted websites.
Now, if reputable sources start trying to pump out content with AI, that’d be a problem. I suspect for those who try, they’ll quickly lose their reputation.
"Slop" is a general term; you can create slop as a human. For example, YouTubers who upload daily videos about the latest Twitter drama are usually referred to as making slop, especially if they have a main channel where they upload high-quality content. It's not a new term, and it has little to do with AI by itself.
I think it's pretty obvious that the term "slop" has food origins. When you think of "slop", you think of oily, greasy fast food, or disgusting amounts of sugar, syrup, icing, etc. The food analogy strikes again. When someone says something is "slop", they obviously mean mass-produced content that regular people willingly consume to their detriment, because it appeals to our most primitive desires. Something lacking in substance; non-challenging, "roller coaster" content.
What kills me is I now have to hunt for the answer on Quora. I just treat Quora like I do Pinterest: back out and never return.
Just like simple template-generated SEO, template-written "content", etc. before.
In fact, a lot of writing about AI slop could be considered just as much slop...
Spam is trying to sell you something, e.g. an unsolicited email peddling supplements.
Slop is low-quality content, e.g. someone taking a bunch of bird pictures off Google and posting them in a birding Facebook group.
Spam is an ad; slop is not. With AI, it is now much easier to generate slop.
https://web.archive.org/web/20240611214752if_/https://www.ny...
https://www.youtube.com/watch?v=wyoNGSKWIaw