coagulopath|1 year ago
(I mean "nobody" in the sense of "nobody likes Nickelback". ie, not literally nobody.)
If I want to talk to an AI, I can talk to an AI. If I'm reading a blog or a discussion forum, it's because I want to see writing by humans. I don't want to read a wall of copy+pasted LLM slop posted under a human's name.
I now spend dismaying amounts of time and energy avoiding LLM content on the web. When I read an article, I study the writing style, and if I detect ChatGPTese ("As we dive into the ever-evolving realm of...") I hit the back button. When I search for images, I use a wall of negative filters (-AI, -Midjourney, -StableDiffusion, etc.) to remove slop (which would otherwise be >50% of my results for some searches). Sometimes I filter searches to before 2022.
If Google added a global "remove generative content" filter that worked, I would click it and then never unclick it.
I don't think I'm alone. There has been research suggesting that users immediately dislike content they perceive as AI-created, regardless of its quality. This creates an incentive for publishers to "humanwash" AI-written content—to construct a fiction where a human is writing the LLM slop you're reading.
Falsifying timestamps and hijacking old accounts to pull this off, though, is a tactic I hadn't seen before.
robswc|1 year ago
So far (thankfully) I've noticed this stuff get voted down on social media, but it blows my mind that people think pasting in a ChatGPT response is productive.
I've seen people on reddit say things like "I don't know, but here's what ChatGPT said." Or worse, presenting ChatGPT copy-paste as their own writing. It's funny because you can tell: the text reads like an HR person wrote it.
Trasmatta|1 year ago
The ones that make me furious are on some of the mental health subreddits. People are asking for genuine support from other people, but are getting AI slop instead. If someone needs support from an AI (which I've found can actually help), they can go use it themselves.
nullc|1 year ago
It makes me wonder how shallow a person's knowledge of all areas must be if they can use an LLM for more than a little while without encountering something where it is flagrantly wrong yet carries on in the same tone of absolute confidence and authority. ... but it's mostly just a particularly aggressive form of Gell-Mann amnesia.
ijk|1 year ago
The other half of the problem is that rephrasing information doesn't actually introduce new information. If I'm looking for the kind of oil to use in my car, or a recipe for blueberry muffins, I want something backed by actual data: confirmation that the manufacturer really specified a particular grade of oil, or a recipe that someone has actually baked and verified gives the promised results. I'm looking for more information than I could get by just reading the sources myself.
Regurgitating text from other data sources mostly doesn't add anything to my life.
tayo42|1 year ago
If LLMs could take the giant, overwhelming manual in my car and pull out the answer to which oil to use, that would be useful, even though it isn't new information.
MrPowerGamerBR|1 year ago
This reminds me of the time around ChatGPT's release, when Hacker News comment sections were filled with users saying "Here's what ChatGPT has to say about this."
Sharlin|1 year ago
Hopefully to everyone on HN, but definitely not to everyone on the greater Internet. There are plenty of horror stories of people who apparently 100% blindly trust whatever ChatGPT says.
scoofy|1 year ago
I basically decided that using AI content would waste everyone's time. However, it's a real chicken-or-egg problem in content creation. Faking activity to the point of project viability has been a real issue in the past (I remember the reddit founders talking about posting fake comments and posts from fake users to make it look like more people were using the product). AI is very tempting for something like this, especially when a lot of people just don't care.
So far I've stuck to my guns. I think the key to a course wiki is absolutely having locals' insight into these courses, because the nuance is massive. At the same time, I'm trying to find ways to reduce the friction for contributions, and AI may end up being one way to do that.
kjs3|1 year ago
Off the top of my head, I wonder if there's a way to have AI generate a summary from existing (online) information about a course, with a very explicit "this is what AI says about this course" or similar disclosure, until you get 'real' local insight. No one could then say 'it's just AI slop', but you're still providing value, since there's something about each course. As much as I personally have reservations about AI, I (personally, YMMV) am much more forgiving if you are explicit about what's AI and what's not, and aren't trying to BS me.
nyarlathotep_|1 year ago
The general trend of viewing LLM features as forced on users against their will, and the now-widespread use of "slop" as a derogatory description, seem to indicate that the general public is less enthusiastic about these consumer advances than, say, programmers on HN.
I use LLMs for programming (and for a few other general Q&A things before a search engine or Wikipedia visit) but want them absolutely nowhere else (except Copilot et al. in certain editors).
egypturnash|1 year ago
Sometimes this comment gets a ton of upvotes. Sometimes it gets indignant replies insisting it's real writing. I need to come up with a good standard response to the latter.
earnestinger|1 year ago
(I very much would like any AI generated text to be marked as such, so I can set my trust accordingly)
Balgair|1 year ago
Reminds me of the old Yogi Berra quote: "Nobody goes there anymore, it's too crowded."
rapind|1 year ago
It's not just generated content; this problem has been around for years. For example, google a recipe. I don't think the incentives to fix it are there yet, and won't be until Google search is so unusable that no one buys their ads anymore. I suspect any business model rooted in advertising is doomed to the eventual enshittification of the product.
agumonkey|1 year ago
wait for the proof-of-humanity decade where you're paid to be here and slow and flawed
ijk|1 year ago
Once you have people sorting through them, editing them, and so on, the curation adds enough additional interest... and for many people, what they get out of looking at a gallery of AI images is ideas for prompts they want to try.