godelski | 4 days ago

For me it's the constant feel of everything being "exciting" while no real information is actually conveyed. It's a common tactic of both AI and clickbaity articles. There's no hard evidence here, just hearsay. Nothing really to report until there's more information. I don't want drama in reporting, I want facts. But I guess I'm an outlier which is how we got both this AI style and the clickbait it was trained on...

It also doesn't help that all the title graphics have the same dramatic feeling and are certainly AI-generated.

elcritch | 3 days ago

> For me it's the constant feel of everything being "exciting" while no real information is actually conveyed. It's a common tactic of both AI and clickbaity articles.

Yes! You put your finger on what bugs me about "no LLM" rules. It's not that LLM writing is uniquely bad; it's that it tends toward the low-quality, clickbait-prone writing we already see everywhere. Banning LLM content is redundant.

Side note: I'd guess LLMs don't tend toward vapid writing just because of clickbaity training material. Rather, it's more fundamental. Writing well takes effort and energy, and LLMs seem to avoid effort just like humans do. Emotion-based reasoning in humans is itself a heuristic system favored by evolution. Thinking is expensive. Emotional slop is cheap.

whatshisface | 3 days ago

Here's why "no LLM" rules make sense:

Imagine you know a guy named Patel. He pirated every movie ever made and is a prolific writer. So prolific, in fact, that he has a blog, called "Patel's Log." On this blog is a review of every movie ever made.

At first, you think that's neat. It's not exactly a book of all knowledge, but it's a significant human achievement, perhaps even historic.

Things take a turn for the worse when you're reading a review in the Times. You recognize Patel's distinctive style and call him up to ask if the Times stole his post. He says that a Times columnist asked for his opinion, and he sent them a link. It turns out the columnist copied his blog post verbatim, but Patel says he can't complain without being inconsistent, since he pirated every movie ever made.

You find this humorous, until you recognize his style in the Atlantic, then the Post. Eventually you're disappointed when the Ebert staff publish an opinion piece in favor of Patel's Log matching (PatelLM), and you're forced to wonder if that's what Ebert would have thought.

Your boss sends you copy-pasted PatelLM content in a morning Slack message about a movie she watched over the weekend. Your friends quote Patel's Log verbatim on Discord. Hollywood starts using PatelLM to indirectly plagiarize other movies. Soon, Patel's posts begin to echo each other as the supply of novel perspectives is overwhelmed by PatelLM. Film criticism becomes a desiccated corpse, filled with plastic and presented in a glass case with a pin through its heart. Thought is dead. There is only Patel.

godelski | 3 days ago

I've had a similar complaint about publishing in machine learning conferences. They're putting in these "no LLM" rules, but those are just idiotic. Proving LLM usage is really, really difficult, and (one of) the underlying problems has always been bad, low-quality reviews. So why write an LLM rule? Why not tackle the problem more directly?

I don't care if people use LLMs; I care about people generating slop. The two correlate, but by concentrating on the LLM part you just let the problem continue. It's extremely frustrating. Slop is slop. It doesn't matter how it's generated or by whom. Slop is slop. It doesn't matter if you dress it up and put lipstick on the pig. Slop is slop.