top | item 42484913


karczex | 1 year ago

After a while I started to think this article is AI-generated gibberish. On deeper reflection, I came to the conclusion that the appearance of LLMs has made the internet completely unreliable as a source of information.


rhaps0dy|1 year ago

Maybe stuff like this was already gibberish before LLMs.

TeMPOraL|1 year ago

It was; people like to blame slop on AI as if it were new. It wasn't. Before the current AI hysteria, we just called it content marketing.

So, if this text reads like slop, consider that it may just be personal promotion. I mean, an "AI ethicist" with a book, denouncing experts for saying reasonable and arguably obvious things because they violate some nebulous and incoherent "humanistic"[0] values? Nah, I think someone wants to sell us some more books.

--

[0] - Scare quotes because I actually like humanistic stuff; I just don't like stuff that doesn't make sense.

Lerc|1 year ago

The internet as a source of information was always unreliable. LLMs just made that more apparent.

It is like how people consider Wikipedia unreliable because anyone can edit it, yet multiple studies have shown that Wikipedia tends to be more accurate than traditional encyclopedias.

This is not people being wary of unreliable material, but people being wary only when they can perceive the unreliability clearly.

I don't know if evaluating reliability by the lack of awareness of unreliability is a recognised fallacy, but I can't see any reason why it wouldn't be.

scotty79|1 year ago

I think it mostly just made you aware of unreliability.