synapsomorphy | 2 months ago
Beyond these surface-level tells, though, anyone who has read a lot of both AI-unassisted human writing and AI output should be able to pick up on the many subtler cues. These persist partly because they're harder to describe, which makes it harder to RLHF LLMs away from them.
But even today, when it's not too hard to sniff out AI writing, it's quite scary to me how bad many (most?) people's chatbot-detection senses are, as this article indicates. Mistaking human writing for LLM output is a false positive, which is bad but not catastrophic; the opposite mistake seems much worse. The long-term social impact, becoming "post-truth," seems poised to be what people have been warning about for years with respect to other tech like the internet.
Today feels like the WW1 equivalent for information warfare: society has been caught with its pants down by the speed of innovation.
lapcat | 2 months ago
Or rather by the slowness of regulation and enforcement in the face of blatant copyright violation.
We've seen this before, for example with YouTube, which became the go-to place for videos by allowing copyrighted material to be uploaded and hosted en masse, and then a company that was already a search engine monopoly was somehow allowed to acquire YouTube, thereby extending and reinforcing Google's monopolization of the web.