item 47204253

xomiachuna | 1 day ago

This is an article from 2024, when open-weights models like Llama were only beginning to emerge. With those you basically cannot do any reliable detection (as the authors admit by the end).

Which really boils down to the text having statistically very similar properties to human-generated text. Introduce a more motivated attacker and the text becomes indistinguishable from the real thing (with occasional typos, no use of "delve", no "it's not x, it's y", no em-dashes, and so on).

It really is a lost battle: you cannot embed extra information in the text that will survive even basic postprocessing (in contrast to, say, steganography).
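To make the "doesn't survive postprocessing" point concrete, here is a toy sketch of the kind of statistic a green-list watermark detector (in the style of published schemes that bias sampling toward a pseudorandom "green list" seeded by the previous token) would test. The function name and the hash-based green list are made up for illustration; a real scheme works over model vocabulary IDs, not words:

```python
import hashlib

def green_fraction(tokens, green_ratio=0.5):
    """Fraction of tokens whose hash, seeded by the previous token,
    lands in the pseudorandom 'green list' -- a toy stand-in for a
    real watermark detector's test statistic."""
    hits = 0
    for prev, tok in zip(tokens, tokens[1:]):
        digest = hashlib.sha256(f"{prev}|{tok}".encode()).hexdigest()
        # Token is 'green' if its seeded hash falls in the first
        # green_ratio slice of the range.
        hits += int(digest, 16) % 100 < green_ratio * 100
    return hits / max(len(tokens) - 1, 1)
```

Unwatermarked (or paraphrased) text should hover near `green_ratio`, while a watermarking sampler pushes the fraction high enough for a z-test to flag. The fragility is visible in the construction: each token's greenness depends on the (previous, current) pair, so swapping even a few words re-seeds the affected pairs and pulls the statistic back toward chance, which is exactly why basic postprocessing erases the signal.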

piperswe|1 day ago

Ultimately it shouldn’t be too surprising that a machine that works by generating the most statistically likely text generates text that’s statistically identical to human-generated text.

userbinator|1 day ago

I've never seen the word "delve" show up with such frequency in the pre-AI era, but now it's an overwhelmingly large signal of LLM-generated text, so I'm not sure where that came from. Ditto for vomiting emojis everywhere.
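Those surface tells are at least easy to count. As a toy illustration (the word list, patterns, and function name here are made up; a real detector would need calibrated frequencies, not a hand-picked list), a few regexes already capture the signals being described:

```python
import re

# Hypothetical list of surface tells people associate with LLM output.
TELLS = [
    r"\bdelve\b",                       # the infamous "delve"
    r"\u2014",                          # em dash (U+2014)
    r"\bit'?s not \w+[,;]? it'?s\b",    # "it's not X, it's Y" framing
]

def tell_count(text):
    """Count case-insensitive occurrences of the surface tells above."""
    return sum(len(re.findall(p, text, re.IGNORECASE)) for p in TELLS)
```

The catch, per the parent comments, is that these tells are artifacts of current post-training, not of language modeling itself: a motivated attacker (or the next round of fine-tuning) simply removes them, and the counter goes back to zero.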

lelanthran|22 hours ago

It's not statistically identical to human writing.

slopinthebag|1 day ago

I'm not so sure I buy that. AI-written text is fairly obvious to good writers with exposure to LLM output. Is it a case where it's sort of an average of writing styles, but that average is not human, and thus humans can detect it?

littlestymaar|23 hours ago

> the machine that works by generating the most statistically likely text

You've just described a “base model” (or pre-trained model), but later training stages (RLHF, GRPO, whatever secret sauce model makers use) induce a strong bias in the output.

Also, being “statistically identical to human-generated text” doesn't mean it's unrecognizable, because human-generated text exhibits many distinct clusters (you're not texting your friends in the same language you'd write a book in), and an LLM can, and in practice does, use language that doesn't fit the tone a human expects in a given context (like when bots write LinkedIn-worthy posts in a Reddit comment section). The “average human-looking text” is as unnatural to us as a “synthetic average human” with one testicle and half a vagina would be.

nylonstrung|1 day ago

It sounds like a "cursed problem". Are there any contemporary techniques that show any promise?