It would be nice if there were an easier way to detect and filter those "reply guys." If LLMs were forced to watermark their output (possibly by substituting visually identical Unicode homoglyphs in inconspicuous places, like Cyrillic "ѕ" instead of Latin "s"), detection would be trivial, but that ship has sailed. The most anybody can do is train another LLM to find offenders and make a list. Bot vs bot.
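For illustration, a minimal sketch of what detecting such a homoglyph watermark could look like. The function name and heuristic are made up here; it just flags Cyrillic or Greek letters hiding in otherwise-ASCII English text:

```python
import unicodedata

def find_homoglyphs(text):
    """Flag non-ASCII letters that are likely lookalikes for ASCII ones,
    e.g. U+0455 CYRILLIC SMALL LETTER DZE, which renders like Latin "s"."""
    hits = []
    for i, ch in enumerate(text):
        if ord(ch) < 128:
            continue
        name = unicodedata.name(ch, "")
        # A stray Cyrillic or Greek letter inside English prose is suspicious.
        if name.startswith(("CYRILLIC", "GREEK")):
            hits.append((i, ch, name))
    return hits

sample = "Thi\u0455 sentence looks normal."  # the "\u0455" renders like "s"
print(find_homoglyphs(sample))  # flags position 3
```

A real detector would want the Unicode confusables table rather than a script-name check, but the principle is the same: the watermark survives copy-paste and is invisible to the reader.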
ossa-ma|5 days ago
I built this tool primarily to identify AI writing in articles and posts but it's proven useful for comments/responses too: https://tropes.fyi/vetter
A_D_E_P_T|5 days ago
"Respond within 4-12 hours."
"Do not respond between midnight and 6am EST." (Or CET, whatever makes sense.)
Right now the most obvious traits are the well-known ones that are hard for most LLMs to shake off: em-dashes, word choices, and the very limited ways in which they structure sentences. Terseness and conciseness are also a tell, which sucks.
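Those surface tells lend themselves to a crude scorer. A sketch under obvious assumptions: the phrase list and weights below are invented for illustration, not tuned against anything:

```python
# Illustrative "stock LLM phrases"; any real list would need curation.
STOCK_PHRASES = ("delve", "tapestry", "it's worth noting")

def tell_score(text):
    """Crude heuristic: em-dash density plus a small penalty per stock phrase.
    Higher means more LLM-like. Thresholds are left to the reader."""
    words = max(len(text.split()), 1)
    score = text.count("\u2014") / words  # em-dashes per word
    lowered = text.lower()
    score += 0.05 * sum(lowered.count(p) for p in STOCK_PHRASES)
    return score

print(tell_score("Let's delve into this \u2014 a rich tapestry of ideas."))
print(tell_score("Short plain reply."))  # 0.0
```

The catch, as noted above, is that these tells are exactly what model vendors and prompt writers train away first, so any static scorer decays fast.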
numpad0|5 days ago
They don't do that because spam is their means to achieve something else, specifically to get rid of left-wing tech anime porn otakus. The comedy is that they've been attempting this by complicating the system, which is like reverse chemotherapy that is nicer to the cancer tissue than to the body, so the cancer grows faster. I guess they take that as a win, since it's a positive action with a positive reaction (albeit in negative amounts), rather than a negative action with a negative reaction in a positive amount.
What would really be nice is Twitter being transferred to someone else. That would at least stop the stupidity of the reverse chemotherapy.