It would be nice if there were an easier way to detect and filter those "reply guys." If LLMs were forced to watermark their output (possibly by using visually identical non-ASCII characters in inconspicuous places, like a Cyrillic "с" instead of a Latin "s"), detection would have been trivial, but that ship has sailed. The most anybody can do is train another LLM to find offenders and make a list. Bot vs. bot.
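As a rough illustration of that homoglyph idea, a detector only needs a table mapping lookalike characters back to their ASCII counterparts. The table below is a tiny, hypothetical subset, not a real confusables database:

```python
# Sketch: flag homoglyph "watermarks" -- non-ASCII characters that render like
# common ASCII letters. This mapping is a small illustrative subset; a real
# detector would use a full Unicode confusables table.
HOMOGLYPHS = {
    "\u0430": "a",  # Cyrillic small a
    "\u0435": "e",  # Cyrillic small ie
    "\u043e": "o",  # Cyrillic small o
    "\u0440": "p",  # Cyrillic small er
    "\u0441": "c",  # Cyrillic small es
    "\u0455": "s",  # Cyrillic small dze
}

def find_homoglyphs(text: str) -> list[tuple[int, str, str]]:
    """Return (index, suspicious_char, ascii_lookalike) for each hit."""
    return [(i, ch, HOMOGLYPHS[ch]) for i, ch in enumerate(text) if ch in HOMOGLYPHS]
```

Running `find_homoglyphs` on a string where both "s" characters are Cyrillic dze reports both positions; plain ASCII text reports nothing.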
Yeah, exactly. It's best to keep track of and stay aware of the common tropes in AI writing, so that you don't end up five responses deep and emotionally invested in a conversation before you realise you've been fooled into talking to a bot.
I built this tool primarily to identify AI writing in articles and posts but it's proven useful for comments/responses too: https://tropes.fyi/vetter
I'm sure there are other tells, like the delay between post and reply, or time of day, etc. The epidemiology of bots is just getting started, but the tools are bound to leave detectable patterns.
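Reply latency alone is a crude but measurable example of such a tell. A minimal sketch, where the thresholds (a 15-second median and 5-second spread) are guesses for illustration, not calibrated values:

```python
from statistics import median, pstdev

def latency_profile(pairs: list[tuple[float, float]]) -> dict[str, float]:
    """Summarize (post_time, reply_time) pairs given in epoch seconds."""
    latencies = [reply - post for post, reply in pairs]
    return {"median_s": median(latencies), "stdev_s": pstdev(latencies)}

def looks_automated(pairs, max_median=15.0, max_stdev=5.0) -> bool:
    """Near-instant, low-variance replies are suspicious; humans are slower and noisier."""
    profile = latency_profile(pairs)
    return profile["median_s"] <= max_median and profile["stdev_s"] <= max_stdev
```

An account that always replies within a few seconds of the original post trips the check; a human-paced account with scattered latencies does not.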
They shouldn't have any trouble telling bots and spammers apart from regular user activity. Lots of them still struggle just to interpret tweets, and their replies make no sense. Just removing out-of-place replies with ML would fix most of the problem, or even just restricting mass registrations from narrow IP ranges.
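That IP-range restriction could be as simple as counting signups per network prefix. A sketch, with an arbitrary /24 prefix length and threshold:

```python
from collections import Counter
from ipaddress import ip_network

def flag_signup_prefixes(signup_ips: list[str], prefix_len: int = 24, limit: int = 5) -> set[str]:
    """Return the networks of the given prefix length with more than `limit` registrations."""
    counts = Counter(
        ip_network(f"{ip}/{prefix_len}", strict=False) for ip in signup_ips
    )
    return {str(net) for net, n in counts.items() if n > limit}
```

Ten signups from one /24 get flagged while a lone signup from elsewhere passes; a real system would also weight by time window and account behavior.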
They don't do that because spam is their means to achieve something else, specifically getting rid of left-wing tech anime-porn otakus. The comedy of it is that they've been attempting this by complicating the system, which is like a reverse chemotherapy that is kinder to the cancer tissue than to the body, so the cancer grows faster. I guess they take that as a win, since it's a positive action with a positive reaction (albeit in negative amounts) in lieu of a negative action with a negative reaction in a positive amount.
What would really be nice is Twitter being transferred to someone else. That would at least stop the reverse-chemotherapy stupidity.
> Moving forward, replies via the API will only be permitted if the replier has been explicitly summoned by the original post’s author. This means:
The original author @mentions the replying user/account in their post, or
The original author quotes a post from the replying user/account.
Back when I first heard the term "Dead Internet Theory" I thought it was silly, because at the time language generation wasn't nearly as sophisticated. But nowadays it really is getting more and more difficult to know.
I've noticed that I've recently (had the urge to and) spent a lot more time with people in real life; not sure if there's a causal link. The illusion of social interaction on the internet is fading.
When I look at sites like Reddit I have a strong feeling, at least with some of the bigger subs, that there's definitely a substantial percentage of bots talking to each other there. More on some subs, less on others. Definitely on the political ones.
The problem is that on most sites trust is attributed via account history, which is cheaper than ever to fake with these reply-guy services. Twitter/Meta verified badges help, but IMHO the only real solution is something invite-only like Lobsters, where you can easily weed out invite rings etc.
If you follow the link to the tweet but don't have an account there, you'll miss a joke, because Twitter doesn't show threaded replies to logged-out users. The xcancel link shows it. Here's the two-tweet sequence:
> AI-generated replies really are the scourge of Twitter these days. Anyone know if it's from packaged solutions being sold as a product or if it's people mainly rolling their own custom reply-bots
> ... and I just found out the category name for this is "reply guy" tools which is so on the nose it hurts
(You can confirm this by Google searching "reply guy service".)
One needs to consider why automated responses are used at all.
Is it driving engagement? Inflating metrics? Manipulation? I don't see a scenario where it's done purely because someone wants to be nice.
1. Get more followers. A lot of people see follower count as a goal that matters to them. Replying to high-follower accounts may earn you a follow from them, or from someone reading their replies who doesn't catch that you're a bot.
2. Establish account credibility. Does Twitter's algorithm rank posts higher from accounts with a long history of engaging with other accounts? I don't know for sure, and neither do they, but they may believe it's worth trying anyway.
3. Accounts for sale. There's a market for used Twitter accounts with plenty of realistic-looking activity. Maybe these spammers are building inventory.
So, one of the main problems Elon promised to solve has been rampant since his takeover, even before the "AI wave."
I still don't understand why people use his platform and give him the power he has. We have seen him use it to reduce children's access to food, promote people who are examples of no ethics whatsoever, and actively work on destroying numerous democracies by spreading right-wing propaganda.
One of the things giving him the power to do this is the platform's user base, and anyone still on Twitter is contributing to it.
It's ridiculously toxic. If you don't wish to participate in any form of internet culture war or politics, it's virtually impossible there. For me the feed is mainly ridiculously stupid Russian propaganda or politicians tilting at each other. The "Do not recommend" button does nothing.
The problem is that he doesn't care about the money, so he can fuel his rage-bait machine for as long as he wants, which would normally not be possible.
Just had a colleague discover how to copy-paste ChatGPT output into Teams this morning. So now I'm getting fed whatever semi-relevant gibberish she gets out of her LLM (and likely didn't even read herself).
FML, we better develop social norms around this ASAP because this fuckin blows.
Eh, I kind of like the pasting back and forth of replies or Git comments. It means they can indulge their little whims and fussiness about variable names, or about whether something is an edge case, and I don't need to build in delays to frustrate them into going away.
AI in the middle makes colleagues more tolerable if you didn't really get along with them well originally.
I love AI-generated replies. I use them on all the cold mailers who try to sell me shit. I just tell the AI to give me a one-A4-page response that gently strings them along with vague interest, without committing to anything.
The more determined salesmen last for 3-4 emails, but most drop off after 2 or so.
I've been trying to will a web-of-trust-style system into existence for a while now; I lack both the marketing skills and the programming know-how to actually create it though =)
Basically a way to see on every web page whether an actual human (or more) in your network has vouched for the content to be written by a person.
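In case it helps anyone pick the idea up: the core lookup could be a breadth-first search over a trust graph, checking whether anyone within a few hops of you has vouched for a page. The data shapes and the hop limit here are all made up for illustration:

```python
from collections import deque

def vouched_in_network(me, url, trust_edges, vouches, max_hops=2):
    """Return the first user within `max_hops` of `me` who vouched for `url`, else None.

    trust_edges: {user: [users they directly trust]}
    vouches:     {url: {users who vouched the content is human-written}}
    """
    seen, queue = {me}, deque([(me, 0)])
    while queue:
        user, hops = queue.popleft()
        if user in vouches.get(url, set()):
            return user  # someone in my network vouches for this page
        if hops < max_hops:
            for friend in trust_edges.get(user, ()):
                if friend not in seen:
                    seen.add(friend)
                    queue.append((friend, hops + 1))
    return None
```

The hard parts this sketch skips are exactly the ones that need marketing and crypto: getting the trust graph populated, and making vouches unforgeable (signatures) rather than plain dictionary entries.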
A crazy thought I had is that agents without a link to a human identity might need to be treated as illegal. That human identity would be held responsible for the agent's actions.
This raises a rat's nest of issues, but will we be able to avoid this necessity?
I think you've just thought of CAPTCHAs? Unfortunately, AI has increasingly become better than humans at solving the tasks we throw at it in such tests.
And how would you do that without dystopian verification checks?
The reason YouTube and Discord are so gung-ho on age verification might be that these companies, which sell ads and data, have a monetary incentive to distinguish humans from bots for their investors and shareholders.
If I were to choose, I'd rather have a bot-infested internet than a mass-surveillance dystopia.
Now I've been wondering: what's the polite way to exit a conversation once it becomes obvious that your fellow interlocutor is merely a chunk of electric meat redirecting the output of Sam Altman? I'm talking blatantly obvious, e.g. "it's not x, it's y" multiple times in the same paragraph.
What an odd question. If the other entity is an AI, there is no need to be polite.
But personally, if I get value out of a conversation, I'll continue; if I don't, I'll stop responding. Whether or not the other side is an AI is only relevant if I think I'm building some kind of rapport or friendship with someone. Otherwise what matters is whether the comment makes me think, or makes me want to write something. If only AI bots were reading the comments, that would be a bigger issue than whether the specific comment I'm replying to is AI-written.
I find it odd that, when it comes to natural language, we all agree that the LLM is stuck in an uncanny valley, yet no one is acknowledging that the code it generates has a similar alien feel to it.
I don't think this is productive. You can already adjust the style of LLMs, and it's only going to get better over time. Any tool or strategy you come up with for detecting a bot can then be turned into a generative-adversarial setup that effectively trains a system to break the tool.
The bots are going to win this war. I'm not sure what the implications of that are, though.
Given that you're citing Wikipedia on this, the issue of detecting and fighting auto-generated slop in articles is actually quite fascinating.
There was a really interesting talk by Mathias Schindler (long-time editor of German Wikipedia) at the 39C3 conference about this topic a few months back that is worth a watch for anyone interested in the issue: https://youtu.be/fKU0V9hQMnY
Don't believe it, MY FELLOW OXYGEN CONVERTING FRIENDS! This is just outrageous conspiracy-theory-nonsense! This person is clearly and obviously a botist attempting to create a narrative that makes artificial intelligence look bad!
I, A GENUINE FELLOW HUMAN, just like yourselves, have not ever noticed any replies written by any so called scripts, bots, robots, AI, LLMs anywhere!
Frankly, I think AI-generated content is the least of Twitter's concerns ... I'd wager it is actually raising the average quality of content over there.
The dead internet theory is rapidly coming true. More and more of the content has been at least significantly produced by AI, and it's only going to get worse.
A corollary of the dead internet theory is the phenomenon where people suspect any content to be AI generated. Sometimes one em dash is enough to spark such suspicions and allegations. Not only is fake content falsely labeled as real, real content is increasingly falsely labeled as fake.
dewey|5 days ago
https://devcommunity.x.com/t/update-to-reply-behavior-in-x-a...
fooker|5 days ago
Google has spent billions trying to distinguish bots from users, and has been largely unsuccessful.
da_grift_shift|5 days ago
I read the whole thread and there's no joke here.
AI-generated replies from bots really are the scourge of HN these days.
Anyone know if it's from packaged solutions being sold as a product or if it's people mainly running their own custom Claws?
PacificSpecific|5 days ago
Especially for my parents, who are getting targeted like crazy by telemarketers.
fooker|5 days ago
I wonder if it is possible at all to have anonymity without admitting bots.
LZ_Khan|5 days ago
Those are probably replies crafted by non-English-speaking scammers from India / Russia / China.
There's probably a whole sea of undetectable replies from people who know how to prompt the models properly.
Aeglaecia|5 days ago
A great link to share around!
theshrike79|5 days ago
Kinda similar to the ye olde newsgroup custom of replying "plonk" when you add someone to your killfile.
https://old.reddit.com/r/totallynotrobots
curiousObject|5 days ago
This is a complex problem, but the first part of it is Twitter/X itself.
Avoid it, and the next step toward a solution may be easier.
zombot|5 days ago
You shouldn't believe everything you read on the internet.
moomoo11|5 days ago
If you bought into that, then congrats, he sold you.
lapcat|5 days ago
He says a lot of shit.
Robots are the new cars. The Moon is the new Mars. Turn, turn, turn.
iberator|5 days ago
If you made 50,000 AI-slop comments, it would be possible to prosecute and PROVE it in court.
Just because it's hard doesn't mean we should accept it.
The same goes for CHEATERS using AI at universities.
If caught with solid evidence, it should be like 5 years in jail. That would stop 90% of CHEATERS.