asciimo|3 years ago
Can someone in this space explain why this is a hard problem for YouTube to solve? In my limited understanding, I can see clear, blockable patterns in the bot posts shown in this video.
mattnewton|3 years ago
It's more that there is no classifier that catches a given kind of spam without some false positive rate. Once you deploy it, the scammers just switch to another technique, while you keep paying the false-positive cost of the rule you shipped. The attack surface is nearly the entire human language, and we aren't yet good enough at detecting scams in a scalable, automated way, so we keep bolting on rules whose false positives generate support tickets and lower engagement over time. This is an incredibly hard problem.
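The tradeoff described above can be sketched with a toy rule-based filter. The spam signals, comments, and thresholds here are all invented for illustration; the point is only that tightening the threshold to catch spam also flags legitimate comments, and loosening it lets spam through.

```python
# Hypothetical signal list and comments, purely illustrative.
SPAM_SIGNALS = {"giveaway", "whatsapp", "telegram", "winner"}

def spam_score(comment: str) -> int:
    """Count how many known spam signals appear in a comment."""
    text = comment.lower()
    return sum(1 for signal in SPAM_SIGNALS if signal in text)

comments = [
    "Congratulations you are a WINNER, message me on Telegram",  # spam
    "Thanks for the giveaway announcement, great video!",        # legitimate
    "Loved the editing in this one.",                            # legitimate
]

# Strict threshold: catches the spam, but also flags the legitimate
# comment that happens to mention "giveaway" (a false positive).
flagged_strict = [c for c in comments if spam_score(c) >= 1]

# Lenient threshold: no false positives here, but a spammer who drops
# one keyword now slips under it entirely.
flagged_lenient = [c for c in comments if spam_score(c) >= 2]
```

Whichever threshold you pick, the spammer only has to rephrase, while your false-positive rate is a permanent cost.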
kderbyma|3 years ago
YouTube has no interest in improving its UX; it's only interested in politics, and spam is on its side because it increases their side channels.
kitsunesoba|3 years ago
Seems like it'd be exceedingly well suited to an ML model tuned by a never-ending stream of data (positives, false positives, false negatives, etc.). The cat-and-mouse game would still exist, but the lag between shifts in spammer strategy and the model's ability to deal with them would presumably shrink over time, until eventually it would cease to be worth the effort for many current spammers.
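The feedback loop described above can be sketched as an online learner: a model whose weights are adjusted on each new labeled example, so reviewer corrections continuously reshape it. Everything here (feature extraction, the perceptron update, the sample stream) is a minimal illustrative assumption, not YouTube's actual system.

```python
# Minimal online-learning sketch: a perceptron updated on a stream of
# (comment, label) feedback, where label 1 = spam and 0 = legitimate.

def features(comment: str) -> dict[str, int]:
    """Bag-of-words feature counts for a comment."""
    counts: dict[str, int] = {}
    for word in comment.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

class OnlinePerceptron:
    def __init__(self) -> None:
        self.weights: dict[str, float] = {}

    def predict(self, feats: dict[str, int]) -> int:
        score = sum(self.weights.get(f, 0.0) * v for f, v in feats.items())
        return 1 if score > 0 else 0

    def update(self, feats: dict[str, int], label: int) -> None:
        # Standard perceptron rule: only adjust weights on mistakes,
        # nudging them toward (spam) or away from (legitimate) features.
        if self.predict(feats) != label:
            direction = 1 if label == 1 else -1
            for f, v in feats.items():
                self.weights[f] = self.weights.get(f, 0.0) + direction * v

model = OnlinePerceptron()
stream = [  # invented feedback stream
    ("win a free iphone click here", 1),
    ("great breakdown of the topic", 0),
    ("free iphone winners click my channel", 1),
]
for comment, label in stream:
    model.update(features(comment), label)
```

The cat-and-mouse lag then becomes the delay between a spammer's new phrasing appearing in the stream and enough corrected labels arriving to move the weights.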
PuppyTailWags|3 years ago
- there isn't a lot of $$$ in squashing them, since doing so lowers engagement numbers (engagement inflated by bots is still engagement)
- the PR cost of hammering down on a real user by mistake is higher than the cost of letting a bot keep running
- no one's making them do it, and they have no real competitors in this space, so what does it matter? where are YouTube's customers gonna go?