item 19021924


greymeister | 7 years ago

I find the methods listed on https://botsentinel.com/faq a bit confusing:

"Q: How do you determine which accounts are classified as fake news?

Classifying fake news accounts is a manual process. We review hundreds of tweets and retweets during the review process. If an account has a large number of followers and a high percentage of misleading and/or factually incorrect tweets, that account could be classified as Fake News."

"Q: Why is my account rated problematic or alarming?

Our machine learning model was developed to identify accounts that exhibit irregular tweet activity related to politics. The more you exhibit irregular tweet activity, the higher your trollbot score will be."

So how much of this is manual and how much is their model? By what criteria do the manual reviewers judge tweets?

It all seems way too opaque without more information.
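To illustrate the opacity complaint: for all we know from the FAQ, the whole pipeline could reduce to something like the sketch below. This is pure guesswork on my part; every feature, weight, and threshold here is invented, which is exactly the problem when none of them are published.

```python
# Hypothetical sketch of a "trollbot score" pipeline. This is NOT Bot
# Sentinel's actual method -- the features, weights, and cutoffs are all
# made up to show how many unstated judgment calls such a score hides.

def trollbot_score(tweets):
    """Score an account 0-100 from a list of per-tweet feature dicts."""
    if not tweets:
        return 0

    def irregularity(t):
        # "Irregular tweet activity related to politics" has to be
        # operationalized somehow; here it's a count of arbitrary flags.
        flags = 0
        if t.get("is_political"):
            flags += 1
        if t.get("hashtags", 0) > 5:  # hashtag stuffing (invented threshold)
            flags += 1
        if t.get("is_retweet") and t.get("per_day", 0) > 100:
            flags += 1  # high-volume retweeting (invented threshold)
        return flags

    # Normalize by the maximum of 3 flags per tweet.
    raw = sum(irregularity(t) for t in tweets) / (3 * len(tweets))
    return round(100 * raw)

def classify(score):
    # These buckets mirror the labels the site uses, but the cutoffs
    # are invented -- precisely the detail the FAQ never states.
    if score >= 75:
        return "alarming"
    if score >= 50:
        return "problematic"
    return "normal"
```

Shift any one of those invented thresholds and a "normal" account becomes "problematic", which is why the criteria matter more than the labels.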


mc32 | 7 years ago

Totally. Are they sure the training data aren't biased? Are they considering the full spectrum of politics? Are they neutral about the results?

beaconstudios | 7 years ago

If you review the users on the list, it clearly is biased - I've only checked 10 random accounts, but at least half of them appeared to be real, regular Republicans. I mean, the fact that botsentinel rates accounts as "problematic" is pretty amusing, given that it's language associated with a particular political orientation.