For the convenience of those who can't conveniently read a PDF on their current device:
> The role that YouTube and its behind-the-scenes recommendation algorithm plays in encouraging online radicalization has been suggested by both journalists and academics alike. This study directly quantifies these claims by examining the role that YouTube's algorithm plays in suggesting radicalized content. After categorizing nearly 800 political channels, we were able to differentiate between political schemas in order to analyze the algorithm traffic flows out and between each group.
> After conducting a detailed analysis of recommendations received by each channel type, we refute the popular radicalization claims. To the contrary, these data suggest that YouTube's recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content. Instead, the algorithm is shown to favor mainstream media and cable news content over independent YouTube channels with slant towards left-leaning or politically neutral channels. Our study thus suggests that YouTube's recommendation algorithm fails to promote inflammatory or radicalized content, as previously claimed by several outlets.
Tweet from the author:
https://twitter.com/mark_ledwich/status/1210743168246771716
> It turns out the late 2019 algorithm DESTROYS conspiracy theorists, provocateurs and white identitarians
> Helps partisans
> Hurts almost everyone else.
As for asserting that "mainstream media and cable news content" is neither inflammatory nor radicalized: I'm not sure how accurate that statement is today.
Whether it works one way or the other, I find what it recommends to me very disturbing.
Example: I'm a former rock climber and am very interested in knots and knot-tying. I watched some vids on hemp ropes, including "braiding" strands of rope. That pointed me to a video on hair braiding, which I watched as I do find it mildly interesting, a knot system that won't bind into a mess. Now YouTube is recommending me very odd videos of little girls doing makeup and gymnastics. And all the static ads are suddenly only dating apps, "find flirty women in your area" junk.
What do I need to watch to get out of this inappropriate category?
> over independent YouTube channels with slant towards left-leaning or politically neutral channels.
Why do they explicitly list these channel types but not right-leaning ones? From what I've heard, the problem is not left radicalization but right radicalization and mind fuckery through conspiracy channels and the like.
I guess I have to read that article.
I don't entirely understand the linked chart here.
But as best I can make of it, it has 1/4 the number of impressions that MSM has, and almost half the impressions of either partisan left or right.
This supposedly minor issue is showing impression numbers equivalent to those of entire media organizations. How is this evidence that YouTube's algorithm does not promote radicalized content?
This quote raises questions:
"The scraped data, as well as the YouTube API, provides usa view of the recommendations presented to an anonymous account. In other words, the account has not ”watched” any videos, retaining the neutral baseline recommendations, described in further detail by YouTube in their recent paper that explains the inner workings of the recommendation algorithm[38]. One should note that the recommendations list provided to a user who has an account and who is logged into YouTube might differ from the list presented to this anonymous account."
Many discussions of radicalization talk about recommendations coming after someone has watched a number of videos.
The situation is that after N videos, the algorithm starts to see a pattern, a niche that the user fits into, and then recommends more niche content. Once the user watches the niche content, either more niche content is recommended, or a mix of that niche and mainstream content is recommended and the mainstream content is boring by comparison.
In a sense, I don't see how a blind recommendation system could escape this situation. It seems more an inherent problem of blind recommendation systems.
What could be a better approach?
* Better education in the US and the world so people have BS detectors?
* Instead of unlabeled recommendations, the algorithm categorizes what you've seen, categorizes what it's recommending, and lets the user say which categories they want, adding or removing categories from the recommendations (a rough sketch of this follows the list).
* Have a more ordinary search tool and a better categorization system (YouTube is a horrible black box now, search is terrible, recommendations are terrible; I mostly find videos from other sites).
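A rough sketch of the category idea in the second bullet, purely illustrative: the `Video` type, the category labels, and the `recommend` helper are all hypothetical, not anything YouTube actually exposes.

```python
# Minimal sketch: recommendations filtered by a user-editable category allow-list.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    category: str  # e.g. "knot tying", "hair braiding", "cable news"

def recommend(ranked_candidates: list[Video], allowed: set[str], k: int = 10) -> list[Video]:
    """Keep only candidates whose category the user opted into, preserving rank order."""
    return [v for v in ranked_candidates if v.category in allowed][:k]

ranked = [
    Video("Bowline in 10 seconds", "knot tying"),
    Video("French braid tutorial", "hair braiding"),
    Video("Prime-time panel highlights", "cable news"),
]
allowed = {"knot tying"}  # the user, not the algorithm, edits this set
print([v.title for v in recommend(ranked, allowed)])  # -> ['Bowline in 10 seconds']
```

The only point is that the labels and the allow-list are visible to and editable by the user, instead of being buried in an opaque model.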
Have the recommendation system mix it up: put niches that are somewhat similar into the recommendations a bit less often, put things further away even less often, and so on. This should make the algorithm recommend counterpoints that are clearly visible to the user (sketched below).
This assumes that content that is partisan in one direction is "similar" to content that is partisan in the other direction and to counter-partisan content.
Avoid censorship, censorship is too easily abused.
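One way to read that mixing idea, as a toy sketch with made-up distances (my own construction, not a description of any real system): weight each candidate by exp(-distance / temperature), so the current niche dominates but counterpoints never disappear entirely.

```python
# Toy distance-weighted recommendation mix; "distance" between niches is assumed to come
# from somewhere (e.g. an embedding) and is simply hard-coded here for illustration.
import math
import random

def pick_recommendations(items, distance, temperature=2.0, k=5):
    """Sample k items; weight decays exponentially with distance from the user's niche."""
    weights = [math.exp(-distance(v) / temperature) for v in items]
    return random.choices(items, weights=weights, k=k)

catalog = {"more of the same niche": 0.0, "adjacent niche": 1.0, "counterpoint content": 3.0}
print(pick_recommendations(list(catalog), lambda v: catalog[v]))
```

With a temperature of 2.0 the weights work out to roughly 1.0, 0.61 and 0.22, so counterpoint content still surfaces, just less often than the user's own niche.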
Difficult to take this analysis seriously when CNN is categorized with The Young Turks as “Partisan Left.” Regardless of what you think of either channel the programming of these two channels is dramatically different.
The fact that the tags were both created and assigned manually by the experimenters throws the results even further into question.
I don't follow your logic. Yes, CNN and TYT are very different. This only means they can be tagged differently, but it doesn't mean they cannot be tagged under same category.
For example, The Selfish Gene and Advanced Algebra are two very different books, but they can be both categorized as nonfiction.
Maybe you are right about the conclusion, i.e. that CNN and TYT should not both be categorized as “Partisan Left,” but you need a better reason.
It's interesting to see how arbitrary all the points along the 'Overton window' are; I'm not from the US, so CNN strikes me as center-right: low regulation, anti-union, moderately pro-war, etc.
This illustrates the danger of ranking everything relatively... if enough people watch InfoWars-like content, then every university will be marked as radical left wing, etc. There needs to be some way of anchoring the spectrums in actual facts or we're firmly in the 'post truth' era.
CNN and TYT are both partisan left. They both published content showing extreme dismay at the election of Trump. That's fine, they're allowed to do that. This just shows their political bias. The same way Fox was airing pro-Trump things at the same time. This isn't a dispute over whether they are right, just over what type of content they publish. I see no problem putting CNN and TYT on the same side of the political spectrum.
EDIT: Downvotes? Give me a counter example that shows how TYT or CNN is not left-leaning.
I think this study is testing the wrong hypothesis.
Their idea is that people get "radicalized" by being recommended extremist content after watching mainstream news videos, i.e. Fox News -> 9/11 conspiracy videos. They demonstrate that this doesn't happen often.
But the real problem with YouTube is intracategory recommendations. If someone watches one conspiracy video, then their recommendations become all conspiracy videos which inevitably leads to them consuming more and more far right content.
This seems like good science and well-conducted research. I just don't know if they had the correct view of the problem.
The 9/11 event is by definition the outcome of a conspiracy. The parties to the conspiracy and the methods employed are where some disagree with the official analysis. Regardless of your political stance, do your fellow human beings a credit and acknowledge that it wouldn't be the first time a government lied to its people, and it won't be the last.
Also it's kind of a weird take that extremism == far right. It seems like far left would be included, again as a matter of definition.
> If someone watches one conspiracy video, then their recommendations become all conspiracy videos which inevitably leads to them consuming more and more far right content.
Just in case you didn't know, conspiracy theories aren't an exclusively right wing thing.
We can keep doing study after study, but the kernel of all of this seems to be an inability of certain academics and tech industry titans to understand that not everyone thinks like them or holds the same values. There is an undercurrent in all of this confusion that if we can just control what others see and hear, we can make them believe the exact same things we believe. After all, we are right, they are wrong, so the error must be due to some contamination of their minds by dangerous content. Eliminate access to that content and all will be well. Banning wrongthink from YouTube might be good for advertisers and Google's bottom line, but it's not going to make thoughts that have been around since the dawn of humanity magically disappear.
> There is an undercurrent in all of this confusion that if we can just control what others see and hear, we can make them believe the exact same things we believe.
I mean, that’s how advertising works and most of these businesses are ads funded.
Vaccines work; the safety risks are minimal; there is no reliable evidence that they cause autism and the primary proponent of the theory was eventually struck off for malpractice. Getting this wrong will cause the unnecessary suffering and death of small children.
The earth is spherical, and this has been known since ancient times. Some things are not a matter of opinion.
I'm curious why all submissions about YouTube manipulating their recommendation algorithm get flagged. I've submitted lots of links to HN, most of which never get any points of course, but only those about YouTube have ever been flagged:
* https://news.ycombinator.com/item?id=21793498
* https://news.ycombinator.com/item?id=20475792
I can speculate that this particular submission might be flagged because it goes against the anti-YouTube propaganda that some HN-popular companies, like Mozilla, are participating in. And, you know, people love to suppress views that oppose their propaganda. But of course there might be a more specific organized effort too.
Does Youtube radicalize? Edward Snowden made an observation during his Joe Rogan interview that compartmentalization within the CIA is necessary so that no person can see all the bad stuff at once and freak out at its total enormity. As a sysadmin, he was in effect radicalized by seeing the bigger picture at NSA that wasn't visible to other analysts.
I think Youtube shows you an analogous bigger picture that makes the dominant media narratives seem fabricated and dishonest when viewed together, even if individually most of it is sincere (if dumb) reporting.
The discourse around radicalization has also been abused by editors and academics with agendas to de-normalize formerly moderate views, and who treat the interests of regular people as beneath discourse. So it's hard to take any discussions of radicalization seriously, even ones questioning whether it's even a thing, as you still have to acknowledge the nonsense it has been freighted with first.
FWIW, I'd note that Snowden was a CIA contractor, not operative or spy, so his understanding of why compartmentalization is done should maybe be taken with a grain of salt.
(For example, another---and if I understand correctly, the main---reason compartmentalization is done is so a single compromised agent can't give "the whole farm" to an enemy intelligence organization. It's not about protecting agents from 'the real story,' but from making sure the enemy doesn't have a picture of the agency's understanding of 'the real story').
This applies every bit as much to Google's web search as it does YouTube search.
I know of many marginalized people targeted by online hate groups that have been smeared with proven-false allegations and doxxed online, and these stalking websites are consistently ranked as the top results in Google search and Google Images search results for their names.
Google's AI is clearly trained to recognize and promote controversy, because it is human nature for controversy to drive engagement.
I've been convinced for a while now that Facebook tries to start fights between people of opposite ideology. I'm sure it's just that their algorithms have picked up that pro-skub people are highly likely to reply to anti-skub comments in pro-skub threads (since that was the pattern I was seeing) but it sure comes across as trying to cause drama.
What applies? The submission concludes the opposite of what you're describing.
> To the contrary, these data suggest that YouTube's recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content.
This is a pretty clever way to identify natural recommendation levels. I wish the authors had given a bit more justification of why net impression flows are a good metric, but I don't want to quibble too much, because it seems pretty reasonable and it's certainly a vast improvement over what I was afraid I'd find going in.
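For what it's worth, here is how I read the net impression flow idea, as a small sketch; this is my own reconstruction from the abstract, not the authors' code, and the category names and counts are invented.

```python
# Net impression flow between channel categories (reconstruction with made-up data).
from collections import defaultdict

def net_flows(impression_events):
    """impression_events: iterable of (from_category, to_category, count) recommendation impressions."""
    flow = defaultdict(int)
    for src, dst, n in impression_events:
        flow[(src, dst)] += n
    # Net flow A -> B = impressions pushed toward B minus impressions pushed back toward A.
    return {(src, dst): n - flow.get((dst, src), 0) for (src, dst), n in flow.items()}

sample = [
    ("Partisan Right", "MSM", 400), ("MSM", "Partisan Right", 120),
    ("Partisan Left", "MSM", 250), ("MSM", "Partisan Left", 300),
]
print(net_flows(sample))  # a positive value for (A, B) means net recommendation traffic toward B
```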
> However, it seems that the company will have to decide first if the platform is meant for independent YouTubers or if it is just another outlet for mainstream media.
Come on, authors, save it for the blog post.
> with slant towards left-leaning or politically neutral channels
So ... YouTube is actually politically biased? This would be an interesting study to continue. You would need to normalise for view counts etc., since I suspect there is a bias just in terms of raw numbers. The biggest channels are more progressive and left-leaning, so YouTube could appear left-biased simply by recommending the most-viewed videos.
https://www.wired.com/story/not-youtubes-algorithm-radicaliz...
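On the normalisation point, a toy example of why raw impression counts mislead; the numbers are invented purely for illustration.

```python
# Impressions-per-view as a crude, size-normalised measure of how hard the algorithm
# pushes a channel (invented numbers, not from the paper).
def push_rate(impressions_received: int, channel_views: int) -> float:
    return impressions_received / channel_views

channels = {
    "big left-leaning channel": (9_000_000, 120_000_000),  # looks most favoured in raw counts
    "small neutral channel": (300_000, 2_000_000),
}
for name, (impressions, views) in channels.items():
    print(f"{name}: {push_rate(impressions, views):.3f} impressions per view")
# 0.075 vs 0.150: the smaller channel is actually pushed harder once size is accounted for.
```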
I devised a simple test for this. After the last US election there was some analysis and discussion of Google's youtube recommendations engine which appeared to show a strong bias towards conspiracy videos, which in turn led to videos that generally promoted Trump over Clinton. Now, I'm not American, and don't care who your president is, but it was around this time that I realised youtube's search engine had become pretty useless for me. Since then I only use youtube for watching tutorials or technical talks (and old TV shows that are hard to find elsewhere). I try to avoid the search engine and recommendations altogether.
OK, so that's the why; now what's this simple test? Just search YouTube for CERN and note the mix of results. I found that most of the results could be categorised as either pro-science or anti-science/conspiracy. In an ideal world I would expect to see the official CERN channel at the top, followed by sciency videos about CERN, and finally the fringe conspiracy videos about wormholes to other dimensions and such. It's fun to use Tor to see how the results vary by country. The last time I did this was 2017, and I don't have the data to hand, but roughly 70-80% of the results in the first few pages were conspiracy-related.
Here's a screenshot of my result in 2017 when I'm logged in. https://user-images.githubusercontent.com/28928495/71557922-...
I just had a quick look now and my first impression is that results have certainly changed and improved, but it's a mixed bag. I will find my old data and update it soon.
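If anyone wants to repeat the test more systematically, here is a minimal sketch using the public YouTube Data API search endpoint. Assumptions: you have an API key, the API's ranking is treated as a stand-in for the site's logged-out ranking (they are not guaranteed to match), and the pro-science/conspiracy labels are still assigned by hand.

```python
# Pull the top search results for "CERN" so they can be hand-labelled and tallied.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

def youtube_search(query: str, max_results: int = 50):
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/search",
        params={"part": "snippet", "q": query, "type": "video",
                "maxResults": max_results, "key": API_KEY},
    )
    resp.raise_for_status()
    return [(item["snippet"]["channelTitle"], item["snippet"]["title"])
            for item in resp.json()["items"]]

for channel, title in youtube_search("CERN"):
    print(f"{channel} | {title}")  # label each row pro-science vs conspiracy by hand, then tally
```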
I see the term radicalization being brought up. Do we serve ourselves by focusing on superficial presentations that won't be the same in 10 years, rather than on how similar everyone is: feeling enlightened and competitive, and lacking consideration that their perceived "ideological adversary" is every bit as vulnerable, needy, and scared as themselves?
How is this different from couple's therapy, just at long distances w/ groups?
None of them want to have a picnic and cooperate with each other. And who could blame them? They both fail to recognize each other's hardships, yet each believes they "get" the human condition so much better than the other. They begin by insinuating that the other group has a character flaw and is so angry, yet all the while they're the ones being angry and accusing (https://en.wikipedia.org/wiki/Projective_identification).
I have a hypothesis: it's all acting out, cathartic, a way to blow off steam. They're not at the soup kitchen or volunteering. They have a dysfunctional coping mechanism stemming from earlier traumas, and it's more profitable to captivate lonely, bored people by stirring up their existential anxiety than to help them find common ground.
Because if people realized the common ground they shared and cooperated, they would start to pass laws and regulations to make healthcare, employment, housing, and education more fair for legal persons. The whole concept of political sides is a sham: they are all legal persons with the same https://en.wikipedia.org/wiki/Maslow%27s_hierarchy_of_needs.
If common sense stuff isn't being fixed and people are bikeshedding: the fix is in, people in suits are giving each other high fives and laughing at you. You're being suckered into squandering your political rights (you voted, or can!) to take worthless, symbolic digs at people rather than get what you need, better laws for the practical issues everyone shares. Hint: they tend to be boring.
> My new article explains in detail. It takes aim at the NYT (in particular, @kevinroose) who have been on myth-filled crusade vs social media. We should start questioning the authoritative status of outlets that have soiled themselves with agendas.
From the linked Medium article:
> These events, along with the promotion of the now-debunked YouTube “rabbit hole” theory, reveal what many suspect — that old media titans, presenting themselves as non-partisan and authoritative, are in fact trapped in echo chambers of their own creation, and are no more incentivized to report the truth than YouTube grifters.
The paper itself has the fundamental flaw of using logged-out recommendations, and it attempts to disprove claims about the 2018 algorithms even after the substantial changes made since then, which invalidates the research entirely.
Less of an agenda than the aforementioned NYT article.
Sure, this research doesn't use 2018 algos, because it's 2019. I guess the NYT and others should've done the research in the first place then. It certainly doesn't invalidate the research - it shows that the claims made by others are probably not accurate in 2019 on YT.
It makes me wonder how one would research Youtube algorithms from the outside.
Perhaps freshly imaged computers in random geographical locations periodically running selenium or other browser automation software, recording which recommendations are made for various viewing preferences?
The problems are: A) the study could be invalidated by basic bot prevention (identifying Selenium, identifying fresh accounts, etc.), B) YouTube actively preventing such research with "new" accounts, and C) a lack of sufficient scale could skew the results.
Ideally Youtube would partner with a transparent auditor and have internal teams work with the auditors, but there's no way Google agrees to that because it could potentially unearth some very bad practices by the Youtube team (if they do exist...).
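A bare-bones version of the browser-automation idea above: the CSS selector for the recommendation rail is a guess at YouTube's current markup and will likely break, which is exactly the kind of fragility point B) is about.

```python
# Load a seed video in a fresh headless browser and scrape the sidebar recommendation
# titles (selector is a guess; real runs would also need fresh machines/IPs per the
# bot-detection caveats above).
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

def sidebar_recommendations(video_url: str, wait_seconds: int = 8) -> list[str]:
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(video_url)
        time.sleep(wait_seconds)  # crude wait for the recommendation rail to render
        titles = driver.find_elements(By.CSS_SELECTOR, "ytd-compact-video-renderer #video-title")
        return [t.get_attribute("title") or t.text for t in titles]
    finally:
        driver.quit()

print(sidebar_recommendations("https://www.youtube.com/watch?v=VIDEO_ID"))  # hypothetical seed video
```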
> It should be noted that this research was likely published with a specific agenda in mind:
Just like the previous "study" which claimed that everybody not part of the corporate media was alt-right.
These types of "research" are funded and run solely for an agenda. Primarily to push social media platforms to direct traffic to corporate media.
Also, "radicalized" is such an interesting word. If the Saudis ran the exact same study with their definition of "radicals," I wonder how that would turn out? What about the Chinese or the Russians? Are their radicals the same as ours? And is the liberal elites' definition of radical the same as conservatives' or the general population's at large?
At the end of the day, it's all about "authoritative" sources - an Orwellian and creepy word if there ever was one.
You can test easily with a dedicated account.
https://www.thedailybeast.com/how-youtube-pulled-these-men-d...
https://www.theguardian.com/media/2018/sep/18/report-youtube...
https://www.cjr.org/the_media_today/youtube-conspiracy-radic...
https://www.youtube.com/watch?v=2Nrz4-FZx6k
https://www.youtube.com/watch?v=P55t6eryY3g