top | item 17974855

jiojfdsal3 | 7 years ago

"(00:54:33) An employee asks what Google is going to do about “misinformation” and “fake news” shared by “low-information voters.” Pichai responds by stating that “investments in machine learning and AI” are a “big opportunity” to fix the problem."

Anyone find this disturbing? They're trying to use AI to manipulate what users 'should' see?

gizmo686|7 years ago

That is literally what Google was founded on. Their core technology behind their initial success was pagerank, an algorithm whose sole purpose is to manipulate what people see. They have stayed dominant through a combination of market forces, and keeping their algorithm near (or, IMO at) the top of the market for generalized search [0], and have leveraged this competency in other markets (video, ads, etc).

Beyond just Google, this is an inevitable result of having anything which resembles the internet we know today. The alternative is to go back to human gatekeepers. While it is arguable whether human gatekeepers are better from a consumption standpoint, it is clear that they are worse from a production standpoint, as they massively increase the barriers to publication.

[0] In actuality, I suspect that Google's "algorithm" involves a fair bit of "cheating" by having humans nudge the results. Political bias aside, I think not doing this would leave them too open to attack from other players in the market who do.
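A minimal sketch of the PageRank idea the comment describes: rank flows to a page from the pages that link to it, with a damping factor for random jumps. This is a simplified pure-Python illustration, not Google's actual implementation; the example link graph is made up.

```python
def pagerank(links, damping=0.85, tol=1e-9, max_iter=100):
    """Power-iteration PageRank.

    links: dict mapping each page to the list of pages it links to.
    Returns a dict of page -> rank, summing to 1.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(max_iter):
        # Everyone gets the "random teleport" share...
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # ...plus an equal share of each linking page's rank.
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dangling page: spread its rank over all pages.
                for q in pages:
                    new[q] += damping * rank[p] / n
        if sum(abs(new[p] - rank[p]) for p in pages) < tol:
            rank = new
            break
        rank = new
    return rank

# Toy graph: "a" and "b" both link to "c"; "c" links back to "a".
ranks = pagerank({"a": ["c"], "b": ["c"], "c": ["a"]})
# "c" has the most inbound links, so it should rank highest.
```

Note how the ranking is entirely mechanical: it "manipulates what people see" only in the sense that some ordering has to be chosen.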

roenxi|7 years ago

I value political neutrality in the workplace, so I have a lot of in-principle issues with what is being shown here right from the get-go. I didn't wind the video back to find the exact quote, but we now have publicised evidence of senior leadership at Google who stood up and said 'obviously our values are not the same as a big chunk of Americans'. Clearly a lot of them are specifically Hillary supporters, a candidate so unelectable she lost to Trump.

There is a practical difference between pagerank, which is a transparent algorithm, and a non-transparent magic algorithm that is controlled by a group who are clearly not engaged with the idea of corporate political neutrality. Taking the views expressed here as a starting point, logically why shouldn't they try to tilt the election result using their power?

EDIT I'm just going to add this in because it just doesn't sit well. It shouldn't be acceptable for leadership in a workplace to stand up and express pain and dismay at the outcome of a democratic process.

liftbigweights|7 years ago

> That is literally what Google was founded on.

No it wasn't. Google was founded on being agnostic and giving you the most objective and representative view of the internet. If it hadn't, it wouldn't have survived. PageRank was not politically driven. It didn't care whether the links went to webpages that catered to Larry Page's or Sergey Brin's political ideology.

> [0] In actuality, I suspect that Google's "algorithm" involves a fair bit of "cheating" by having humans nudge the results.

We know it is. We know that they changed Google News to appease large media companies. We know Google search has been changed to appease large media companies. We know YouTube has been changed to appease large media companies.

The "don't be evil" Google of the 2000s died a long and slow death. The Google of the 2010s has been quite biased. No longer is Google objectively representing the internet as it is; it's representing the internet as Page and Brin want it to be.

I don't know why people are celebrating it just because they are anti-Trump. We know that there is a ton of Saudi, Chinese, and Israeli money and influence in Silicon Valley. Do we really want a monopoly like Google to be politically driven? Do we really want search and YouTube to be politically driven?

Just because Google is being manipulated in your favor today doesn't mean it is going to be manipulated in your favor tomorrow. It just surprises me that so few people here seem to understand that.

wyldfire|7 years ago

> That is literally what Google was founded on.

Hooray! Pre-Google search sucked. It was really bad.

> The alternative is to go back to human gatekeepers.

I don't know what this means. Humans have never audited/controlled what gets indexed by crawlers. It's always automata unleashed on the data. It would be terribly unproductive to prune or tune the index with humans. However, using humans as a part of a feedback loop to tune an algorithm is a good idea [presumably all search engines do something like this].

dragonwriter|7 years ago

It's only disturbing if you reject, in advance, the premise of the question: that the concern is addressing inaccurate factual information. Connecting users with good information is literally the function on which Google was founded, and applying AI to that mission has been a Google vision from very early on.

It would be weird and worrying if Google selectively chose not to do that with information that might have political salience.

Digory|7 years ago

Right, but Google's premise is/was that it is feeding you the unfiltered view of the crowd; if not the world, then at least of American computer users. That's some signal about invariably qualitative judgments on facts.

“Hillary Emails” should give you results based on global pagerank, whether or not you agree with the subjective assessments about the importance of Hillary’s emails. Silently substituting the pagerank of New York or the subjective judgment of a room of Googlers undercuts the signal. If you give me the option to filter by DC or LA pagerank, that’s fine. But there should always be some free, unfiltered pagerank available.

ehsankia|7 years ago

Right, anyone who finds this offensive is implicitly agreeing that removing factually false results would skew the results to be politically biased.

supernovae|7 years ago

People need to understand that fake news is an algorithm used against people - whether or not there is tech or AI pushing it. Those who push fake news know how it programs humans.

Having humans interpret what is real/fake is hard to do without bias, cheating, manipulation, favoritism and so much more.

Google knows social signal processing. It could implement ranking/scoring based on sites like PolitiFact, on how many major news outlets are covering a story, on what the bios of the writers/contributors look like, on what the social graph of their reach is, yadda yadda yadda.

We have signals for so much - that even humans use to sort/score what is real/what isn't. An AI would be able to do much of this based on social graphs and understanding of sources, targets, links, attributions and so much more.

For me, the scary thing isn't using AI to filter known lies. The scary thing is that we have AI that can do this but we don't do anything, because we have let the value of fake news be worth more than the value of standing for truth.
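A toy illustration of the signal-combining the comment imagines. Every field name, weight, and signal here is hypothetical, invented for the sketch; nothing is claimed about what Google or PolitiFact actually expose.

```python
def credibility_score(article):
    """Combine a few hypothetical trust signals into one score in [0, 1].

    `article` is a dict with made-up fields:
      fact_check:       fraction of the outlet's checked claims rated true (0-1)
      outlets_covering: number of independent major outlets reporting the story
      author_has_bio:   whether the byline links to a verifiable author bio
    Missing signals default to the least-trusted value.
    """
    score = 0.0
    score += 0.5 * article.get("fact_check", 0.0)
    # Cap coverage at 10 outlets so one viral story can't dominate.
    score += 0.3 * min(article.get("outlets_covering", 0), 10) / 10
    score += 0.2 * (1.0 if article.get("author_has_bio") else 0.0)
    return score

well_sourced = credibility_score(
    {"fact_check": 0.9, "outlets_covering": 5, "author_has_bio": True}
)
no_signals = credibility_score({})
```

The hard part, of course, is exactly what the thread is arguing about: who picks the signals and the weights.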

peterwwillis|7 years ago

What media organization on this planet doesn't make decisions about what users should and shouldn't see? Breitbart does it, The New York Times does it, your local news channels do it.

Google as a news broker is literally just showing you what Google wants to show you. I don't know why you'd think they wouldn't have a bias, or filter their output based on it. All media does.

adventured|7 years ago

None of those is a fraction as powerful as Google, which is an $800 billion goliath with multiple hyper entrenched monopolies. Those monopolies entirely change the equation and expectations.

Search, Android, YouTube. All three of those are either monopolies or close to it. YouTube by itself is worth a solid 20 times what the NY Times is.

Breitbart is maybe worth $100m, a top 100 US Web site with a couple million readers.

The NY Times is a $3.6b business, with maybe 10x the daily readership of Breitbart (and a much more lucrative readership of course).

If those two want to duke it out with each other, fine. Just like with Fox or MSNBC. None of them possess monopoly positions, much less in extraordinarily large, critical information pathways, as with search, YouTube and Android.

If Microsoft had acted in 1999-2000, during its peak Windows monopoly power, to use the desktop + IE in some manner to try to throw the Bush v Gore election in favor of Bush, the Democrats would have more than lost their minds over that. It would have been considered an extraordinary abuse of monopoly power by Microsoft. Google is going to soon find out they've unleashed a political genie that is never going back into the bottle.

randyrand|7 years ago

It does bother me that they never acknowledge the more knowledgeable and informed Trump voters.

To think they don’t exist is almost as ignorant as the people they’re referring to.

tkmo|7 years ago

Interestingly, Credit Rating is one of the strongest predictors of voting Republican

happytoexplain|7 years ago

I see this complaint a lot - "hey, we/they are not like that" - in response to a discussion about some perceived problem originating from within a broad group of people (I see it happen on all sides, but maybe more often from conservatives in response to liberal "elitism"). Often it is legitimate, but a lot of the time it sadly is just used as an off-topic attack on the conversation. E.g. when somebody tries to discuss the "bad apples" in the police, instead of actually discussing them, the conversation often gets shifted by indignant supporters of the larger group - in this case all police - who feel attacked by the conversation.

I often wonder how to avoid this. Obviously the person bringing up the controversy must not have an aggressive, accusatory tone, but even then these kinds of reactions seem omnipresent in any forum. I wonder how many of the people reacting this way are just reacting emotionally, how many truly believe the premise of the topic is utterly false ("there are NO bad apples in the police"), and how many feel that it might be true, but that the framing is somehow always an irrational attack on the entire group they are a member of.

shams93|7 years ago

The solution is actually free college education for everyone who wants it. Even if not everyone will be able to use it in their job, the only solution is massively easier access to quality education that teaches you how to think, not what to think. The lack of critical thinking skills is a real problem, but it's largely by design; some in DC think having an economic draft is a good thing, and an educated population is not welcome.

tropo|7 years ago

Nearly every college is all about teaching you what to think, not how to think. They like to claim otherwise... which is another thing they want you to think.

The non-STEM courses are particularly a cesspool of telling you what to think. They grade you on it. Politically incorrect opinions, no matter how well argued, will badly damage your grade or worse.

mc32|7 years ago

Will those instructors teach from both sides of the political spectrum and give both major philosophies similar time?

paulddraper|7 years ago

Wait...how many years of free education do you need to learn "how to think"?

smsm42|7 years ago

Yes. They think they can figure out what we should see (and how we should vote, obviously), and their mission, as good people, is helping us get there. Kicking and screaming, if needed; it's all for our own good in the end. Yes, it is concerning. That's why there should be many search engines, and that's why everybody should support projects like DuckDuckGo. Having one corporation - however noble the intentions it sees in itself - solely controlling the information input of billions is not healthy, no matter what your politics are.

jf|7 years ago

Using AI to manipulate what you see is already happening - especially if you browse the web without an adblocker.

jiojfdsal3|7 years ago

Yeah, but this directly has ramifications for our democracy, doesn't it? It looks like they're actively trying to prevent certain content from being seen while exposing the content Google executives feel the masses 'should' see. This doesn't seem very neutral to me. The leaked video is quite disturbing.

ocdtrekkie|7 years ago

The core of Google's business model is using AI to manipulate what people see. (Hint: Mostly ads.) Whether you want to argue it's politically-biased is a separate discussion.

Spam filtering is arguably using AI to manipulate what the user sees. In that case, it's less spam. A lot of AI is focused on finding what's the most interesting/valuable to the user, removing "bad" data, etc.

Members of most political parties, arguably, would like fake news and misinformation to be curated away, it's just that those parties often disagree with what news is "fake".

cjhanks|7 years ago

It is very disturbing. The notion that artificial intelligence can intercept information from individuals, and properly classify its accuracy seems dangerous.

It works with much of internet content, because formal communication has proper structure. But that is beside the point.

The real question is - who is responsible for classifying the training data? And what makes them qualified?

droopybuns|7 years ago

Deeply. Between Gmail's auto-reply that tries to anticipate how I would respond to an email and the material in this video, I'm motivated to get off of Gmail and all Google products.

I can appreciate that they were disappointed in the election results. I am shocked to see leaders of a Fortune 100 company responding this way, in that forum. What are they thinking? That there is no legitimate reason for voting against Hillary? That all who have a different opinion than them are evil?

I have been really skeptical about James Damore- and I still think he’s a tragic clown, but now I am rethinking a few things:

Google appears to be a liberal monoculture that cannot understand legitimate alternative viewpoints.

Google's leadership has no instinct to curb potentially controversial opinions in front of their own ranks. Clearly, no libertarians or conservatives work at Google. Who would tolerate this kind of intolerance from their employer?

grumdan|7 years ago

Isn't this only disturbing if one rejects there being an objective distinction between misinformation and information? Sure, there are gray areas, but there are many actors exposing people to indisputably factually false claims, and "false" in this sentence is quite often not a political concept or subjective. The pope didn't endorse Trump. Trump's inauguration crowd was not the largest in history, etc.

Google may be involved in a lot of manipulative things, but trying to distinguish between blatantly incorrect sources of information and the rest does not seem like a shady political motive to me, and their overall goal seems worthwhile. Encyclopedias and scientific journals also manipulate what people are seeing, and I don't think filtering mechanisms in general are problematic; it depends on how the information is selected, and if this is done through a good process, it can be very beneficial.

on_and_off|7 years ago

Serious question: how else are they supposed to treat information at this scale?

vernie|7 years ago

Damn those scare quotes are really scary.

dmead|7 years ago

Nope. Better them than Putin or Steve Bannon, or anyone who works at Breitbart.

imustbeevil|7 years ago

I've used the internet long enough to not want to spend another second reading a lie someone wrote a news article about.

jiojfdsal3|7 years ago

I didn't have to read the article. I just watched the leaked video with my own eyes in its entirety.