pawn|9 years ago
I think this has huge potential for abuse. Let's say politifact or snopes or both happen to be biased. Let's say they both lean left or both lean right. Now an entire side of the aisle will always be presented by Google as false. I know that's how most people perceive it anyway, but how's it going to look for Google when they're taking a side? Also, I have to wonder whether this will flag things as false until one of those other sites confirms it, or does it default to neutral?
lallysingh|9 years ago
If one/both of them starts getting power-mad with their influence, they can get booted off the list, and replaced, or have others come in next to them.
If two of them say "This is true", and a third says "This is false", then that third one, if it can present good data for its POV, can quickly become more trusted.
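The dynamic described above (a dissenting checker gaining trust when later vindicated) resembles a multiplicative-weights update. A minimal sketch, purely hypothetical; the checker names, the update rule, and the learning rate are all invented here, not anything Google has published:

```python
# Hypothetical sketch: adjusting trust weights for fact-checking sources.
# Sources that agreed with the eventually-confirmed outcome gain weight;
# the rest lose it. All names and parameters are illustrative.

def update_trust(weights, verdicts, confirmed_outcome, lr=0.2):
    """Multiplicative-weights style update, then renormalize to sum to 1."""
    new = {}
    for source, verdict in verdicts.items():
        w = weights[source]
        new[source] = w * (1 + lr) if verdict == confirmed_outcome else w * (1 - lr)
    total = sum(new.values())
    return {s: w / total for s, w in new.items()}

weights = {"checker_a": 1 / 3, "checker_b": 1 / 3, "checker_c": 1 / 3}
# Two say "true", the dissenter says "false"; later evidence confirms "false".
weights = update_trust(
    weights,
    {"checker_a": "true", "checker_b": "true", "checker_c": "false"},
    "false",
)
```

After one round the dissenter's weight exceeds the majority's, which is the "quickly become more trusted" effect in miniature.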
rjeli|9 years ago
http://www.politifact.com/truth-o-meter/statements/2015/jul/...
http://www.politifact.com/virginia/statements/2016/jun/20/do...
hueving|9 years ago
For example, "if you like your doctor, you can keep him/her" is true if you include paying for him/her yourself but false if you limit it to what insurance covers.
This is where sites like Politifact really get wiggle room to shape the narrative: just interpret the statement with whatever strictness favors the person you want to call a liar.
mtgx|9 years ago
What I'm more worried about is what happens when CNN, NBC, and WashPost agree on a story narrative that is actually false.
I mean, how many media entities said Iraq had WMD before the war? Most of them? In that scenario, will Google be able to figure out who is actually telling the truth? And would "reputable" sites such as those have their articles flagged for falseness if Google does come to a different conclusion?
Or will Google only target lesser known sites for flagging because "nobody cares if they disappear from the internet or are flagged the wrong way anyway" (which is actually the pro-censorship argument).
Let's not forget China has used the "We want to stop false rumors" argument since the creation of the Great Firewall. And surprise, surprise - it wasn't used just to stop "false rumors".
jdoliner|9 years ago
http://www.politifact.com/truth-o-meter/statements/2017/feb/...
They state in the article that "The numbers check out" but somehow still conclude that the statement is mostly false.
pawn|9 years ago
Myth: Donald Trump said he loves women.
Verdict: Mostly false
Facts: He said he loves women. He's also said mean things about Rosie O'Donnell.
Myth: Bill Clinton said he loves Mexicans.
Verdict: Mostly True
Facts: He said he loves women. Some women are Mexicans.
And yes, sometimes it's that absurdly blatant.
Jach|9 years ago
Another fun one is if certain facts about overall dimensions of the data are reported but not about the individual dimensions themselves: https://en.wikipedia.org/wiki/Simpson%27s_paradox
Basically my point is even if we suppose these are objective facts, that's not enough to rule out bias determining which facts are used.
snowpanda|9 years ago
Um, no... and they never have been.
013a|9 years ago
> "There was a very large infrastructure bill that was approved during the Obama administration, a trillion dollars. Nobody ever saw anything being built." - President Trump
Politifact's response?
> "Mostly false" >> It "Underplays the law's actual achievements."
There are _so_ many ethical questions you can raise, and conclusions you can draw, about this one article.
On Trump's side?
- The bill wasn't entirely about infrastructure, but he implies that the entire thing was. Wait... would you consider what he said an implication? Or a claim? What percentage of the bill's total funding, actions, and verbiage need to be about infrastructure before it becomes an "infrastructure bill"?
- "Nobody saw anything" --> Hyperbole, which means fact checking is incredibly difficult. Do we fact check on his hyperbolic aim (maybe "The bill didn't fund as many infrastructure projects as it should have") or do we fact check the statement?
But on Politifact's side...
- If we fact check the statement as-is, it's not "mostly false"; it is false. At least one person saw an infrastructure project. The article even says so.
- Most reasonable people would interpret his statement as hyperbole. Politifact clearly does in giving him a "mostly false" rating. If we fact check the hyperbole, how do you decide what the actual intention of his statement is?
The point is: When we are fact checking every sentence politicians say, even if they're very short summaries of a much deeper issue, there is no such thing as objective fact. Period.
In this example, Trump cannot reasonably be expected to fully discuss every part of the bill Obama passed. It's a long bill. He summarizes his position on it. This is what comes out. There is hyperbole. There is rounding error. There is factual error. It's imperfect. Humans are imperfect.
The flipside of that is also true: Politicians summarize and use hyperbole, but fact-checkers do the same thing. That Politifact article takes thousands of words of research, argumentative facets, points against, points for, hundreds upon hundreds of hours of work... and just says "Mostly false." Does that really explain the situation? Really?
Most readers will not read past that single line: "Mostly false." Yup, Trump lied again, bad Trump! Politifact is responsible enough in providing the rest of the data, and, really, we can place some of the blame on lazy readers who don't read the rest of the article. Not Politifact's fault.
Yet, think of the parallel between a lazy reader and a machine. A machine cannot read the rest of that article. It cannot understand the intricacies of this debate; not without much stronger AI than we have today. The algorithm sees "mostly false" and thinks "OK, let's factor that into the score."
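To make the lazy-machine point concrete, here is a deliberately naive sketch of how such an algorithm might collapse a nuanced article into one number. The label-to-score mapping is an assumption of mine, not any published Google or Politifact formula:

```python
# Illustrative only: a naive ranking signal that reduces each fact-check
# article to the numeric value of its verdict label. Every caveat and
# thousand words of analysis behind the label are discarded.

LABEL_SCORES = {
    "true": 1.0,
    "mostly true": 0.75,
    "half true": 0.5,
    "mostly false": 0.25,
    "false": 0.0,
}

def credibility_signal(labels):
    """Average the numeric value of each verdict label."""
    scores = [LABEL_SCORES[label.lower()] for label in labels]
    return sum(scores) / len(scores)

# Three long, nuanced articles reduced to 0.25 apiece -> one flat number.
signal = credibility_signal(["Mostly false", "Mostly false", "Mostly false"])
```

This is exactly the information loss the comment describes: the machine sees the label, never the argument behind it.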
And that's assuming the algorithm even considers this fact-checking website. What makes a fact checking website reliable? Who gets to decide which websites we trust? Is it that they are right 100% of the time? I've already outlined how much room there is for bias just in this single instance; how can we even truly decide what "right" means when the statements we are fact checking aren't scientific papers, with every data and variable accounted for?
This whole thing is dangerous. I hate it. I hate Google for doing it. I hate Facebook for doing it. They have no respect for how much power they wield. They have no respect for the bubble their decision-makers live in. They are children who, almost accidentally, stumbled upon the nuclear football. They flip switches and dials with no responsibility, and if Australia gets nuked up in the process, they'll be there to sell you iodine.
[1] http://www.politifact.com/truth-o-meter/statements/2017/apr/...
Chaebixi|9 years ago
How will the algorithm know a source is getting power mad?
kbenson|9 years ago
They outline how to get your fact checking into their system. Presumably if your fact checking is relevant and correct, you can be included in the results. At the point where an organization is trying to be included, but is being excluded or marginalized because of opinion or politics and not because of the validity of their assessments, then we can have a discussion about abuse. Until then, let's use what we have.
> Also, I have to wonder whether this will flag things as false until one of those other sites confirms it, or does it default to neutral?
It's extra information added to things that are fact checked from the knowledge graph. I don't think they'll be tagging everything, since the vast majority of searches will have no associated info. Stub fact check data marked as "pending" could possibly be spidered though, and I suspect items associated with that might come up with an "Unknown" or "Pending" value in the fact check expansion area.
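For what "getting your fact checking into their system" looks like in practice, fact-checkers publish structured schema.org ClaimReview markup that aggregators crawl. A sketch, expressed here as a Python dict serialized to JSON-LD; the claim text, organization name, and rating values are invented for illustration:

```python
# Sketch of schema.org ClaimReview markup, the vocabulary fact-checkers
# embed in article pages so crawlers can pick up their verdicts.
# All field values below are made up for illustration.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "claimReviewed": "Example claim being checked",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "reviewRating": {
        "@type": "Rating",
        "alternateName": "Mostly false",  # the textual verdict readers see
        "ratingValue": 2,
        "worstRating": 1,
        "bestRating": 5,
    },
}

jsonld = json.dumps(claim_review, indent=2)
```

A site with no such markup (or markup still marked "pending") would simply contribute nothing to the fact-check expansion, consistent with the "Unknown"/"Pending" behavior speculated above.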