item 19282109

Fact-checking chatbot “Meiyu” shuts down dubious family texts

60 points | petethomas | 7 years ago | wsj.com

23 comments

roywiggins | 7 years ago
"Clyde Lin was scolded by his uncle after the 39-year-old pilot brought Meiyu into the family chat group late last year. “Who is this?” Mr. Lin said his uncle demanded. The family group contained too much personal information to give access to a bot, Mr. Lin said his uncle said."

His uncle is 100% correct. Adding a bot to a conversation thread quite possibly means literally connecting a hose to your family discussions and piping all of them to a random startup. You wouldn't add a random human fact-checker, and even with the illusion that the only entity reading your texts is a robot, there's no guarantee that will always be the case.

"Ms. Hsu, the developer, said Meiyu isn’t designed to collect personal data on users." 1) objectives change, 2) ownership changes, 3) it very likely collects personal data whether it means to or not. Brilliant. I literally could not think of a better target for a state actor than something like this, which could give pretty deep insight into what people think about you when they talk privately.

deogeo | 7 years ago
> isn’t designed to collect personal data on users

Is that the same as "doesn't"? And does that apply to all the other components that get access to the chat logs it sends back home?

roywiggins | 7 years ago
> In a nation with long-held Chinese traditions of etiquette, however, Meiyu is proving to be socially inept. Online chat groups often comprise several dozen extended family members. Openly disputing facts with elder relatives is considered bad behavior.

To heck with "Chinese traditions", "openly disputing facts with elder relatives" does not go down great in the West either, unless you've got a particularly feisty uncle who likes political debates.

Literally my first thought on seeing the headline was "wow, sounds like a great way to get disowned" and that was me projecting my American context onto it.

Have these people never heard of the phrase "pick your battles"? A bot has no tact, and will pick fights with fairly trivial nonsense and deeply problematic lies with the same assiduousness. Lots of things are not, strictly speaking, true. Not all false beliefs are damaging in the same way. Being technically correct is the worst kind of correct.

technofiend | 7 years ago
>Openly disputing facts with elder relatives is considered bad behavior.

I was envisioning the opposite - like posting anything critical of the government's handling of Tiananmen Square would have your little uncle spybot posting "corrections" to the chat explaining the government sanctioned view.

gatesphere | 7 years ago
Huh, you and I must live in radically different Americas.

forgingahead | 7 years ago
Current top comment on the WSJ:

===

The examples given are "contrarian bot" rather than "fact checking bot."

e.g., "The doctor quoted in this post does not have proper qualifications." Fine, but that doesn't mean he's wrong.

e.g., “The internet has a lot of information on drinking water. Doctors say not all of it is credible.” No kidding. But this doesn't mean "stay hydrated" was bad advice.

I'm having difficulty seeing what the value of something like this is. I suppose if you're too timid to push back against other people, it might be nice to have a bot do it for you. Like the example where it's culturally inappropriate to push back on your elders, so you introduce a piece of software to contradict everything they say.

Fine, maybe that feels good in a strange way, but it seems dysfunctional, passive-aggressive, and unproductive. The bot isn't giving advice; it's just saying "that's YOUR opinion" to everyone else's advice.

===

Bartweiss | 7 years ago
This reads like somebody based ELIZA on Monty Python's argument clinic instead of a therapist.

Alert fatigue kicks in easily under the best circumstances, and "an annoying bot cluttering a group text" is hardly the best of circumstances. If the bot limited itself to high-confidence assertions that a claim is factually false, I'd be interested. (I wouldn't use it, the social and privacy aspects guarantee that, but I'd be interested in the tech.)

But "not all information about water is credible" isn't fact-checking; it's a tool you could duplicate with keyword recognition and a list of random stock refutations. Maybe I'm overzealous, but I'd consider removing an actual human from a group-chat if they consistently made that sort of post.
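The "keyword recognition and a list of random stock refutations" the parent describes can be sketched in a few lines. This is purely illustrative: the keywords and canned replies are invented for this example, not taken from Meiyu, and no actual fact-checking happens here.

```python
import random

# Invented keyword -> canned-refutation table (not Meiyu's real data).
STOCK_REFUTATIONS = {
    "water": "The internet has a lot of information on drinking water. "
             "Not all of it is credible.",
    "doctor": "The expert quoted here may not have proper qualifications.",
}

# Generic replies used when no keyword matches.
GENERIC_REPLIES = [
    "That claim has not been verified.",
    "Please check the source before sharing.",
]

def reply(message: str) -> str:
    """Return a canned 'refutation' chosen by naive keyword matching."""
    lower = message.lower()
    for keyword, refutation in STOCK_REFUTATIONS.items():
        if keyword in lower:
            return refutation
    return random.choice(GENERIC_REPLIES)

print(reply("Drink lots of water to flush out the virus!"))
```

The point of the sketch is that nothing in it evaluates truth: any message containing "water" gets the same reply, true or not, which is exactly the "contrarian bot" behavior the WSJ commenter complained about.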

roywiggins | 7 years ago
I've known people with the same conversational style as this bot, and they are absolute hell to talk to unless you decide to gamify it and compete to find the least objectionable thing they'll still find fault with.

shesee | 7 years ago
Hi, I'm the author -- the comments on Hacker News are more trenchant, especially on the tech aspects.

1. Both Auntie "Meiyu" and CoFacts are open-source projects; we share the source code on GitHub.

2. I can't deny that the search results might be manipulated, but for now there are only a limited number of volunteers to "clarify" most of the rumors on CoFacts.

3. Logs: that's a fair point. I deployed this project on Heroku (which retains only limited logs) and haven't yet had time to sort out how the logs are handled. The comments about log storage make sense to me; I won't rebut them. It matters, and I will update it for sure.

I think the WSJ skipped most of the background on why we do this. Taiwan's elections have been affected by deliberately manipulated rumors, on top of endless medical misinformation. Furthermore, most media outlets in Taiwan have strong biases, and every day there is (often quite crude) fake news bombarding everyone's brain, day after day, until you give up trying to clarify anything.

These outlets have resources and centralized channels for publishing fake news; fighting them head-on is fairly hopeless unless we decentralize our information so that everyone gets the chance to verify claims and speak up.

So Meiyu, in my opinion, at least clarifies rumors and misinformation for you, and repeats it tirelessly. I totally understand that it's annoying for everyone (that's another reason I named it Auntie "Meiyu", a very common name among the senior generation; I'd like this tiny service to feel a bit heartwarming and friendly), but we must go for it. Not only can someone's life or health be misled by misinformation; our island could be ruined by elaborate political rumors as well.

dwighttk | 7 years ago
To the person using this to avoid confrontation: adding a bot that responds ‘false’ is at least as confrontational as you replying that same way.

jandrese | 7 years ago
It's a joke character trait from The Office, in bot form.

Or https://i.kym-cdn.com/photos/images/newsfeed/001/191/035/135... in bot form.

I get the noble cause to combat false information right at the roots, especially when people are too polite to do it themselves, but making a bot to be rude and obnoxious for you is still rude and obnoxious.

Bartweiss | 7 years ago
Honestly, it seems worse.

I suppose the idea is that in a largish group chat, refuting a post is personal while adding the bot is a general change. But most of the examples look blatantly targeted, and even absent that it's an implicit declaration of "you people are full of it and need correction".

Contradicting people can at least be done tactfully, picking battles and making sure you're definitely right before starting them. Adding a "nuh-uh" bot strikes me as emphasizing insult instead of accuracy; it's on the same level as rolling your eyes at someone's comments but refusing to actually disagree with them.

zuypaweu | 7 years ago
What's even more brilliant is that someone could potentially influence massive numbers of people with this. If you hear something on TV you'd be pretty skeptical, but when it comes from someone you love, then it gets interesting...

Very interesting...

They could make people believe that WW2 never happened, or some other garbage. It all depends on who'll be verifying what's true and what's not.

civilian | 7 years ago
I think that people suggesting their own variants on cold treatments is a way for them to show they care.

"Drink a lot of water!" "Put on socks before you go to bed!" "Neti pot!" "Drink lots of tea!" All of these are kind of common knowledge. I think the act of encouraging people to rest up and take care of themselves is just a generic way to show you hope they get better. And maybe, if they really are so sick that they aren't thinking clearly, it'll serve as a reminder to get some tea.

qwerty456127 | 7 years ago
Cool! I hope it is going to become available in more languages! It would be nice, however, if you could customize its manner and its level of skepticism (i.e., I'd prefer it not to claim anything is false unless it's scientifically proven false and it can link to the proof; in my opinion, "the doctor's qualifications are questionable" is not a sufficient reason to dismiss an idea).
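
The customization being asked for amounts to a posting policy: only speak up when confidence that a claim is false clears a user-set threshold and a supporting link is available. A minimal sketch, with invented names and values (nothing here reflects how Meiyu actually works):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical verdict produced by some upstream fact-checker.
@dataclass
class Verdict:
    confidence_false: float       # 0.0..1.0, how sure the checker is the claim is false
    proof_url: Optional[str]      # link to refuting evidence, if any

def should_post(verdict: Verdict, min_confidence: float = 0.95) -> bool:
    """Post only with high confidence AND a citable source."""
    return (verdict.confidence_false >= min_confidence
            and verdict.proof_url is not None)

# "The doctor's qualifications are questionable": weak signal, no proof -> stay silent.
assert not should_post(Verdict(confidence_false=0.4, proof_url=None))
# Confidently refuted with a linked source -> OK to post.
assert should_post(Verdict(confidence_false=0.99, proof_url="https://example.org/refutation"))
```

Under this policy, the "questionable qualifications" example from upthread would never trigger a reply, which is exactly the behavior the commenter is asking for.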
rajacombinator | 7 years ago
Who needs thoughtcrime police when you can turn family against each other!
cjg | 7 years ago

[deleted]