top | item 34698040

michaericalribo | 3 years ago

This is a great illustration of the risks of LLMs. As a user, if I am asking this question to a search engine, I definitely do not expect to need to fact-check the results. That's the whole reason to use the search engine in the first place!

We're about to enter a dark age of crappy AI products that are touted as game-changing, outcompeting each other to be the best chatbot that can compose haiku about how grapes turn into raisins.

mort96|3 years ago

We fact check search engine results all the time. But most of the time, such fact checking is in the form of looking at a result, considering whether it seems like a credible source, seeing if multiple credible-seeming results have the same answer, etc.

Getting a completely untrustworthy, unsourced response seems worse than useless. Google has been going this way for a while, with its instant answers or whatever, but at least those try to cite a search result and you can read the surrounding context which Google got the result from.

add-sub-mul-div|3 years ago

We're signing on, in slow motion, to a future with a fundamental shift toward receiving information in a completely opaque manner.

A few sources will control the information we get in a much more direct and extreme way than they do now, in a way that conscious skepticism will no longer be able to defend against. Whatever hand-waved promises we get now will be gone ten years from now.

If there wasn't such a gee-whiz coolness factor about conversational search results distracting us, we'd never tolerate that in principle.

jug|3 years ago

For the record, the “New Bing” AI results will not be unsourced: key facts in its sentences will be tagged Wikipedia-style, pointing to the source URL. Below the reply there will also be a domain summary for an overview, where each domain name is clickable and leads to the respective article on that domain.

In this case, Bing AI will operate very differently from ChatGPT.

acdha|3 years ago

Instant answers seem like a cautionary example, since Google has gotten a fair amount of flak over the cases where it inaccurately summarized content. I think these services will be very interesting to study: does the average person think they're more authoritative because they're branded by a huge corporation, and will that decline over time as people realize the limitations?

keammo1|3 years ago

I would definitely fact check search results as much as AI, especially the info snippets that appear at the top of Google's SERPs.

For example, until a few months ago the results for "pork cooked temperature" and "chicken cooked temperature" were returning incorrect values, boldly declaring too low a temperature right at the top of the page. (I know these numbers can vary based on how long the meat is held at a certain temperature, but I verified that Google was parsing the info incorrectly from the page it was referencing, pulling the temperature for the wrong kind of meat.) This was potentially dangerous incorrect info, IMO.

mianos|3 years ago

Snippets have become so useless I use a plugin to remove them.

What is ridiculous is that when, say, Stack Overflow has a good answer, it is a few lines down or on the next page of the search results, while some page-mill SEO site sits in the snippet up top with an answer that is completely wrong or only naively, partially correct. It is so annoying that it has lowered my opinion of Google a lot in recent times.

anyonecancode|3 years ago

> I would definitely fact check search results as much as AI, especially the info snippets that appear at the top of Google's SERPs.

Yes, so would I. And I also double check things like Google Maps -- a tool I find very helpful but don't trust blindly. But... do most people think to take a close look at Google Maps to make sure it makes sense, and trust their own judgement if they disagree with the map? Will most people fact check confident LLM outputs?

nicbou|3 years ago

The content I write is often half-assedly plagiarised by copywriters or incorrectly interpreted by lazy journalists. This is just an automated version of it. They can use my hard work for their own profit at an unprecedented pace, while still remaining factually incorrect.

brookst|3 years ago

Disagree. I think this is akin to Netflix’s Chaos Monkey, which relied on the insight that it is impossible to build infallible systems, so you design failure and recovery in.

Existing Google searches are polluted with false information, and Google has been losing that battle. It's probably not even possible to win.

So rather than saying that search engines should always be perfectly accurate and that errors are catastrophic, we should accept that search engines are, and have always been, imperfect, and need to give us enough info to validate facts for queries important enough to merit it.

fortyseven|3 years ago

Ever since Google started adding those quick-answer boxes at the top of search results, I've had to double-check everything they say. They're quite often incorrect. I mean, I know that, but does my grandma? People have all been conditioned to trust Google.

kleiba|3 years ago

Frankly, I quite often fact check results I get from simple google queries.

But I do agree that adding another level of fake news generation is a solution in desperate need of a problem.

CamperBob2|3 years ago

> As a user, if I am asking this question to a search engine, I definitely do not expect to need to fact-check the results.

And this stance seriously hasn't bitten you in your life or career to date?

Baeocystin|3 years ago

> I am asking this question to a search engine, I definitely do not expect to need to fact-check the results.

Genuine, honest question: How did you come to the belief that search engines are reliable sources of truth?

I completely agree that search engines provide a valuable service. But in my own work, I find that they very often point to inaccurate information, sometimes greatly so. I don't think this is terribly surprising, given Sturgeon's law, but still.

kelseyfrog|3 years ago

I can see how someone could extrapolate Google's goal of indexing knowledge (JTB: justified true belief) into its being a reliable source of truth. It's simply a matter of taking them at their word on the J and T parts. The B is up to the user.

Google's branding frames itself as the expert in the novice-expert problem. The vast number of users implicitly take on the role of the novice by virtue of using the product. They've already self-identified as a novice which makes both parties complicit in the arrangement.

primax|3 years ago

When I ask ChatGPT a question, it explains its reasoning and gives me concepts I can follow up on by googling to learn more.

When I use Google for research, I get articles written for SEO to push products, and I often have to refine and refine and refine to get something useful, which I can then follow up on by googling to learn more. With difficulty.

Honestly I don't know how much I'd use ChatGPT if I had the internet of 2016 and Google.

mda|3 years ago

Careful: it explains, but both the answer and the explanation are sometimes completely hallucinated. It may look like a plausible answer, but it's actually completely made up. And this happens way too often for me to take it seriously for now.

thinknubpad|3 years ago

>As a user, if I am asking this question to a search engine, I definitely do not expect to need to fact-check the results.

This is scary to read. You always need to fact-check the results, whether they come from a search engine, an AI, or a primary source!

cbsmith|3 years ago

I don't think it's a great illustration of the risks of LLMs.

Ad content invariably gets vetted by humans. The fact that it shows up in the ad demonstrates human failures more than failures of LLMs.

vicentwu|3 years ago

So I think the ability of the search engine to say "I don't know" is very important, and most current ChatGPT-like models on the market don't have this feature.

CatWChainsaw|3 years ago

> We're about to enter a dark age of crappy AI products

Fine. We need another good winter or ten before we decide we want to commit societal suicide via deepfake tsunami.

scarface74|3 years ago

You trust everything you find on search engines?

MuffinFlavored|3 years ago

Why can't an LLM fact check itself somehow/someway?