top | item 45093447


AnEro | 6 months ago

I really hope this stays up, despite its political dimension. I think this situation is a perfect example of how AI hallucinations and lack of accuracy could significantly impact our lives going forward. A nuanced and serious topic with lots of back and forth being distilled down to headlines by any source is a terrifying reality, especially if we aren't able to communicate to the public how these tools work (if they will even care to learn). At least when humans did this, they knew that at some level they had at least skimmed the information on the person or topic.


geerlingguy|6 months ago

I've had multiple people copy and paste AI conversations and results into GitHub issues, emails, etc., and I think there are a growing number of people who blindly trust the results of any of these models... including the 'results summary' posted at the top of Google search results.

Almost every summary I have read through contains at least one glaring mistake, but if it's something I know nothing about, I could see how easy it would be to just trust it, since 95% of it seems true/accurate.

"Trust, but verify" is all the more relevant today. Except I would even discount the trust.

Aurornis|6 months ago

> I've had multiple people copy and paste AI conversations and results in GitHub issues, emails, etc.,

A growing number of Discords, open source projects, and other spaces where I participate now have explicit rules against copying and pasting ChatGPT content.

When there aren’t rules, many people are quick to discourage LLM copy and paste. “Please don’t do this”.

The LLM copy and paste wall of text that may or may not be accurate is extremely frustrating to everyone else. Some people think they’re being helpful by doing it, but it’s quickly becoming a social faux pas.

userbinator|6 months ago

"Trust, but verify" is an oxymoron. AI is not to be trusted for information.

add-sub-mul-div|6 months ago

We all think of ourselves as understanding the tradeoffs of this tech and knowing how to use it responsibly. And we here may be right. But the typical person wants to do the least amount of effort and thinking possible. Our society will evolve to reflect this, it won't be great, and it will affect all of us no matter how personally responsible some of us remain.

fennecbutt|6 months ago

It's just mass stupidity really. Technology is just a lever for what already existed.

The people blindly trusting AI nonsense are the same people who trusted nonsense from social media or talking heads on disreputable news channels.

Like, who actually reads the output of The Sun, etc.? Those people do, always have, and will continue to do so. And they vote, yaaay democracy. If your voter base lives in a fantasy world of fake news and false science, is democracy still sacrosanct?

account42|6 months ago

> I've had multiple people copy and paste AI conversations and results in GitHub issues, emails, etc., and there are I think a growing number of people who blindly trust the results of any of these models... including the 'results summary' posted at the top of Google search results.

I like the term "echoborg" for these people. I hope it catches on.

freeopinion|6 months ago

prompt> use javascript to convert a unix timestamp to a date in 'YYYY-MM-DD' format using Temporal

answer> Temporal.Instant.fromEpochSeconds(timestamp).toPlainDate()

Trust but verify?
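(Verifying, for reference: in the current Temporal proposal, an `Instant` has no `toPlainDate()` method; getting a calendar date requires going through a time zone, something like `instant.toZonedDateTimeISO('UTC').toPlainDate()`, so the quoted one-liner throws. Since Temporal isn't shipped in most runtimes yet, a minimal sketch of the same conversion using the long-standing `Date` API:)

```javascript
// Convert a Unix timestamp (in seconds) to a 'YYYY-MM-DD' string, in UTC.
// Uses the standard Date API; Temporal is still a proposal in most runtimes.
function unixToDateString(timestamp) {
  // toISOString() always emits UTC as 'YYYY-MM-DDTHH:mm:ss.sssZ';
  // the first 10 characters are the calendar date.
  return new Date(timestamp * 1000).toISOString().slice(0, 10);
}

console.log(unixToDateString(0));          // 1970-01-01
console.log(unixToDateString(1700000000)); // 2023-11-14
```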

abustamam|6 months ago

Whenever I use AI in social settings to fact-check, do research, or get advice, I always trust but verify, and I also disclaim it so that others know to do the same.

I think this is a good habit to get people into, even in casual conversations. Even if someone didn't directly get their info from AI and got it online, the content could have still been generated by AI. Like you said, the trust part of trust but verify is quickly dwindling.

haswell|6 months ago

One of the arguments used to justify the mass-ingestion of copyrighted content to build these models is that the resulting model is a transformative work, and thus fair use.

If this is indeed true, it seems like Google et al must be liable for output like this according to their own argument, i.e. if the work is transformative, they can’t claim someone else is liable.

These companies can’t have their cake and eat it too. It’ll be interesting to see how this plays out.

pjc50|6 months ago

> These companies can’t have their cake and eat it too

I think you're underestimating the effect of billions of dollars on the legal system, and the likely impact of the Have Your Cake And Eat It Act 2026.

account42|6 months ago

I mean, that's what they have been getting already. The average joe had to deal with draconian copyright all this time, but now that it's inconvenient for big tech, they get to hand-wave it away. The social contract has already been broken.

And companies have always been able to get away with relatively minor fines for things that get individuals locked up until they rot.

Gigachad|6 months ago

Google should be held liable for this. They are the ones who published and hosted this. And should be accountable for every bit of libel they publish.

pjc50|6 months ago

American attitudes to free speech mean that only the most dramatic, damaging libels are ever held to account, years after the fact. The only case I can think of that got justice was Alex Jones libelling the Sandy Hook victims.

(no easy answers: UK libel law errs in the other direction)

jaccola|6 months ago

This story will probably become big enough to drown out the fake video and the AI (which is presumably being fed top n search results) will automatically describe this fake video controversy instead...

reaperducer|6 months ago

> I think this is a situation that is a perfect example of how AI hallucinations/lack of accuracy could significantly impact our lives going forward.

This has been a Google problem for decades.

I used to run a real estate forum. Someone once wrote a message along the lines of "Joe is a really great real estate agent, but Frank is a total scumbag. Stole all my money."

When people would Google Joe, my forum was the first result. And the snippet Google made from the content was "Joe... is a total scumbag. Stole all my money."

I found out about it when Joe lawyered up. That was a fun six months.

dieortin|6 months ago

Sorry but I don’t see how what you mention is a problem. A search engine is just surfacing the content you have in your site. That is very different from it making stuff up.

ants_everywhere|6 months ago

Has anyone independently confirmed the accuracy of his claim?

fsckboy|6 months ago

>a perfect example of how AI hallucinations/lack of accuracy could significantly impact our lives going forward.

how about you stop forming judgments of people based on their stance on Israel/Hamas, stop hanging around people who do, and you'll be fine. If somebody misstates your opinion, it won't matter.

probably you'll have to drop bluesky and parts of HN (like this political discussion that you urge be left up) but that's necessary because all legitimate opinions about Israel/Hamas are very misinformed/cherry picked, and AI is just flipping a coin which is just as good as an illegitimate opinion.

(if anybody would like to convince me that they are well informed on these topics, i'm all ears, but doing it here is imho a bad idea so it's on you if you try)

pimlottc|6 months ago

This has very little to do with Israel/Hamas. It could be false information about a lewd act, a violent crime, a racist comment, an affair, gross incompetence, a medical condition, religious blasphemy, etc, etc, etc.

People make judgments about people based on second hand information. That is just how people work.

slg|6 months ago

>because all legitimate opinions about Israel/Hamas are very misinformed/cherry picked

Sure, there is plenty of misinformation being thrown in multiple different directions, but if you think literally "all legitimate opinions" are "misinformed/cherry picked", then odds are you are just looking at the issue through your own misinformed frame of reference.

Aeolun|6 months ago

> how about stop forming judgments of people based on their stance on Israel/Hamas

I really don’t need to do much more than compare ‘number of children killed’ between Israel and Palestine to see who is on the right side of history here. I’ll absolutely form judgements of people based on how they feel about that.