There's an old saying that the legal definition of pornography is "whatever excites the judge" (in the court case where that judge has to decide "is X pornography or not?").
This may become a very similar situation... but with the legal definition in question being "libel".
Libel has a pretty clear definition in most legal systems, and how it was generated doesn't come into play. What makes you think there's any real ambiguity here?
> As largely unregulated artificial intelligence software such as ChatGPT, Microsoft’s Bing and Google’s Bard begins to be incorporated across the web, its propensity to generate potentially damaging falsehoods raises concerns about the spread of misinformation — and novel questions about who’s responsible when chatbots mislead.
Defense counsel can try to claim it's a novel question.
But, ideally, a reckless company will nevertheless be bankrupted by a barrage of lawsuits that show they're knowingly operating an automated libel machine.
I'm looking especially at Microsoft, which has a decades-long history of getting away with behavior it shouldn't, which might've emboldened it to launch an information-fabricating demo under its information-search brand.
Hopefully Google/Alphabet will be smarter and less-evil than Microsoft on this.
IMHO, OpenAI needs to be ready to push back against Microsoft, or they might as well have been a Microsoft long-con all along.
> Defense counsel can try to claim it's a novel question.
> But, ideally, a reckless company will nevertheless be bankrupted by a barrage of lawsuits that show they're knowingly operating an automated libel machine.
Yeah, I find this sort of argument very transparent, even if it seems to have become popular in recent years.
If I built some sort of autonomous killer robot, let it loose on the streets, and then waxed philosophical ("well, who is responsible for the murders, me or the robot???"), I'm pretty sure most people - and most courts - would have a rather clear opinion on how to answer that question.
The important part of the article is copied here because of the potential paywall:
> One night last week, the law professor Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. Turley’s name was on the list.
> The chatbot, created by OpenAI, said Turley had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he’d never been accused of harassing a student.
> A regular commentator in the media, Turley had sometimes asked for corrections in news stories. But this time, there was no journalist or editor to call — and no way to correct the record.
> “It was quite chilling,” he said in an interview with The Post. “An allegation of this kind is incredibly harmful.”
HN might become a support forum for ChatGPT issues. People here complain about Google (or any other big co) suspending their accounts, and when their posts get popular, someone from the big co reverses the decision.
We will have similar posts here about ChatGPT ruining someone's life, and then someone from OpenAI will do something if the post gets a lot of upvotes.
Well, I think you would want that debate if your name were consistently associated with sex crimes. Soon, integrated into Bing, it will be one prompt away for every common person to know all about you, whether they know about the limitations of LLMs or not (and almost nobody reads disclaimers).
So we are having this debate now, because this is quite new and society is not clear on how to deal with it.
So yes, people need to learn about the limitations, and we need to figure out the responsibilities and put pressure on the companies so they provide a way for people to clear their names if the system consistently gets something wrong about them.
(I am not talking about weird prompts; there was no weird prompt here.)
The Volokh Conspiracy blog has in the past week or so hosted thousands of words on the subject of liability for libel by the AI maintainers. (reason.com/volokh)
As a follower of the blog I have mostly skimmed it because of the overwhelming volume, but it may interest some. I gather the author believes that a judgment could be obtained under current US law.
ChatGPT is not a Markov chain; rather, it's a large language model. At a very high level, it tries to predict the next word in a sentence such that the result most closely resembles the human-written text it was trained on.
Markov chains, by contrast, are stochastic state models: the next state depends only on the current state (or, in higher-order variants, on a fixed number of previous states). For example, they can predict the average waiting time at a service point based on the current state of the queue. Rather than just averaging over a bunch of values, a Markov chain derives its predictions from transition probabilities between states, which can yield higher accuracy.
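To make the distinction concrete, here is a minimal, purely illustrative sketch of a first-order Markov chain text generator (not how ChatGPT works): the next word is drawn only from the words that followed the current word in the training text, with no memory of anything earlier. An LLM, by contrast, conditions its next-word prediction on the entire preceding context.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it
    (a first-order chain: one word of state, nothing more)."""
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain: each step depends only on the current word."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:  # dead end: no word ever followed this one
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Because the state is a single word, the output can wander into grammatical nonsense the moment two contexts share a word, which is exactly why such models never approached LLM-quality text.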
It hallucinated bonus answers. While serious in this context, that is a known potential outcome of the system. Hence the human operator. Without one, there is no bullshit detector.
So the bullshit detector, instead of simply letting it remain bullshit, sent it to the subject of the hallucination.
As the subject was so bothered by the hallucination, even the publication “cited” got involved.
But there’s no story here. Someone with a significant name had their name randomly selected for a hallucination.
That's precisely the problem - far too many people have lousy bullshit detectors.
And on top of that, many genuinely believe that "AI is finally here" and that systems like ChatGPT are true oracles. If it walks like a duck and talks like a duck, right?
> Someone with a significant name had their name randomly selected for a hallucination.
It seems that the story consists of, essentially, “Is this thing (which happened for the first time but that will presumably happen all of the time now) a tort?” In the context of a multibillion dollar company like Microsoft this is actually quite a story.
I agree with another commenter in this thread that the issue is how OpenAI advertises what ChatGPT actually does.
I'm as anti-AI-hype as they come, but a lot of these articles seem to stem from people's expectations about ChatGPT results, namely that it tells the truth all the time.
Of course it doesn't - and it never will. There will always need to be a human to verify that what it's saying is actually true, which is why most AI tools will never be anything more than an assistant to a human operator.
People believing it at face value will simply allow for the next generation of the fake news we've all come to know and love.
Why do you think people expect ChatGPT to tell the truth? Because people need a tool that tells them the truth. If that expectation changes, then ChatGPT becomes useless to many people, because it doesn't do what they need, and those people certainly won't pay for a tool that doesn't do what they need. An answerbot that might hallucinate an answer is less useful than a search engine, because it is harder to fact-check or judge as a source. Or take using one for customer service: how do you supervise its responses? Or do you just let it run wild, potentially damaging your brand and landing you in legal trouble?
It is OpenAI's responsibility to advertise ChatGPT correctly. They are letting the misinformation and misunderstanding about it spread because it's helping them sell their tool.
ChatGPT is not a fact database, and they should be very, very clear about that everywhere. But they are not doing it, or at least not as loudly as they should, perhaps intentionally.
It gives you the following warning in a large prompt in the middle of the screen that you must actively dismiss before you can use the product.
"While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice."
If that isn't being clear, I don't know what would be.
The medium by which it happens can make it more newsworthy.
For example, people probably commit assault with fists every day and it doesn't make the news. But if someone commits assault with a katana, it does. The same goes for inventing a scandal via an unusual medium.
Defamed by ChatGPT - https://news.ycombinator.com/item?id=35468540
plus this similar case:
ChatGPT: Mayor starts legal bid over false bribery claim - https://news.ycombinator.com/item?id=35471211 - April 2023 (74 comments)
(Probably, I suppose.)
We’ll get news like this for a few years until the hype dies off I guess.
This sort of article/post helps us (as a society, not just HN) adjust those priors re AI. Because "computer says no" is a real thing.
Commit libel and defamation? Sure, but that doesn't excuse this in the slightest.
It also speaks to the reliability of ChatGPT.
Any of us could write a comment here, with fake citations, and "invent" (hallucinate) whatever we want.