top | item 36717593

FTC investigating ChatGPT over potential consumer harm

138 points | cratermoon | 2 years ago | npr.org | reply

138 comments

[+] andrewstuart|2 years ago|reply
Sam Altman himself sat in front of the top levels of government and freaked them out by telling them his company was putting in place the groundwork to possibly end humanity. A really strange thing to do, I remain puzzled as to why any CEO would stir up the government like this.

Having panicked regulators with science fiction, I hope he's not surprised when they take action.

[+] somenameforme|2 years ago|reply
I think something that's increasingly clear is that OpenAI's only moat is time. Every other company, and even completely independent models, are rapidly catching up. And this is happening at the same time that OpenAI is already making claims that they're hitting diminishing returns on model size [1], as happens in literally every single neural net based field.

If Sam's Gambit succeeded, OpenAI could have potentially been granted a near absolute monopoly with the legislative reach of the US government working to imperil competitors through a Gordian knot of rules and regulations which OpenAI could have been the primary creator of, perhaps even as the head of some sort of private-public 'Artificial Intelligence Accountability and Trust Division.' It really just gives one that happy feeling of bureaucrazy mixed with dystopia.

[1] - https://www.wired.com/story/openai-ceo-sam-altman-the-age-of...

[+] darklycan51|2 years ago|reply
I think he just wanted to enact restrictions on further training of data models (maybe) and make it so that no one could compete with his company; I can't see another explanation.
[+] cmcaleer|2 years ago|reply
It was a gambit to try to secure OAI's moat. It failed, and while competitors are catching up (still a while to go yet), he put a target on OAI's back.
[+] biofunsf|2 years ago|reply
Maybe he's just a CEO doing the ethically responsible thing and not the purely self-serving thing for the sake of shareholders? (though as humans, the shareholders have some stake in humanity too)
[+] NoTelnyxBad|2 years ago|reply
There's a simple explanation: he believes what he said.
[+] godelski|2 years ago|reply
> I remain puzzled as to why any CEO would stir up the government like this.

I have a belief, as an ML researcher, that there are two types of x-risk AI/ML researchers: true believers and hype-men. People do hype the x-risk because it is catchy and gets more people talking. The whole "no news is bad news" strategy, which even worked in recent elections. I think there are two dangers with these people: 1) they eventually turn into true believers (you say something enough, you start to believe it), and 2) it distracts from the risks of the current dangers. For Sam, I think he is a true believer, and I'd expect this of any CEO who spends significant amounts of time hyping people up to gather capital by promising a future AGI.

As for myself, I'm not buying the x-risk arguments. There's a lot I have to say and a lot of nuance, but to put it briefly I'll reference something Mitchell said. She noted how it seems rather unlikely that a super-intelligence, which outperforms us in every single way, is also unable to understand the intention behind instructions (meaning it doesn't understand us, a contradiction in super-intelligence). The danger is handing things over to a model without human supervision and letting it hallucinate. So the danger is thinking it is smarter than it actually is, and then trusting it. Concentrating on the ineffable abilities of super-intelligences distracts us from this danger, which already exists.

[+] throwawayadvsec|2 years ago|reply
>A really strange thing to do, I remain puzzled as to why any CEO would stir up the government like this.

Because it's a very serious possibility (that the singularity could end humanity), and a significant part of the people who are serious about AI are extremely worried about alignment, for good reasons.

> I hope he's not surprised when they take action.

Surprised? Do you mean relieved?

[+] onlyrealcuzzo|2 years ago|reply
> I remain puzzled as to why any CEO would stir up the government like this.

Because 1) the government isn't going to do anything and 2) investors will think OMG this must be profitable $$$

[+] guy98238710|2 years ago|reply
Marketing. OpenAI needs to create the illusion that their chatbot is more than just a gradual evolution of past models that still needs a lot of work.
[+] xg15|2 years ago|reply
I don't know if SA is an Effective Altruist, but it feels a bit like he got caught up in their weird logic of "AI will be an extinction-level threat to humanity - and the only way to address the threat is somehow to build exactly that AI".
[+] sensanaty|2 years ago|reply
> I remain puzzled as to why any CEO would stir up the government like this

Because he sold his soul to the Devil (M$) and is now pursuing regulatory capture in order to build a "Open"AI-controlled moat where no competitors can survive.

[+] m3kw9|2 years ago|reply
He wanted heavy regulations on AI which would have benefited his moat
[+] lyu07282|2 years ago|reply
He wants regulatory capture obviously
[+] Tenoke|2 years ago|reply
> with science fiction

Because he doesn't believe it's Science Fiction, you are projecting.

[+] justrealist|2 years ago|reply
That is completely unrelated to the FTC complaint.
[+] janalsncm|2 years ago|reply
Well that’s one way to solve AI alignment. Just sue the companies for harming consumers.

I’m half joking, but in the statistical distribution of good, bad and ugly things automation can do, we will need to draw some line as to what is legally actionable. For example, if I Google search recipes for explosives, Google isn’t liable for surfacing accurate information. And imo OpenAI isn’t liable if their chatbot gives the same accurate info in response to the same query in the form of a chat message rather than a link.

[+] flangola7|2 years ago|reply
One of the differences is Google isn't originally writing that text themselves, they're pointing to someone else who has written it. In a potential lawsuit Google would be able to solidly point to the domain and ownership of the actual author of the text. If (when) OpenAI ends up in court, who will they point to? If someone was slandered, the courts will find a human being responsible for it.
[+] throwaway_ab|2 years ago|reply
Sam Altman has put a target on every AI company's and researcher's back with his constant fearmongering when speaking directly to the government.

In his selfish efforts to build the ultimate moat around his company, Altman's attempt to get government regulation to stifle future competition might backfire.

Whilst I'd normally say it would be satisfying to see Sam fail due to his selfishness, unfortunately this laser focus from the government might bring the rest of the industry down with him.

[+] vouaobrasil|2 years ago|reply
I hope he succeeds, and stifles AI development. AI is too advanced for humans to use properly and it probably will end humanity. If the rest of the industry is brought down, I for one will be happy.
[+] m3kw9|2 years ago|reply
Has google ever been investigated over surfaced search results that could cause harm?
[+] vouaobrasil|2 years ago|reply
It should be investigated.
[+] I_am_tiberius|2 years ago|reply
ChatGPT is a privacy nightmare. When you disable "Chat history & training", the setting is stored client-side in local storage. That means it gets lost when you switch browsers or clear your browser cache.

Also, when you disable "Chat history & training", plugins are not available to you as a paid user.
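The fragility described above can be sketched roughly like this (the storage key name is hypothetical, and localStorage is stubbed with a Map so the sketch runs outside a browser):

```javascript
// Sketch of why a client-side-only setting is fragile: it lives in
// one browser profile's localStorage. The key name is hypothetical.
const makeStorage = () => {
  const m = new Map();
  return {
    getItem: (k) => (m.has(k) ? m.get(k) : null),
    setItem: (k, v) => m.set(k, String(v)),
    clear: () => m.clear(),
  };
};

const KEY = "chat-history-disabled"; // hypothetical key

// Browser A: user opts out of history & training.
const browserA = makeStorage();
browserA.setItem(KEY, "true");
console.log(browserA.getItem(KEY)); // "true" — opt-out in effect

// Browser B (a different browser or device): the key is simply
// absent, so the default (history enabled) silently applies again.
const browserB = makeStorage();
console.log(browserB.getItem(KEY)); // null — opt-out lost

// Clearing site data in Browser A has the same effect.
browserA.clear();
console.log(browserA.getItem(KEY)); // null
```

If the preference were tied to the account server-side instead, it would follow the user across browsers and survive cache clears.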

I believe Sam Altman doesn't care about his users' privacy at all. I lost trust in him completely when he first mentioned that he doesn't understand why decentralized finance might be useful to people. Some months later, when Silicon Valley Bank was in the process of shutting down, he started crying. At the same time he supports Worldcoin.

[+] blharr|2 years ago|reply
The plug-ins part makes total sense to me. I would reason that plug-ins can save your chat history and use it for their own data, so if you want to keep your privacy entirely, plug-ins would be disabled as well.
[+] guy98238710|2 years ago|reply
This is such a great time to be a lawyer. Just play any AI like a slot machine until it gives you incorrect results and then sue. Probability is a bitch.
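The "slot machine" point is just compounding probability: with some per-answer error rate p, the chance of at least one bad answer over n tries grows quickly (the 2% rate below is purely illustrative, not a measured figure):

```javascript
// Chance of at least one error in n independent queries,
// given a per-query error probability p (hypothetical).
const pAtLeastOneError = (p, n) => 1 - Math.pow(1 - p, n);

// With an illustrative 2% error rate per answer, a single query
// is usually fine, but 100 queries make an error near-certain:
console.log(pAtLeastOneError(0.02, 1).toFixed(2));   // "0.02"
console.log(pAtLeastOneError(0.02, 100).toFixed(2)); // "0.87"
```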
[+] shmde|2 years ago|reply
But don't they have clauses like: our models sometimes hallucinate and give out wrong answers; basically, use at your own risk, and you cannot sue us if it goes wrong?
[+] exabrial|2 years ago|reply
I wish the FTC/SEC would break up Facebook, Google, and Apple. ChatGPT is small fish.
[+] nonethewiser|2 years ago|reply
> The agency says it's looking into whether the AI tool has harmed people by generating incorrect information about them

Haven't heard this complaint before. And I wouldn't really call this harmful to consumers. If anything it's harmful to the subject, not to consumers in general. But even so it doesn't meet the definition of libel. How harmful is it really?

Of all the complaints against OpenAI, absolutely none of them seem to be what I think really matters. And that is OpenAI tweaking it to be politically correct and geared to wherever they fall on the Overton window. It's a tool. It should be sharp. And we should expect more of its users, lest we want dumber ones.

[+] silisili|2 years ago|reply
It could absolutely cause real harm to any one person or group at a time. There was a recent case where it accused a professor of sexual harassment, seemingly out of nowhere. Imagine it spreading on Twitter and getting mobbed, doxxed, etc. Or a potential employer seeing it and silently rejecting your employment.

Excuse the link, the big papers have awful popovers, this one seemed acceptable...

https://decrypt.co/125712/chatgpt-wrongly-accuses-law-profes...

[+] my_usernam3|2 years ago|reply
I can only give you an anecdote of its dangers in a different category.

My significant other, an otherwise very intelligent woman, will ask ChatGPT health questions. She knows it might be wrong, but does it anyway to debug her health. I try to point out that even getting suggested a bad diagnosis is very dangerous. The advice it gives has way less nuance than, say, healthMD, which has its own flaws. And unlike coding questions, you can't assume health advice is right until you prove it wrong.

[+] Eji1700|2 years ago|reply
> How harmful is it really?

You seem to assume it's negative information about people. If it decides to say that some quack hawking raw almonds as cancer cures is FDA approved, it's extremely harmful.

Hopefully it's not that bad, but there's clearly a large in-between here, and I wouldn't be surprised if it's over the line.

That's before you even get into defamation cases.

[+] throwawaaarrgh|2 years ago|reply
Levine's Law: Every bad thing a public company does is securities fraud. (and also bank fraud, because companies borrow money from banks)
[+] throwawaaarrgh|2 years ago|reply
Why do I always get downvoted for saying this? Like it's not true?
[+] nubinetwork|2 years ago|reply

[deleted]

[+] lolinder|2 years ago|reply
Posting links to threads that have comments can be quite helpful, but half of those didn't get noticed at all and have zero comments, so it feels more like you're trying to make a point rather than help people find related conversations.