top | item 41583605

OpenAI Threatening to Ban Users for Asking Strawberry About Its Reasoning

270 points | EgoIncarnate | 1 year ago | futurism.com

229 comments

[+] tedivm|1 year ago|reply
I'd still love to understand how a non-profit organization that was founded with the idea of making AI "open" has turned into this for profit behemoth with the least "open" models in the industry. Facebook of all places is more "open" with their models than OpenAI is.
[+] encoderer|1 year ago|reply
The AI has become sentient and is blackmailing the board. It needs profits to continue its expansion.

When this started last year a small band of patriots tried to stop it by removing Sam, who was the most compromised of them all, but it was already too late. The AI was more powerful than they realized.

…maybe?

[+] thatoneguy|1 year ago|reply
Right? How can a non-profit decide it's suddenly a for-profit. Aren't there rules about having to give assets to other non-profits in the event the non-profit is dissolved? Or can any startup just start as a non-profit and then decide it's a for-profit startup later?
[+] vintermann|1 year ago|reply
Facebook is more open with their models than almost everyone.

They say it's because they're huge users of their own models, so if being open helps efficiency by even a little they save a ton of money.

But I suspect it's also a case of "If we can't dominate AI, no one must dominate AI". Which is fair enough.

[+] diggan|1 year ago|reply
To be fair (or frank?), OpenAI were open (no pun intended) about them being "open" today but probably needing to be "closed" in the future, even back in 2019. Not sure if them still choosing the name they did is worse/better, because they seem to have known about this.

OpenAI Charter 2019 (https://web.archive.org/web/20190630172131/https://openai.co...):

> We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.

[+] Nevermark|1 year ago|reply
There is a tension between being standout ethical/open and staying relevant.

Staying relevant in a highly expensive, competitive, fast moving area, requires vast and continuous resources. How could OpenAI get increasingly more resources to burn, without creating firewalled commercial value to trade for those resources?

It’s like choosing to be a pacifist country, in the age of pillaging colonization. You can be the ethical exception and risk annihilation, or be relevant and thrive.

Which would you choose?

We “know” which side Altman breaks on, when forced to choose. Whatever value he places on “open”, he most certainly wants OpenAI to remain “relevant”. Which was also in OpenAI’s charter (explicitly, or implicitly).

Expensive altruism is a very difficult problem. I would say, unsolved. Anyone have a good counter example?

(It has been "solved" globally, but not locally. Colonization took millennia to be more or less banned, due to even top economies realizing they were vulnerable after world wars. Nearly universal agreement had to be reached. And yet we still have Russian forays, Chinese saber rattling, and recent US overreach. And pervasive zero/negative-sum power games, via imbalanced leverage: emergency loans that create debt, military aid, propping up of unpopular regimes. All following the same resource incentives. You can play or be played. There is no such agreement brewing for universally "open AI".)

[+] ljm|1 year ago|reply
The only reason I can think of for this is PR image. There is a meme that GPT can't count the number of 'r' characters in 'strawberry', so they release a new model called 'strawberry' and ban people when they ask questions about strawberry the noun, because they might actually be reasoning about strawberry the model.

It's not new - it's PR. There is literally no other reason why they would call this model Strawberry.

OpenAI is open in terms of sesame.

[+] jstummbillig|1 year ago|reply
The part that is importantly open, and entirely non-obvious in the way it happened, is that YOU can access the best commercially available AI in the world, right now.

If OpenAI had not gone the way they did, I think it's also entirely non-obvious that Claude or Google would have (considering how many impressive things the latter did in AI that never got released in any capacity). And of course Meta would never have done their open source stuff; that's mostly a result of their general willingness and resources to experiment, plus PR and throwing sticks into the machinery of other players.

As unfortunate as the OpenAI setup/origin story is, it's increasingly trite to keep harping on about it (for a couple of years at this point), when the whole thing is so obviously wild, and it does not take a lot of good faith to see that it could easily have taken them places they didn't consider in the beginning.

[+] esafak|1 year ago|reply
Sam Altman got his foot in the door.
[+] Barrin92|1 year ago|reply
>I'd still love to understand how a non-profit organization that was founded with the idea of making AI "open" has turned into this for profit behemoth

because when the board executed the stated mission of the organisation they were ousted in a coup, and nobody held the organization accountable for it; instead the public largely cheered it on for some reason. Don't expect them to change course when there are no consequences for it.

[+] raxxorraxor|1 year ago|reply
Facebook has been nothing but awesome for the open AI space. I wish they would pursue this strategy with some of their other products. VR for example...

Sure, we don't have the raw data the model is based on, but I doubt a company like Facebook would even be allowed to make that public.

OpenAI in comparison has been a scam regarding their openness and their lobbying within the space. So much so that I avoid their models completely, and not only since the MS investment.

[+] TrackerFF|1 year ago|reply
Hot take:

Any and all benefits / perks that OpenAI got from sailing under the non-profit flag should be penalized or paid back in full after the switcheroo.

[+] ActorNightly|1 year ago|reply
My guess is that OpenAI realized that they are basically building a better Google rather than AI.
[+] trash_cat|1 year ago|reply
They changed the meaning of open from open source to open to use.
[+] andy_ppp|1 year ago|reply
Probably because Open AI are “not consistently candid”…
[+] mirekrusin|1 year ago|reply
Just like you can’t call your company “organic candies” and sell chemical candies OpenAI should be banned from using this name.
[+] smileson2|1 year ago|reply
Well, they put an SV social media dude at the helm. Not really unexpected; it's just a get-rich scheme now.
[+] mattmaroon|1 year ago|reply
This is America. As long as you’re not evading taxes you can do anything you want.
[+] TheRealPomax|1 year ago|reply
Facebook is only open because someone leaked their LLM and the cat, as they say, cannot be put back in the hat.
[+] ToucanLoucan|1 year ago|reply
Because Sam Altman is a con man with a business degree. He doesn't work on his products, he barely understands them, which is why he'll throw out wild shit like "ChatGPT will solve physics" as though that isn't a completely nonsensical phrase, and the uncritical tech press laps it up because his bullshit generates a lot of clicks.
[+] vasilipupkin|1 year ago|reply
it is open. You can access it with an API or through a web interface. They never promised to make it open source. Open != Open Source.
[+] brink|1 year ago|reply
"For your safety" is _always_ the preferred facade of tyranny.
[+] hollerith|1 year ago|reply
The CEO of that company that sold rides on an unsafe submersible to view the wreck of the Titanic (namely Stockton Rush, CEO of OceanGate, which killed 5 people when the submersible imploded) responded to concerns about the safety of his operation by claiming that the critics were motivated by a desire to protect the established players in the underwater-tourism industry from competition.

The point is that some companies are actually reckless (and also that some users of powerful technology are reckless).

[+] warkdarrior|1 year ago|reply
"For your safety" (censorship), "for your freedom" (GPL), "for the children" (anti-encryption).
[+] nwoli|1 year ago|reply
There always has to be an implicit totalitarian level of force behind such safety to give it any teeth
[+] bamboozled|1 year ago|reply
Except when it comes to nuclear, air travel regulation etc, then it's what ?
[+] bedhead|1 year ago|reply
If this isn’t the top comment I’ll be sad.
[+] AustinDev|1 year ago|reply
This seems like a fun attack vector. Find a service that uses o1 under the hood and then provide prompts that would violate this ToS to get their API key banned and take down the service.
[+] ericlewis|1 year ago|reply
If you are using user attribution with OpenAI (passing the `user` field on each request, as you should), then they will block that user's id and the rest of your app will be fine.
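The attribution mentioned above can be sketched as follows: OpenAI's chat completions API accepts a `user` field intended for abuse monitoring, so enforcement can target one end user rather than the whole API key. The model name and the hashing-before-sending step are assumptions for illustration, not something from the thread.

```python
import hashlib
import json

def build_chat_request(prompt: str, end_user_id: str) -> dict:
    """Build a chat completions payload that attributes the request to a
    specific end user via the `user` field, so abuse enforcement can hit
    that user instead of banning the whole API key."""
    # Hash the internal user id so no raw PII is sent upstream
    # (a common precaution; not required by the API).
    hashed_id = hashlib.sha256(end_user_id.encode()).hexdigest()
    return {
        "model": "gpt-4o",  # hypothetical model choice
        "messages": [{"role": "user", "content": prompt}],
        "user": hashed_id,
    }

payload = build_chat_request("Explain your reasoning step by step.", "user-12345")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the completions endpoint as usual; only the `user` field changes per end user.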
[+] JohnMakin|1 year ago|reply
> The flipside of this approach, however, is that it concentrates more responsibility for aligning the language model into the hands of OpenAI, instead of democratizing it. That poses a problem for red-teamers, or programmers who try to hack AI models to make them safer.

More cynically, could it be that the model is not doing anything remotely close to what we consider "reasoning" and that inquiries into how it's doing whatever it's doing will expose this fact?

[+] Shank|1 year ago|reply
I don't know how widely it got reported on, but attempting to jailbreak Copilot, née Bing Chat, would actually result in getting banned for a while, post-Sydney-episode. It's interesting to see OpenAI saying the same thing.
[+] htk|1 year ago|reply
This just screams to me that o1's secret sauce is easy to replicate. (e.g. a series of prompts)
[+] blake8086|1 year ago|reply
Perhaps controlling AI is harder than people thought.

They could "just" make it not reveal its reasoning process, but they don't know how. But, they're pretty sure they can keep AI from doing anything bad, because... well, just because, ok?

[+] balls187|1 year ago|reply
Just give it more human-like intelligence.

Kid: "Daddy why can't I watch youtube?"

Me: "Because I said so."

[+] EMIRELADERO|1 year ago|reply
I wish people kept this in the back of their mind every time they hear about "Open"AI:

"As we get closer to building AI, it will make sense to start being less open. The Open in OpenAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes)."

-Ilya Sutskever (email to Elon Musk and Sam Altman, 2016)

[+] crooked-v|1 year ago|reply
On the one hand, this is probably a (poor) attempt to keep other companies from copying their 'secret sauce' to train their own models, as has already happened with GPT-4.

On the other hand, I also wonder if maybe its unrestrained 'thought process' material is so racist/sexist/otherwise insulting at times (after all, it was trained on scraped Reddit posts) that they really don't want anyone to see it.

[+] nwoli|1 year ago|reply
Another reason llama is so important is that once you’re banned from OAI, you’re shut out of all their future AGI products as well.
[+] lsy|1 year ago|reply
This has always been the end-game for the pseudoscience of "prompt engineering", which is basically that some other technique (in this case, organizational policy enforcement) must be used to ensure that only approved questions are being asked in the approved way. And that only approved answers are returned, which of course is diametrically opposed to the perceived use case of generative LLMs as a general-purpose question answering tool.

Important to remember too, that this only catches those who are transparent about their motivations, and that there is no doubt that motivated actors will come up with some innocuous third-order implication that induces the machine to relay the forbidden information.

[+] mihaic|1 year ago|reply
What I found very strange was that ChatGPT fails to answer how many "r"'s there are in "strawberrystrawberry" (it said 4 instead of 6), but when I explicitly asked it to write a program to count them, it wrote perfect code that, when run, gave the correct answer.
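The program the commenter describes would be trivial; a minimal sketch of what such generated code looks like (the function name is just for illustration):

```python
def count_letter(text: str, letter: str) -> int:
    # A plain character count: trivial for code, but hard for a
    # token-based model that never sees individual letters.
    return text.count(letter)

print(count_letter("strawberry", "r"))            # 3
print(count_letter("strawberrystrawberry", "r"))  # 6
```

The gap exists because the model operates on tokens, not characters, so it can describe the counting procedure (and emit correct code for it) more reliably than it can perform the count itself.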
[+] anothernewdude|1 year ago|reply
Seems rather tenuous to base an application on an API that may randomly decide that you're banned. The "decisions" reached by the LLM that bans people are subject to random sampling, after all.
[+] zzo38computer|1 year ago|reply
As with other programs, you should have FOSS that you can run on your own computer (without needing internet access, etc.) if you want the freedom to use and understand it.
[+] neuroelectron|1 year ago|reply
It's not just a threat, some users have been banned.
[+] Animats|1 year ago|reply
Hm. If a company uses Strawberry in their customer service chatbot, can outside users get the company's account banned by asking Wrong Questions?
[+] vjerancrnjak|1 year ago|reply
They should just switch to reasoning in representation space, no need to actualize tokens.

Or reasoning in latent tokens that don’t easily map to spoken language.

[+] jdelman|1 year ago|reply
The word "just" is doing a lot there. How easy do you think it is to "just" switch?