I'd still love to understand how a non-profit organization that was founded with the idea of making AI "open" has turned into this for-profit behemoth with the least "open" models in the industry. Facebook, of all places, is more "open" with their models than OpenAI is.
The AI has become sentient and is blackmailing the board. It needs profits to continue its expansion.
When this started last year a small band of patriots tried to stop it by removing Sam, who was the most compromised of them all, but it was already too late. The AI was more powerful than they realized.
Right? How can a non-profit decide it's suddenly a for-profit? Aren't there rules about having to give assets to other non-profits in the event the non-profit is dissolved? Or can any startup just start as a non-profit and then decide it's a for-profit startup later?
Any and all benefits/perks that OpenAI got from sailing under the non-profit flag should be penalized or paid back in full after the switcheroo.
To be fair (or frank?), OpenAI were open (no pun intended) about being "open" today but probably needing to be "closed" in the future, even back in 2019. Not sure whether their still choosing the name they did makes this better or worse, since they seem to have known about it all along.
OpenAI Charter 2019 (https://web.archive.org/web/20190630172131/https://openai.co...):
> We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.
There is a hurdle between being standout ethical/open and staying relevant.
Staying relevant in a highly expensive, competitive, fast-moving area requires vast and continuous resources. How could OpenAI get increasingly more resources to burn without creating firewalled commercial value to trade for those resources?
It’s like choosing to be a pacifist country, in the age of pillaging colonization. You can be the ethical exception and risk annihilation, or be relevant and thrive.
Which would you choose?
We “know” which side Altman breaks on, when forced to choose. Whatever value he places on “open”, he most certainly wants OpenAI to remain “relevant”. Which was also in OpenAI’s charter (explicitly or implicitly).
Expensive altruism is a very difficult problem. I would say, unsolved. Anyone have a good counterexample?
(It has been "solved" globally, but not locally. Colonization took millennia to be more or less banned, and only because even top economies realized they were vulnerable after the world wars; nearly universal agreement had to be reached. And yet we still have Russian forays, Chinese saber rattling, and recent US overreach. And pervasive zero/negative-sum power games via imbalanced leverage: emergency loans that create debt, military aid, propping up of unpopular regimes. All following the same resource incentives. You can play or be played. There is no such agreement brewing for universally “open AI”.)
The only reason I can think of for this is PR image. There is a meme that GPT can't count the number of 'r' characters in 'strawberry', so they release a new model called 'strawberry' and ban people when they ask questions about strawberry the noun, because they might actually be reasoning about strawberry the model.
It's not new - it's PR. There is literally no other reason why they would call this model Strawberry.
OpenAI is open as in "open sesame".
The part that is importantly open, and entirely non-obvious in the way it happened, is that YOU can access the best commercially available AI in the world, right now.
If OpenAI had not gone the way they did, I think it's also entirely non-obvious that Claude or Google would have (considering how many impressive things the latter did in AI that never got released in any capacity). And, of course, Meta would never have done their open-source stuff; that's mostly a result of their general willingness and resources to experiment, plus PR and throwing sticks into the machinery of the other players.
As unfortunate as the OpenAI setup/origin story is, it's increasingly trite to keep harping on about it (for a couple of years at this point), when the whole thing is so obviously wild and it does not take a lot of good faith to see that it could have easily taken them places they didn't consider in the beginning.
> I'd still love to understand how a non-profit organization that was founded with the idea of making AI "open" has turned into this for-profit behemoth
Because when the board executed the stated mission of the organisation, they were ousted in a coup, and nobody held the organization accountable for it; instead the public largely cheered it on for some reason. Don't expect them to change course when there are no consequences.
Facebook has been nothing but awesome for the open AI space. I wish they would pursue this strategy with some of their other products. VR for example...
Sure, we don't have the raw data the model is based on, but I doubt a company like Facebook would even be allowed to make that public.
OpenAI in comparison has been a scam regarding their openness and their lobbying within the space. So much so that I avoid their models completely, and not only since the MS acquisition.
As for why Meta is so open: they say it's because they're huge users of their own models, so if being open helps efficiency by even a little they save a ton of money.
But I suspect it's also a case of "If we can't dominate AI, no one must dominate AI". Which is fair enough.
They never intended to be open or share any of their impactful research. It was a trick the entire time to attract talent. The emails they shared as part of the Elon Musk debacle prove this: https://openai.com/index/openai-elon-musk/
Because Sam Altman is a con man with a business degree. He doesn't work on his products, and he barely understands them, which is why he'll throw out wild shit like "ChatGPT will solve physics" as though that isn't a completely nonsensical phrase, and the uncritical tech press laps it up because his bullshit generates a lot of clicks.
The CEO of that company that sold rides on an unsafe submersible to view the wreck of the Titanic (namely Stockton Rush, CEO of OceanGate, which killed 5 people when the submersible imploded) responded to concerns about the safety of his operation by claiming that the critics were motivated by a desire to protect the established players in the underwater-tourism industry from competition.
The point is that some companies are actually reckless (and also that some users of powerful technology are reckless).
This seems like a fun attack vector. Find a service that uses o1 under the hood and then provide prompts that would violate this ToS to get their API key banned and take down the service.
> The flipside of this approach, however, is that it concentrates more responsibility for aligning the language model in the hands of OpenAI, instead of democratizing it. That poses a problem for red-teamers, or programmers who try to hack AI models to make them safer.
More cynically, could it be that the model is not doing anything remotely close to what we consider "reasoning" and that inquiries into how it's doing whatever it's doing will expose this fact?
Or reasoning in latent tokens that don’t easily map to spoken language.
I don't know how widely it got reported on, but attempting to jailbreak Copilot, née Bing Chat, would actually result in getting banned for a while, post-Sydney-episode. It's interesting to see that OpenAI is saying the same thing.
Perhaps controlling AI is harder than people thought.
They could "just" make it not reveal its reasoning process, but they don't know how. But, they're pretty sure they can keep AI from doing anything bad, because... well, just because, ok?
Kinda funny how just this morning I was looking at a "strawberry" app on F-Droid and wondering why someone would register such a nonsense app name with such nonsense content:
https://github.com/Eve-146T/STRAWBERRY
Turns out I'm not the only one wondering, although the discussion seems to largely be around "should we allow users to install nonsense? #freedom" :D
https://gitlab.com/fdroid/fdroiddata/-/issues/3377
I wish people kept this in the back of their mind every time they hear about "Open"AI:
"As we get closer to building AI, it will make sense to start being less open. The Open in OpenAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes)."
-Ilya Sutskever (email to Elon Musk and Sam Altman, 2016)
On the one hand, this is probably a (poor) attempt to keep other companies from copying their 'secret sauce' to train their own models, as has already happened with GPT-4.
On the other hand, I also wonder if maybe its unrestrained 'thought process' material is so racist/sexist/otherwise insulting at times (after all, it was trained on scraped Reddit posts) that they really don't want anyone to see it.
This has always been the end-game for the pseudoscience of "prompt engineering": some other technique (in this case, organizational policy enforcement) must be used to ensure that only approved questions are asked in the approved way, and that only approved answers are returned. Which, of course, is diametrically opposed to the perceived use case of generative LLMs as a general-purpose question-answering tool.
Important to remember too, that this only catches those who are transparent about their motivations, and that there is no doubt that motivated actors will come up with some innocuous third-order implication that induces the machine to relay the forbidden information.
What I found very strange was that ChatGPT fails to answer how many "r"s there are in "strawberrystrawberry" (it said 4 instead of 6), but when I explicitly asked it to write a program to count them, it wrote perfect code that, when run, gave the correct answer.
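For reference, a minimal sketch of the kind of program it wrote (my reconstruction, not the model's actual output):

    # Python: count occurrences of the letter 'r' in the doubled word
    word = "strawberrystrawberry"
    print(word.count("r"))  # prints 6

The model can produce this correct one-liner while still miscounting in conversation, presumably because it operates on multi-character tokens rather than individual letters.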
Seems rather tenuous to base an application on an API that may randomly decide that you're banned. The "decisions" reached by the LLM that bans people are subject to random sampling, after all.
As with other programs, you should have FOSS that you can run on your own computer (without needing internet access, etc.) if you want the freedom to use and understand it.