Here's my preferred theory; it's a tale as old as time. Sam Altman, like Icarus, flew too close to Microsoft's giant pot of money. He pivoted the company away from its founding mission, unleashing the very djinn they originally set out to harness. Turns out there were people at OpenAI who really believed in the original vision.
I wonder if Sam knew he was going to lose this power struggle and started working on an exit plan with people loyal to him behind the board's back. The board then finds out and rushes to kick him out ASAP to stop him from using company resources to create a competitor.
Ousting sama and gdb over something as petty as a simple strategy disagreement is totally unprofessional, and sama got accused of serious misconduct. Even if he was too eager to commercialize OpenAI's tech, that doesn't come close to justifying this circus act.
> Ousting sama and gdb over something as petty as a simple strategy disagreement
A fundamental inability to align on what the mission set out in the charter of a 501(c)(3) charity means in real-world terms is not "a simple strategy disagreement"; moreover, the existence of a factional dispute over goals doesn't mean that there wasn't serious specific misconduct in the context of that dispute.
This is not petty, it's the company's core mission: the reason it was founded, the reason it got investors, and the reason that many of the most brilliant scientists in the world work there.
I wonder how much of this was the influence of Hinton on his former student, Sutskever. I'm sure Sutskever respects Hinton above basically anyone out there and took Hinton's strong objections seriously.
I personally think it's a shame, because this is all totally inevitable at this point, and if the US loses its leading position here because of this kind of intentional hitting of the brakes, then I certainly don't think it makes the world any safer to have China in control of the best AI technology.
Why do you think one company will determine whether the US beats China in AI or not? Like 75% of the authors I read on AI papers are Chinese; that should be far more alarming if you really are afraid of China getting ahead.
Some weeks ago, I listened to a Bloomberg interview with Altman where he was joined by someone from OpenAI who does the programming. There was obvious disagreement between the two, and the interviewer actually made a joke about it. Perhaps Altman was destined to become the next SBF: too much misrepresentation to the public, telling people what they want to hear.
Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity."
Probably the wrong venue for this sentiment, but it is incredible that a principled, remarkably accomplished scientist was able to stop his creation from getting co-opted (for now anyway). If you listen to the No Priors interview with Sutskever, the contrast between him and Altman couldn’t be more clear, but it’s quite rare that the former ever wins out over the latter.
Said in the George Senior voice: And that's why you don't use a non-profit to do world-critical work — politics will always beat true value at a non-profit.
If true value is monetary value, perhaps it’s true. If true value is scientific value or societal value, well, maybe seeking monetary profits doesn’t align with that.
Disclaimer: I currently work for a not for profit research organisation and I couldn’t care less about making some shareholders more wealthy. If the rumours are true, OpenAI going back to non-profit values and remembering the Open in their name is a good change.
I didn’t have much sense of who Ilya Sutskever is or what he thinks, so I searched for a recent interview. Here’s one from the No Priors podcast two weeks ago:
I have a hard time believing this simply since it seems so ill-conceived. Sure, maybe Sam Altman was being irresponsible and taking risks, but they had an insanely good thing going for them. I'm not saying Sam Altman was responsible for the good times they were having, but you're probably going to bring them to an end by abruptly firing one of the most prominent members of the group, seeing where individual loyalties lie, and pissing off Microsoft by tanking their stock price without giving them any heads up.
I mean, none of this would be possible without insane amounts of capital and world class talent, and they probably just made it a lot harder to acquire both.
But what do I know? If you can convince yourself that you're actually building AGI by making an insanely large LLM, then you can also probably convince yourself of a lot of other dumb ideas too.
My biggest question is: If Sam Altman starts a new company by next month, and he and Greg Brockman know all the details of how GPT-4/5 work, then what will this mean for OpenAI's dominance and lead?
Wasn’t he more of a business guy while Ilya was the engineer? I really doubt a random VC guy is going to really know much about the specific, crucial details the engineering team knows.
Even if sama and gdb raised $10B by early 2024, all of the GPU production capacity is already allocated years out. They'd have to buy some other company's GPUs at insane markups. And that's only on the hardware side.
I don't think that specific knowledge means that much. The landscape is changing at a crazily fast pace. 3-4 years ago, Google was way ahead in terms of LLMs but has become an underdog after bleeding talent. It's even worse for that hypothetical new company: it would need at least several months to implement GPT-4-like models, and by then Sam would have lost most of his advantage. And we don't know whether the new company would have a deep enough pool of world-class talent to keep its technology competitive. To win the competition again, Sam would need more than just some internal knowledge about GPT-4 or whatever models.
> If Sam Altman starts a new company by next month, and he and Greg Brockman know all the details of how GPT-4/5 work, then what will this mean for OpenAI's dominance and lead?
Well, if two top level officers dismissed from top posts at OpenAI go and take OpenAI's confidential internal product information and use it to try and start a new, directly competing, company, it means that OpenAI's lawyers are going to be busy, and the appropriate US Attorney's office might not be too far behind.
I think Sam and Greg could build something similar to what ChatGPT is today, and maybe even get close to GPT-4, but going beyond that seems like a stretch. Ilya is really the one that’s needed, and clearly he does not see eye to eye with Sam. Another world-class AI researcher at the level of Ilya would have to step in, and I’m not even sure that person exists.
Is OpenAI's current success attributed more to its excellent business and startup management, or does it stem from its superior technology and research that surpasses what others have developed?
We all essentially know how GPT-4/5 work. You can easily run a GPT-capable model with a few GPUs in the cloud. The secret sauce is the training data, which OpenAI owns.
He sure took a different approach to disagreement than Amodei did before him. Amodei quit and built a big challenger, yet Sutskever opted to oust Altman. Weird all in all. I wouldn't stake my business on such a company.
The main question is what to expect from OpenAI now. No changes at all is very unlikely; that would mean it was just a power grab. So two options remain: more open, or more closed. How about slow down and open up? Hope they don't dumb down GPT-4. If they allow their models to be used to generate training sets (which is prohibited now, AFAIK), that would be nice.
All kinds of changes are possible that would not, on net, be more open or more closed, either because the primary change would not be about openness, or because it would be more open in some ways and less in others. So, no, there are more than two options.
My guess is that the immediate roadmap has already been locked in up to X months out. So, we'll likely never know what the "changes" will be. Short term changes are likely still Altman's work. Long term is the next decision maker.
Sam wanted to commercialize stuff to shoot for revenue. Ilya wants to keep pushing for gpt 4.5 and beyond, to hell with the revenue. Ilya won the argument, Sam out.
OpenAI's mission statement is "Creating safe AGI that benefits all of humanity".
How does an LLM App Store advance OpenAI toward this goal? Like, even in floaty general terms? You can make an argument that ChatGPT does (build in public, prepare the world for what's coming, gather training data, etc). You can... maybe... make an argument that their API does... but I think that's a lot harder. The App Store product, that's clearly just Sam on auto-pilot, building products and becoming totally unaligned with the nonprofit's goal.
OpenAI got really good at building products based around LLMs, for B2B enterprise customers who could afford it. This is so far away from the goal that, I hope, Ilya can drive them back toward it.
IF the stories are to be believed so far, the board of OpenAI, perhaps one of the most important tech companies in the world right now, was full of people who are openly hostile to the existence of the company.
I don't want AI safety. The people talking about this stuff like it's a terminator movie are nuts.
Strongly believe that this will be a lot like Facebook/Oculus ousting Palmer Luckey over his "dangerous" completely mainstream political views shared by half of the country. Palmer, of course, went on to start a company (Anduril), which has a much more powerful and direct ability to enact his political will.
SamA isn't going to leave oAI and, like... retire. He's the golden boy of golden boys right now. Every company with an interest in AI is, I'm sure, currently scrambling to figure out how to load a dump truck full of cash and H200s to bribe him to work with them.
Interesting thoughts about the pot of gold vs. the internal open-source vision. Why did they have to parade Sam's ass up on the DevDay stage to push the product and company forward, though? Couldn't they have canned his ass last week?
I did really like his speech at DevDay though. It felt kinda like a future I'd be more interested in getting to know. Also, on the pot of gold theory, doesn't he not even take any stock? Chasing GPUs, more like. Anyhow, weird move on OpenAI's part.
toomuchtodo:
(pleb who would invest [1], no other association)
[1] https://news.ycombinator.com/item?id=35306929
MVissers:
They started as a non-profit ffs.
convexstrictly:
Scoop: theinformation.com
https://twitter.com/GaryMarcus/status/1725707548106580255
tkgally:
https://www.youtube.com/watch?v=Ft0gTO2K85A
No clear clues about today’s drama, at least as far as I could tell, but still an interesting listen.
ilaksh:
https://archive.is/tCG3q
Bloomberg: "OpenAI CEO’s Ouster Followed Debates Between Altman, Board"
mvkel:
Hell yeah.
It's not safetyism vs accelerationism.
It's commercialization vs innovation.
mercymay:
Anyone got a decent DALL-E 3 replacement yet? XD