
Ilya Sutskever "at the center" of Altman firing?

416 points | apsec112 | 2 years ago | twitter.com | reply

488 comments

[+] OscarTheGrinch|2 years ago|reply
Here's my preferred theory; it's a tale as old as time. Sam Altman, like Icarus, flew too close to Microsoft's giant pot of money. He pivoted the company away from its founding mission, unleashing the very djinn they originally set out to harness. Turns out there were people at OpenAI who really believed in the original vision.
[+] cardine|2 years ago|reply
I wonder if Sam knew he was going to lose this power struggle and then started working on an exit plan with people loyal to him behind the board's back. The board then finds out and rushes to kick him out ASAP to stop him from using company resources to create a competitor.
[+] drcode|2 years ago|reply
Now that is a theory that actually adds up with the facts (whether true or not)
[+] bob_theslob646|2 years ago|reply
This is the best theory by far. Thank you for sharing that.
[+] eastbound|2 years ago|reply
So they are trying to burn him with the worst possible accusation for a CEO, to try to lessen the inevitable fundraising he's going to win?
[+] gizmo|2 years ago|reply
Ousting sama and gdb over something as petty as a simple strategy disagreement is totally unprofessional. sama got accused of serious misconduct. Even if he was too eager to commercialize OpenAI's tech, that doesn't come close to justifying this circus act.
[+] dragonwriter|2 years ago|reply
> Ousting sama and gdb over something as petty as a simple strategy disagreement

A fundamental inability to align on what the mission set out in the charter of a 501(c)(3) charity means in real-world terms is not "a simple strategy disagreement"; moreover, the existence of a factional dispute over that doesn't mean there wasn't serious specific misconduct in the context of that dispute over goals.

[+] jprete|2 years ago|reply
Strategy disagreements are absolutely central reasons to fire executives.
[+] Mistletoe|2 years ago|reply
Don’t you think it’s more likely you don’t know the whole story yet?
[+] MVissers|2 years ago|reply
This is not petty, it’s the integral mission of the company, the reason it was founded, the reason it got investors and the reason that many of the most brilliant scientists in the world work there.

They started as a non-profit ffs.

[+] eigenvalue|2 years ago|reply
I wonder how much of this was the influence of Hinton on his former student, Sutskever. I'm sure Sutskever respects Hinton above basically anyone out there and took Hinton's strong objections seriously.

I personally think it's a shame because this is all totally inevitable at this point, and if the US loses its leading position here because of this kind of intentional hitting of the brakes, then I certainly don't think it makes the world any safer to have China in control of the best AI technology.

[+] strikelaserclaw|2 years ago|reply
Why do you think one company will determine whether the US beats China in AI or not? Like 75% of the authors I read on AI papers are Chinese; that should be far more alarming if you really are afraid of China getting ahead.
[+] GaunterODimm|2 years ago|reply
I don't know if it's bad or good for the long-term interests of the humankind, but right now it feels like a Klaus Fuchs moment.
[+] hooloovoo_zoo|2 years ago|reply
You’re taking Hinton at his word. Maybe he was forced out of Google for doing nothing with LLM tech for half a decade.
[+] 1vuio0pswjnm7|2 years ago|reply
Some weeks ago, I listened to a Bloomberg interview with Altman where he was joined by someone from OpenAI who does the programming. There was obvious disagreement between the two, and the interviewer actually made a joke about it. Perhaps Altman was destined to become the next SBF. Too much misrepresentation to the public, telling people what they want to hear...
[+] mi3law|2 years ago|reply
Can you please try to recall and link to the interview? I'd love to see it.
[+] lfmunoz4|2 years ago|reply
What is the disagreement?
[+] convexstrictly|2 years ago|reply
Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity."

Scoop: theinformation.com

https://twitter.com/GaryMarcus/status/1725707548106580255

[+] bugglebeetle|2 years ago|reply
Probably the wrong venue for this sentiment, but it is incredible that a principled, remarkably accomplished scientist was able to stop his creation from getting co-opted (for now anyway). If you listen to the No Priors interview with Sutskever, the contrast between him and Altman couldn’t be more clear, but it’s quite rare that the former ever wins out over the latter.
[+] dannykwells|2 years ago|reply
Said in the George Senior voice: And that's why you don't use a non-profit to do world-critical work: politics will always beat true value at a non-profit.
[+] speedgoose|2 years ago|reply
It depends on what you want true value to be.

If true value is monetary value, perhaps it’s true. If true value is scientific value or societal value, well, maybe seeking monetary profits doesn’t align with that.

Disclaimer: I currently work for a not-for-profit research organisation and I couldn't care less about making some shareholders more wealthy. If the rumours are true, OpenAI going back to non-profit values and remembering the Open in their name is a good change.

[+] tkgally|2 years ago|reply
I didn’t have much sense of who Ilya Sutskever is or what he thinks, so I searched for a recent interview. Here’s one from the No Priors podcast two weeks ago:

https://www.youtube.com/watch?v=Ft0gTO2K85A

No clear clues about today’s drama, at least as far as I could tell, but still an interesting listen.

[+] Bjorkbat|2 years ago|reply
I have a hard time believing this simply since it seems so ill-conceived. Sure, maybe Sam Altman was being irresponsible and taking risks, but they had an insanely good thing going for them. I'm not saying Sam Altman was responsible for the good times they were having, but you're probably going to bring them to an end by abruptly firing one of the most prominent members of the group, seeing where individual loyalties lie, and pissing off Microsoft by tanking their stock price without giving them any heads up.

I mean, none of this would be possible without insane amounts of capital and world class talent, and they probably just made it a lot harder to acquire both.

But what do I know? If you can convince yourself that you're actually building AGI by making an insanely large LLM, then you can also probably convince yourself of a lot of other dumb ideas too.

[+] SilverSlash|2 years ago|reply
My biggest question is: If Sam Altman starts a new company by next month, and he and Greg Brockman know all the details of how GPT4/5 works, then what will this mean for OpenAI's dominance and lead?
[+] oivey|2 years ago|reply
Wasn’t he more of a business guy while Ilya was the engineer? I really doubt a random VC guy is going to really know much about the specific, crucial details the engineering team knows.
[+] gkanai|2 years ago|reply
Even if sama and gdb raised $10B by early 2024, all of the GPU production capacity is already allocated years out. They'd have to buy some other company's GPUs at insane markups. And that's only on the hardware side.
[+] summerlight|2 years ago|reply
I don't think that specific knowledge means that much. The landscape is changing at a crazily fast pace. 3~4 years ago, Google was way ahead in terms of LLMs but has since become an underdog after bleeding talent. It's even worse for that hypothetical new company: it would need at least several months to implement GPT-4-like models, and by that time Sam would have lost most of his advantage. And we don't know whether the new company would have a deep enough pool of world-class talent to push the technology to a competitive level. To win the competition again, Sam would need more than just some internal knowledge about GPT-4 or whatever models.
[+] dragonwriter|2 years ago|reply
> If Sam Altman starts a new company by next month, and he and Greg Brockman know all the details of how GPT4/5 works, then what will this mean for OpenAI's dominance and lead?

Well, if two top level officers dismissed from top posts at OpenAI go and take OpenAI's confidential internal product information and use it to try and start a new, directly competing, company, it means that OpenAI's lawyers are going to be busy, and the appropriate US Attorney's office might not be too far behind.

[+] ryanSrich|2 years ago|reply
I think Sam and Greg could build something similar to what ChatGPT is today, and maybe even get close to GPT-4, but going beyond that seems like a stretch. Ilya is really the one that’s needed, and clearly he does not see eye to eye with Sam. Another world-class AI researcher at the level of Ilya would have to step in, and I’m not even sure that person exists.
[+] cj|2 years ago|reply
In other words, if SamA did it once, would $50 billion in funding enable him to do it a second time?
[+] capableweb|2 years ago|reply
Is OpenAI's current success attributed more to its excellent business and startup management, or does it stem from its superior technology and research that surpasses what others have developed?
[+] anon291|2 years ago|reply
We all know essentially how GPT4/5 work. You can easily run a GPT-capable model with a few GPUs in the cloud. The secret sauce is the training data, which OpenAI owns.
[+] ilrwbwrkhv|2 years ago|reply
Ilya is the center of Open AI. Everyone else is dispensable.
[+] icelancer|2 years ago|reply
Agreed with the former. Not the latter. gdb is no random.
[+] Keyframe|2 years ago|reply
He sure took a different approach to disagreeing than Amodei did before him. Amodei quit and built a big challenger, yet Sutskever opted to oust Altman. Weird all in all. I wouldn't rely on such a company for my business.
[+] two_in_one|2 years ago|reply
The main question is what to expect from OpenAI now. No change is very unlikely; that would mean it was just a power grab. So two options remain: more open, more closed. How about slow down and open up? Hope they don't dumb down GPT4. If they allow their models to be used to generate training sets (which is prohibited now, AFAIK), that would be nice.
[+] dragonwriter|2 years ago|reply
> So two options remain: more open, more closed.

All kinds of changes are possible that would not, on net, be more open or more closed, either because the primary change would not be about openness, or because it would be more open in some ways and less in others.

So, no, there are more than two options.

[+] gexla|2 years ago|reply
My guess is that the immediate roadmap has already been locked in up to X months out. So, we'll likely never know what the "changes" will be. Short term changes are likely still Altman's work. Long term is the next decision maker.
[+] mvkel|2 years ago|reply
Sam wanted to commercialize stuff to shoot for revenue. Ilya wants to keep pushing for gpt 4.5 and beyond, to hell with the revenue. Ilya won the argument, Sam out.

Hell yeah.

It's not safetyism vs accelerationism.

It's commercialization vs innovation.

[+] 015a|2 years ago|reply
OpenAI's mission statement is "Creating safe AGI that benefits all of humanity".

How does an LLM App Store advance OpenAI toward this goal? Like, even in floaty general terms? You can make an argument that ChatGPT does (build in public, prepare the world for what's coming, gather training data, etc). You can... maybe... make an argument that their API does... but I think that's a lot harder. The App Store product, that's clearly just Sam on auto-pilot, building products and becoming totally unaligned with the nonprofit's goal.

OpenAI got really good at building products based around LLMs, for B2B enterprise customers who could afford it. This is so far away from the goal that, I hope, Ilya can drive them back toward it.

[+] cactusplant7374|2 years ago|reply
Commercialization is innovation. Without it they will end up with a cute toy and a bankrupt company.
[+] thepasswordis|2 years ago|reply
This is genuinely frustrating.

If the stories are to be believed so far, the board of OpenAI, perhaps one of the most important tech companies in the world right now, was full of people who are openly hostile to the existence of the company.

I don't want AI safety. The people talking about this stuff like it's a terminator movie are nuts.

Strongly believe that this will be a lot like Facebook/Oculus ousting Palmer Luckey due to his "dangerous" completely mainstream political views shared by half of the country. Palmer, of course, went on to start a company (Anduril), which has a much more powerful and direct ability to enact his political will.

SamA isn't going to leave oAI and like...retire. He's the golden boy of golden boys right now. Every company with an interest in AI is I'm sure currently scrambling to figure out how to load a dump truck full of cash and H200s to bribe him to work with them.

[+] denverllc|2 years ago|reply
Pushing too hard and too fast does not seem consistent with lying to the board.
[+] wruza|2 years ago|reply
I read this whole thread and still have no idea what it is about. The only impression it makes is that some HNers are way too dramatic about AI/AGI.
[+] mercymay|2 years ago|reply
Interesting thoughts about the pot of gold vs the internal open source vision. Why did they have to parade Sam's ass up on the DevDay stage to push the product and company forward, though? Couldn't they have canned his ass last week?

I did really like his speech at DevDay though. It felt kinda like a future I'd be more interested in getting to know. Also, on the pot of gold theory, doesn't he not even take any stock? Chasing GPUs, more like. Anyhow, weird move on OpenAI's part.

Anyone got a decent DALLE3 replacement yet? XD

[+] breadwinner|2 years ago|reply
What was Greg Brockman's role at the company? Is he a tech genius like Ilya? I am trying to understand how much tech talent OpenAI is losing.