top | item 38348372

sesutton | 2 years ago

Ilya posted this on Twitter:

"I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."

https://twitter.com/ilyasut/status/1726590052392956028

abraxas|2 years ago

Trying to put the toothpaste back in the tube. I seriously doubt this will work out for him. He has to be the smartest stupid person that the world has seen.

bertil|2 years ago

Ilya is hard to replace, and no one thinks of him as a political animal. He's a researcher first and foremost. I don't think he needs anything more than being contrite for a single decision made during a heated meeting. Sam Altman and the rest of the leadership team haven't got where they are by holding petty grudges.

He doesn't owe us, the public, anything, but I would love to understand his point of view during the whole thing. I really appreciate how he is careful with words and thorough when exposing his reasoning.

guhcampos|2 years ago

I've worked with this type multiple times: mathematical geniuses with very little grasp of reality, easily manipulated into making all sorts of dumb mistakes. I don't know if that's the case here, but it certainly smells like it.

strikelaserclaw|2 years ago

He seriously underestimated how much rank-and-file employees want $$$ over an idealistic vision (and Sam Altman is $$$), but if he backs down now, he will pretty much lose all credibility as a decision maker for the company.

derwiki|2 years ago

Does that include the person who stole self-driving IP from Waymo, set up a company with stolen IP, and tried to sell the company to Uber?

dhruvdh|2 years ago

At least he consistently works towards whatever he currently believes in. Though he could work on consistency in beliefs.

dylan604|2 years ago

That seems rather harsh. We know he's not stupid, and you're clearly being emotional. I'd venture he made the dumbest possible move a smart person could make while in a very emotional state. The lesson for everyone here is that making big decisions in an emotional state rarely works out well.

nabla9|2 years ago

So this was a completely unnecessary cock-up -- and it's still ongoing. Without Ilya's vote this would not even be a thing. It's a really comical, Naked Gun-type mess.

Ilya Sutskever is one of the best in AI research, but everything he and others do related to AI alignment turns into shit without substance.

It makes me wonder if AI alignment is possible even in theory, and if it is, maybe it's a bad idea.

coffeebeqn|2 years ago

We can’t even get people aligned. Thinking we can control a super intelligence seems kind of silly.

z7|2 years ago

>"I deeply regret my participation in the board's actions."

Wasn't he supposed to be the instigator? This makes it sound like he played a less active role than claimed.

siva7|2 years ago

It takes a lot of courage to do so after all this.

ShamelessC|2 years ago

I think the word you're looking for is "fear".

tucnak|2 years ago

To be fair, lots of people called this pretty early on; it's just that very few were paying attention, and instead they chose to accommodate the spin and immediately went into "following the money," a.k.a. blaming Microsoft et al. The most surprising aspect of it all is the complete lack of criticism of US authorities! We were shown an exciting play as old as the world: a genius scientist being exploited politically by means of pride and envy.

The brave board of "totally independent" NGO patriots (one of whom is referred to by insiders as wielding influence comparable to a USAF colonel [1]) branded themselves as a new regime that would return OpenAI to its former moral and ethical glory, so the first thing they had to do was get rid of the main greedy capitalist, Altman; he's obviously the great seducer who brought their blameless organisation down by turning it into a horrible money-making machine. In his place they were going to put their nominal ideological leader, Sutskever, commonly referred to in various public communications as a "true believer." What does he believe in? In the coming of a literal superpower, and quite a particular one at that; in this case we are talking about AGI. The belief structure here is remarkably interlinked, as can be seen by evaluating side-channel discourse from adjacent "believers"; see [2].

Roughly speaking, and based on my experience in this kind of analysis (please give me some leeway, as English is not my native language), what I see are all the unmistakable markers of operative work: we see security officers, and we see their methods of work. If you are a hammer, everything around you looks like a nail. If you are an officer in the Clandestine Service, or in any of the dozens of counterintelligence sections overseeing the IT sector, then you clearly understand that all these AI startups are, in fact, developing weapons and pose a direct threat to the strategic interests, slash national security, of the United States. The American security apparatus has a word for such elements: "terrorist." I was taught to look up when assessing the actions of the Americans, i.e. more often than not we expect nothing but the highest level of professionalism, leadership, and analytical prowess. I personally struggle to see how running parasitic virtual organisations in the middle of downtown San Francisco, and re-shuffling agent networks in key AI enterprises as blatantly as we saw over the weekend, is supposed to inspire confidence. Thus, in a tech startup in the middle of San Francisco, where it would seem there shouldn't be any terrorists, or otherwise ideologues in orange rags, they sit on boards and stage palace coups. Horrible!

I believe that US state-side counterintelligence shouldn't meddle in natural business processes in the US, and should instead make its policy on this stuff crystal clear through normal, legal means. Let's put a stop to this soldier mindset where you fear anything you can't understand. AI is not a weapon, and AI startups are not terrorist cells for them to run.

[1]: https://news.ycombinator.com/item?id=38330819

[2]: https://nitter.net/jeremyphoward/status/1725712220955586899