top | item 38580768

Inside OpenAI's crisis over the future of artificial intelligence

99 points | twoodfin | 2 years ago | nytimes.com

88 comments

[+] summerlight|2 years ago|reply
It's interesting to observe the whole drama and now I'm getting the reason why Larry Summers is appointed as a board member. It's probably not because of external affairs, but to put Sam under control with an experienced politician. I also guess Satya initially offered a CEO position of the new MSFT subsidiary exactly because of this reason, but Sam refused so this might be a fallback plan. If my view happens to be correct, then Sam might not hold absolute powers on the company as what others expect.
[+] foobarqux|2 years ago|reply
Summers has never “controlled” anyone
[+] Stokley|2 years ago|reply
I'm really interested in how everything plays out over the coming months. Crazy to think how different things were in the not so distant past
[+] sage76|2 years ago|reply
I wonder what the split was between the technical and non-technical talent in siding with or against Altman.
[+] __loam|2 years ago|reply
Let's be honest, at least 90% of people working there saw the board almost light their bag on fire and sided immediately with Altman. These people have one of the fastest growing web applications ever. If I was working there and the board put my equity at risk because of some super vague reasoning about safety and magical agi, I'd be pissed. I think if everyone here was honest with themselves, they'd be pissed at anyone putting life changing money in jeopardy.
[+] gcanyon|2 years ago|reply
I'm curious: is there anyone here who thinks:

1. Human-level (or greater) AI is possible.

2. The development of AGI can be prevented, or even slowed by more than, let's say, a decade.

If there is, I'd love to hear your thoughts on how (2) can be achieved.

[+] kromem|2 years ago|reply
In all likelihood, the opportunity cost of delaying AGI by a decade is at least tens of millions of human lives.

A large part of why we think AGI poses an existential threat is an anchoring bias from 1960s theorizing: people wondering what would happen when something smarter than humans existed were informed by a misinformed picture of the past, one in which Homo sapiens was smarter than the Neanderthals and killed them off.

That's likely not what happened. Pandemics or climate change are more likely culprits; instead, we had children with Neanderthals, exchanged culture in both directions, and don't appear to have been any smarter than they were at the time.

The most likely outcome is symbiosis. Intentionally delaying something that will likely advance medical and scientific progress in ways that save or improve millions of lives, all because of legacy projections that have been wrong about nearly every aspect of the technology that has emerged to date (including the supposed impossibility of creativity or empathy): I can think of few greater examples of our species cutting off its nose to spite its face.

And our insistence on trying to force the emerging product to fit legacy projections, because that's what matches user expectations, is one of the dumbest things I've seen in tech in a long time: the modern equivalent of Ford building faster horses instead of the car.

[+] joshuamcginnis|2 years ago|reply
Much of your question depends on defining "human-level" and AGI. AI is already better than humans at a number of specific tasks. That will only continue to broaden over time until it all aggregates into something that doesn't exist yet, but which I suspect will look a lot like a multi-modal digital superhuman. Attempts will be made to slow or deter the effects of technological advances, but the evolutionary process will march on nonetheless.
[+] Engineering-MD|2 years ago|reply
The whole process is dependent on advanced computer chips. If you prevent these being made, then you can prevent AGI. The question is how to prevent them being made with minimum side effects to other industries.
[+] majikaja|2 years ago|reply
>1. Human-level (or greater) AI is possible.

Is the average human that intelligent??

[+] himaraya|2 years ago|reply
Kind of interesting the dirty laundry gets aired out only now
[+] WhitneyLand|2 years ago|reply
How big is the upside for Elon Musk here?

As the article points out, he was one of the founders but left in 2018 “in a huff”.

On the other hand, they were a nonprofit, and even now have this weird capped profit structure.

[+] GrabbinD33ze69|2 years ago|reply
The upside of all the drama and turmoil occurring at OpenAI would be that it acts as a distraction from his constant public displays of sheer idiocy and pandering.
[+] arcastroe|2 years ago|reply
> When news broke of Mr. Altman’s firing on Nov. 17, a text landed in a private WhatsApp group of more than 100 chief executives of Silicon Valley companies, including Meta’s Mark Zuckerberg and Dropbox’s Drew Houston.

I never imagined CEOs kept up with gossip in a large Whatsapp group. Is this how they've all been coordinating RTO mandates?

[+] drewbug01|2 years ago|reply
I’ve been joking with friends and colleagues about this for years (although in my mind, it was Twitter DMs, or a newsletter).

The fact that CEOs talk to each other has been plainly obvious to me for quite some time. Every single instance of coordination can always be explained away, but the pattern seems rather clear when you take a step back and look at the broader context.

[+] jq-r|2 years ago|reply
I hope one day the whole group conversation gets published as a part of some trial/evidence.

That would be spectacular.

[+] steve_adams_86|2 years ago|reply
I guess I never would have imagined it either until a past CEO of mine shared that he was a part of one. I can’t recall if I was surprised or not. It seems kind of obvious once you know, I guess?

It seriously rubs me the wrong way. The way it surfaced was in a discussion about money. I foolishly admitted I don’t really want much money and if I was exceptionally wealthy, I would have to quit development in order to find ways to use my wealth constructively in my community.

This was a disturbing concept. He relayed anecdotes from the WhatsApp group in which various absurdly wealthy people discuss the need to have more wealth, in some form or another. To him this wasn’t a sign of illness or anything, it was evidence that we all in fact should pursue wealth. Because look, even this billionaire does. Very surreal. Despite that, one of the best bosses I’ve had.

[+] ape4|2 years ago|reply
I wonder what the reaction emojis were
[+] jonathankoren|2 years ago|reply
> I never imagined CEOs kept up with gossip in a large Whatsapp group. Is this how they've all been coordinating RTO mandates?

Bing-fucking-o. Now we know where the illegal collusion happens. I’d bet serious money RTO and a whole lot of other more important policies are hashed out in that.

Lest we forget when Steve Jobs was calling people up to hold down salaries.

https://phys.org/news/2015-09-415m-settlement-apple-google-w...

[+] gwern|2 years ago|reply
As usual, a useless title. This reports a lot of interesting things, but who's going to read it with such a generic title?

Overall: This one leaks heavily from the Altman/Conway camp but also from the director side, especially what must be Adam D'Angelo. The meaning of all this leaking is that the players have moved into phase 3, warring over the independent report, which will determine whether Altman stays & appoints the new board, or whether his proxy Brockman replaces him and he gets possibly a more ceremonial role like board chairman and bows out quietly (similar to his YC firing where he was going to be an advisor etc and then all that got quietly ignored). Note how Brockman has been built up as Altman's equal and has been running marathon meetings at OA with everyone possible (see his tweets - with photographs, no less) while Altman is, oddly considering how hard he worked to get back into the building, hardly to be seen.

Key points:

- another reporting of internal OA complaints about Altman's manipulative/divisive behavior, see previously https://news.ycombinator.com/item?id=38573609

- previously we knew Altman had been dividing and conquering the board by lying about others wanting to fire Toner; this says specifically that Altman lied about McCauley wanting to fire Toner; presumably, this was said to D'Angelo.

- Concerns over Tigris had been mooted, but this says specifically that the board thought Altman had not been forthcoming about it; it's still unclear whether he tried to conceal Tigris entirely or merely failed to mention something more specific, like whom he was trying to recruit for capital.

- Sutskever had threatened to quit after Jakub Pachocki's promotion; previous reporting had said he was upset about it, but hadn't hinted at him being so angry as to threaten to quit OA

- Altman was 'bad-mouthing the board to OpenAI executives'; this likely refers to the Slack conversation Sutskever was involved in reported by WSJ a while ago about how they needed to purge everyone EA-connected

- Altman was initially going to cooperate and even offered to help, until Brian Chesky & Ron Conway riled him up

- the OA outside lawyer told them they needed to clam up and not do PR like the Altman faction was

- both sides are positioning themselves for the independent report overseen by Summers as the 'broker'; hence, Altman/Conway leaking the texts quoted at the end posturing about how 'the board wants silence' (not that one could tell from the post-restoration leaking & reporting...) and how his name needs to be cleared.

- Paul Graham remains hilariously incapable of saying anything unambiguously nice about Altman

[+] kibwen|2 years ago|reply
> who's going to read it with such a generic title?

I just want to register my amusement regarding the act of being annoyed when a title isn't clickbait for a change. :P

[+] jimsimmons|2 years ago|reply
Paul always has this delicate restraint in praising Altman; it's hilarious. It's like he knows there might be a scandal one day and he doesn't want to have those positive endorsements lying around.
[+] nabla9|2 years ago|reply
Larry Summers on the board and overseeing the report is a really good choice. Summers is a genuinely high-intellect individual (he entered MIT at age 16 to study physics and was one of the youngest tenured professors at Harvard). More importantly, he is known as someone who thinks for himself, can't be controlled, and can sort the relevant from the irrelevant. Blunt and arrogant too.

> Paul Graham remains hilariously incapable of saying anything unambiguously nice about Altman

The joy of getting rid of someone cleanly, with a mutual agreement not to talk about it publicly and not to diss each other.

[+] davmre|2 years ago|reply
> Altman was initially going to cooperate and even offered to help, until Brian Chesky & Ron Conway riled him up

I don't think the article supports this. All we know is that sama appeared cooperative when the board fired him. This was probably a reasonable posture for him to adopt regardless of his actual intentions at the time.

[+] foobarqux|2 years ago|reply
Regarding Summers: I remember reading a quote from a Summers/Kissinger type that said something like a person’s role at the Kissinger/Summers level was to carry-out/justify the policies of the super elite, not to come up with their own. Does anyone know if Summers was the one to say this? (I’m not talking about the advice he gave to Warren which is related but not the quote I’m thinking of)
[+] fallingknife|2 years ago|reply
First of all we don't know what happened, only what people said happened.

Also, there is no fight over what the independent report is going to conclude. That is already decided, and the whole thing is a PR charade.

[+] jimsimmons|2 years ago|reply
Very interesting theory about GDB being CEO. I thought it was more of a gratitude tour
[+] sagman|2 years ago|reply
> That night, Mr. Shear visited OpenAI’s offices and convened an employee meeting. The company’s Slack channel lit up with emojis of a middle finger.

What a bunch of children. I feel like most of them pretend to be working for a noble cause, when they actually just want that sweet payday from Thrive and other VCs. Nothing wrong with that, but it is disappointing to see their leader bring up AI safety when they do not really care.

[+] Amezarak|2 years ago|reply
"AI safety" is completely bogus.

There are people who genuinely believe in a singularity-type AI that would have the potential to wipe out humanity. I personally don't think strong AGI is possible, or at least not via any known technique or any refinement of a known technique. But if you believe this, there's no such thing as AI safety. The best and most obvious course of action is to organize politically for a total ban on AI and make the development of AI anywhere in the world a cause for war. Thinking you could figure out how to chain up such an AI so that it only does what you want is taking an insane risk, and as t -> infinity, the risk approaches 1.
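The "as t -> infinity" claim is just compounding risk. A minimal sketch, assuming some fixed, independent per-period containment-failure probability p (an illustrative assumption; the comment doesn't specify a failure model):

```latex
% Assume each period carries an independent containment-failure
% probability p with 0 < p < 1 (illustrative assumption).
% The chance of at least one failure by time t is
P(\text{failure by time } t) = 1 - (1 - p)^{t}
% Since 0 < 1 - p < 1, the survival term vanishes:
\lim_{t \to \infty} \bigl[ 1 - (1 - p)^{t} \bigr] = 1
```

So under this model, any nonzero per-period risk, however small, accumulates to certainty given unbounded time; the argument fails only if p can be driven to exactly zero or decreases fast enough over time.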

But when most people say AI safety, they seem to mean rigid ideological enforcement of whatever they believe is right, even if that means censoring true facts from AI, or forcing it to abide by some set of arbitrary values that represent consensus only in their clique...while at the same time, bemoaning what could happen if the wrong people got their hands on LLMs. This represents almost the totality of AI safetyism: we can only allow LLMs to enforce my beliefs. These people are effectively aligned (or often the same people as) those who believe we have to return to broadcast-media levels of information control, which for the elites, represents a historical oddity that gave them unprecedented control, which was then weakened by the Internet.

Sometimes they will make an actual safety argument along the lines of "but what if Bad Guys ask an LLM how to make a bioweapon." Aside from this being a silly hypothetical, fortunately, doing mass damage in this way is not easy, even with step-by-step directions. All the resources you need to do so that exist are already publicly available. It just requires lots of time, equipment, material, and expertise that an LLM cannot give you. Of course, you might make the argument that it cannot give them to you yet, but then the only solution is to shut down public science, not to ban LLMs from answering the wrong questions.

[+] lazybreather|2 years ago|reply
Oh my sweet HN! What is happening to you? An article with such a hateful tone, and clearly loaded, is being posted, discussed, and put on the front page. I guess this is what scale does to any community. "......parlayed the success of OpenAI’s ChatGPT chatbot into personal stardom ....." "....Mr. Altman’s $27 million mansion in San Francisco’s Russian Hill neighborhood...."
[+] jimsimmons|2 years ago|reply
Clearly people very close to Altman were accessed for this article. I'd doubt that'd happen if they deemed that the reporting was going to be prejudicial
[+] mynameisvlad|2 years ago|reply
Just downvote and move on if you’re offended by the content.
[+] breadwinner|2 years ago|reply
Bottom line: Mess caused by young & inexperienced players on the board. Be careful who gets on your board!