
jasonhansel|2 years ago

This is why, when you claim to be running a non-profit to "benefit humankind," you shouldn't put all your resources into a for-profit subsidiary. Eventually, the for-profit arm, and its investors, will find its nonprofit parent a hindrance, and an insular board of directors won't stand a chance against corporate titans.


silenced_trope|2 years ago

> This is why, when you claim to be running a non-profit to "benefit humankind," you shouldn't put all your resources into a for-profit subsidiary.

To be frank, they really need to spell out what "benefitting mankind" means. How is it measured? Or is it measured at all? Or is it just "the board says this isn't doing that so it's not doing that"?

It's honestly a silly slogan.

zug_zug|2 years ago

They should define it, sure. Here's what I'd expect this means:

- Not limiting access to a universally profitable technology by making it accessible only to the highest bidder (e.g. hire our virtual assistants for 30k a year).

- Making models with a mind to all threats (existential, job replacement, scam uses)

- Potentially open-sourcing models that are deemed safe

So far I genuinely believe they are doing the first two, and leaving billions on the table that they could get by jacking their prices 10x or more.

insanitybit|2 years ago

> "the board says this isn't doing that so it's not doing that"?

I believe that is indeed the case; it is the responsibility of the board to make that call.

Davidzheng|2 years ago

Worst part is their own employees don't care about the non-profit's values

belter|2 years ago

Because they were promised shares for a future IPO?

yeck|2 years ago

I suspect some do and some don't. Hard to know what the ratio is.

dnissley|2 years ago

Interestingly it was the other way around this time, at least to start...

jasonhansel|2 years ago

This was pretty clearly an attempt by the board to reassert control, which was slowly slipping away as the company became more enmeshed with Microsoft.

hn_throwaway_99|2 years ago

The problem, though, is that without the huge commercial and societal success of ChatGPT, the AI Safety camp had no real leverage over the direction of AI advancement worldwide.

I mean, there are tons of think tanks, advocacy organizations, etc. that write lots of AI safety papers that nobody reads. I'm kind of piqued at the OpenAI board not because I think they had the wrong intentions, but because they failed to see that "the perfect is the enemy of the good."

That is, the board should have realistically known that there will be a huge arms race for AI dominance. Some would say that's capitalism - I say that's just human nature. So the board of OpenAI was in a unique position to help guide AI advancement in as safe a manner as possible because they had the most advanced AI system. They may have thought Altman was pushing too hard on the commercial side, but there are a million better ways they could have fought for AI safety without causing the ruckus they did. Now I fear that the "pure AI researchers" on the side of AI Safety within OpenAI (as that is what was being widely reported) will be even more diminished/sidelined. It really feels like this was a colossally sad own goal.

TerrifiedMouse|2 years ago

Better to have a small but independent voice that can grow in influence than to be shackled by commercial interests and lose your integrity - e.g. how many people actually give a shit what Google has to say about internet governance?

lmm|2 years ago

> That is, the board should have realistically known that there will be a huge arms race for AI dominance. Some would say that's capitalism - I say that's just human nature. So the board of OpenAI was in a unique position to help guide AI advancement in as safe a manner as possible because they had the most advanced AI system. They may have thought Altman was pushing too hard on the commercial side, but there are a million better ways they could have fought for AI safety without causing the ruckus they did.

If the board were to have any influence, they had to be able to do this. Whether this was the right time and the right issue to play their trump card I don't know - we still don't know what exactly happened - but I have a lot more respect for a group willing to take their shot than one that is so worried about losing its influence that it can never use it.

keerthiko|2 years ago

I agree. I think a significantly better approach would have been to vote to build a "checks and balances" structure into OpenAI as it grew in capabilities and influence.

Internal to the entire OpenAI org, it sounds like all we had was the for-profit arm <-> board of directors. Externally, you can add investors and public opinion (which basically defaults to siding with the for-profit arm).

I wish they had worked towards something closer to a functional democracy (so not the US or UK), with a judicial system (presumably the board), a congress (non-existent), and something like a triumvirate (presumably the for-profit C-suite). Given their original mission, it would have been important to keep the incentives for all three separate, except for "safe AI that benefits humanity".

The truly hard-to-solve (read: impossible?) part is keeping the investors (external) from having an outsize say over any specific branch. If a third internal branch existed that was designed to offset the influence of investors, that might have resulted in something closer to the right balance.

yterdy|2 years ago

I've yet to hear what, exactly, underlies the sneering smugness over the notion that the board is going to get their asses handed to them. AFAICT, you have a non-profit with the power to do what it wants in this case, and "corporate titans" doing the "cornered cat" thing.

nostrademons|2 years ago

The most logical outcome would be for Microsoft to buy the for-profit OpenAI entity off its non-profit parent for $50B or some other exorbitant sum. They have the money, this would give the non-profit researchers enough play money that they can keep chasing AGI indefinitely, all the employees who joined the for-profit entity chasing a big exit could see their payday, and the new corporate parent could do what they want with the tech, including deeply integrate it within their systems without fear of competing usages.

Extra points if Google were to swoop in and buy OpenAI. I think Sundar is probably too sleepy to manage it, but this would be a coup of epic proportions. They could replace their own lackluster GenAI efforts, lock Microsoft and Bing out of ChatGPT (or, if contractually unable to, enshittify the product until nobody cares), and ensure their continued AI dominance. The time to do it is now, when the OpenAI board is down to 4 people, whose current leader has prior Google ties and whose interest is in playing with AI as an academic curiosity - something a fat war chest would enable. Plus, if the current board wants to slow down AI progress, one sure way to accomplish that would be to sell it to Google.

rvnx|2 years ago

For context, the new investors entered at a ~90B USD valuation.

As for Microsoft, I don't think they need it. Even assuming they had the whole 90B USD to spend, it doesn't really make sense:

they already have full access to OpenAI's source code and datasets (the whole training and runtime stack runs on their servers).

They could poach employees with better offers, get away with a much more efficient cost basis, and increase employee retention (whereas OpenAI employees might become so rich after a buyout that they'd be tempted to leave).

They could without any doubt replicate the tech internally, without OpenAI.

Google is in deep trouble for now; perhaps they will recover with Gemini. In theory they could buy OpenAI, but it seems out of character for them. There are strong internal political conflicts within Google, and technically it would be a nightmare to merge the infrastructure and code into their /google3 codebase and its soup of Google-only dependencies.

pkaye|2 years ago

Doesn't Mozilla run the same way, with a for-profit under a non-profit?

tsunamifury|2 years ago

How’d that work out for them?

phpisthebest|2 years ago

See Mozilla for proof of this statement

m3kw9|2 years ago

They quickly realize that without money you really can't do as much.

chubot|2 years ago

Yeah, but also remember that Altman and Musk started the non-profit to begin with (back when both of their reputations were quite different). They were explicitly concerned about Google's dominance in AI. It was always competitive, and always about power.

Wikipedia gives these names:

> In December 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services (AWS), Infosys, and YC Research announced[15] the formation of OpenAI and pledged over $1 billion to the venture.

Do any of those people sound like their day job was running non-profits? Had any of them EVER worked at a non-profit?

---

So a pretty straightforward reading is that the business/profit-minded guys started the non-profit to lure the idealistic researchers in.

The non-profit thing was a feel-good ruse, a recruiting tool. Sutskever could have had any job he wanted at that point, after his breakthroughs in the field. He also didn't have to work, after his 3-person company was acquired by Google for $40M+.

I'm sure it's more nuanced than that, but it's silly to say that there was an idealistic and pure non-profit, and some business guys came in and ruined it. The motive was there all along.

Not to say I wouldn't have been fooled (I mean, certainly employees got many benefits, which made it worth their time). But in retrospect it's naive to accept their help with funding and connections (e.g. OpenAI's first office was Stripe's office) and not think they would get paid back later.

VCs are very good at understanding the long game. Peter Thiel knows that most of the profits come after 10-15 years.

Altman can take no equity in OpenAI because he's playing the long game. He knows it's just "physics" that he will get paid back later (and that seems to have already happened).

---

Anybody who's worked at a startup that became a successful company has seen this split. The early employees create a ton of value, but that value is only fully captured 10+ years down the road.

And when there are tens or hundreds of billions of dollars of value created, the hawks will circle.

It definitely happened at, say, Google. Early employees didn't capture the value they generated, while later employees rode the wave of the early success. (I was a middle-ish employee, neither early nor late.)

So basically the early OpenAI employees created a ton of value, but they have no mechanism to capture the value, or perhaps control it in order to "benefit humanity".

From here on out, it's politics and money -- you can see that with the support of Microsoft's CEO, OpenAI investors, many peer CEOs from YC, weird laudatory tweets by Eric Schmidt, etc.

The awkward, poorly executed firing of the CEO seems like an obvious symptom of that. It's a last-ditch effort for control, when it's become obvious that the game is unfolding according to the normal rules of capitalism.

(Note: I'm not against making a profit, or non-profits. Just saying that the whole organizational structure was fishy/dishonest to begin with, and in retrospect it shouldn't be surprising it turned out this way.)

basiccalendar74|2 years ago

This makes a lot of sense. I wonder if the board's goal in firing Sam was to make everyone (govt., general public) understand the for-profit motives of Sam and most employees at this point.

Either Sam forms a new company amid a mass exodus of employees, or outside pressure changes the structure of OpenAI towards a clear for-profit vision. In both cases, there will be no confusion going forward about whether OpenAI/Sam have become a profit-chasing startup.

Chasing profits is not bad in itself, but doing it under the guise of a non-profit organization is.

turtleyacht|2 years ago

Thank you. Not a lot of things remind me of this heady stuff, but this comment did. So here goes.

---

A Nobel Prize was awarded to Ilya Prigogine in 1977 for his contributions in irreversible thermodynamics. At his award speech in Stockholm, Ilya showed a practical application of his thesis.

He derived that, in times of superstability, lost trust is directly reversible by removing the cause of that lost trust.

He went on to show that in disturbed times, lost trust becomes irreversible. That is, in unstable periods, management can remove the cause of trust lost--and nothing happens.

Since his thesis is based on mathematical physics, it occupies the same niche of certainty as the law of gravity. Ignore it at your peril.

-- Design for Prevention (2010)