mgreg | 2 years ago

Unsurprising but disappointing nonetheless. Let's just try to learn from it.

It's popular in the AI space to claim altruism and openness; OpenAI, Anthropic and xAI (the new Musk one) all have a funky governance structure because they want to be a public good. The challenge is that once any of these (or others) gains enough traction to be seen as having a good chance at reaping billions in profits, things change.

And it's not just AI companies and this isn't new. This is part of human nature and will always be.

We should be putting more emphasis and attention on truly open AI models (open training data, training source code & hyperparameters, model source code, weights) so the benefits of AI accrue to the public and not just a few companies.

[edit - eliminated specific company mentions]

ertgbnm|2 years ago

The botched firing of Sam Altman proves that fancy governance structures are little more than paper shields against the market.

Whatever has been written can be unwritten and if that fails, just start a new company with the same employees.

ben_w|2 years ago

> The botched firing of Sam Altman proves that fancy governance structures are little more than paper shields against the market.

The things I saw didn't make any sense, so I can't say that it proves anything other than the existence of hidden information.

The board fired him, and they chose a replacement. The replacement sided with Altman. This repeated several times. The board was (reportedly) OK with closing down the entire business on the grounds of their charter.

Why didn't the board do that? And why did their chosen replacements, not individually but all of them in sequence, side with the person they fired?

My only guess is the board was blackmailed. It's just a guess — it's the only thing I can think of that fits the facts, and I'm well aware that this may be a failure of imagination on my part, and want to emphasise that this shouldn't be construed as anything more than a low-confidence guess by someone who has only seen the same news as everyone else.

AndrewKemendo|2 years ago

Because at some point, the plurality of employees do not subordinate their personal desires to the organizational desires.

The only organizations for which that is a persistent requirement are typically things like priesthoods.

boringuser2|2 years ago

I wonder if your lesson is "Sam Altman should/would have been fired but for market forces".

samstave|2 years ago

>>>"The botched firing of Sam Altman proves that fancy governance structures are little more than paper shields against the _market_."

-

...Or rather ( $ ) . ( $ ) immediate hindsight eyes...

gooseus|2 years ago

"Cease quoting bylaws to those of us with yachts"

corethree|2 years ago

It was botched because the public was too stupid to see how much of a snake Sam Altman is. He was fired from Y Combinator and people were still universally supporting him on HN.

If people hated him he would've been dropped. Microsoft and everybody else only moved forward because they knew they wouldn't get public backlash. Seems everyone fails to remember their own mob mentality. People here on HN were practically worshipping the guy.

Statistically most people commenting here right now were NOT supporting his firing and now you've all flipped and are saying stuff like: "yeah he should've been fired." Seriously?

I don't blame the governance. They tried their best. It's the public that screwed up. (Very likely to be YOU, dear reader)

Without public support the leadership literally only had enemies at every angle and they have nowhere to turn. Imagine what that must have felt like for those members of the board. Powerful corporations threatening aspects of their livelihoods (of course this happened, you can't force a leader to voluntarily step down without some form of a serious threat) and the entire world hating on them for doing such a "stupid" move as everyone thought of it at the time.

I'm ashamed at humanity. I look at this thread and I'm seriously thinking, what in the fuck? It's like everyone forgot what they were doing. And they still twist it to blame them as if they weren't "powerful" enough to stop it. Are you kidding?

x0x0|2 years ago

I'm not sure why you attribute that as a shield against the market. That seemed much more like an open employee revolt. And I can't think of a governance structure that is going to stop 90% of your employees from saying, for example, we work for Sam Altman, not you idiots...

wolverine876|2 years ago

> And it's not just AI companies and this isn't new. This is part of human nature and will always be.

Blaming "human nature" is an excuse that is popular among egomaniacs, but on even brief inspection it is transparently thin: Human nature includes plenty of non-profits and people who did great things for humanity for little or no gain (scientists, soldiers, public servants, even some sofware developers). It also includes people who have done horrible things.

Human nature really is that we have a choice. It's both a very old and fundamental part of human nature:

  And the serpent said unto the woman, Ye shall not surely die:

  For God doth know that in the day ye eat thereof, then your
    eyes shall be opened, and ye shall be as gods, knowing
    good and evil.

  And when the woman saw that the tree was good for food, and
    that it was pleasant to the eyes, and a tree to be
    desired to make one wise, she took of the fruit thereof,
    and did eat, and gave also unto her husband with her;
    and he did eat.

  And the eyes of them both were opened, and they knew that
    they were naked; and they sewed fig leaves together, and
    made themselves aprons.

That's the Tree of the Knowledge of Good and Evil, of course (Genesis 3). We know good and evil, we make our own choices; no blaming God or some outside force. If you do evil, it was your choice.

mlrtime|2 years ago

Since you seem to have this figured out and it's not just human nature, care to list everything that is good and everything that is evil?

Back to reality on this topic: there is nothing wrong with OpenAI employees voting to keep the company for-profit and maximizing their own personal gains.

I don't see how this can be anything close to "Evil".

ben_w|2 years ago

Tangential topic, but I've been thinking about that part of the bible recently.

It makes no sense to me.

I don't mean that God, supposedly all good and all knowing, didn't know about the serpent and intervene at the time — despite Christian theology being monotheist, I think the original tales were polytheistic, and the deity of the Garden of Eden was never meant to have those attributes[0].

I mean why was it appropriate to punish them for something they did in a state of naivety, and which was, within the logic of the story, both prior to and the direct cause of gaining knowledge of the difference between good and evil? It's like your parents suing you to recover the cost of sending you to school.

[0] Further tangent: if they're all the same god, why did it take 6 days to make the world (well, cosmos) and all the things in it, but 40 days to flood the Earth to cleanse it of all human and animal life except for the ark? It's fine if they're different gods; a creator deity with all that cosmic power doesn't need to care so much about small details like good and evil, and a smaller and more personal god that does care about good and evil doesn't need to have such cosmic power.

rkagerer|2 years ago

> open training data, training source code & hyperparameters, model source code, weights

I'm not an FSF hippie or anything (meant that in an endearing way), but even I know if it's missing these it can't be called "open source" in the first place.

nomel|2 years ago

I don't think the weights are required. They're an artifact created by burning vast amounts of money. Providing the source and methods that would allow one, with the same amount of money, to reproduce those weights should still be considered open source. Similarly, you can still have open source software without a compiled binary, and you can have open source hardware without providing the actual, costly hardware.

zemo|2 years ago

> OpenAI, Anthropic and xAI (the new Musk one) all have a funky governance structure because they want to be a public good

do they actually want to be a public good or do they want you to think they want to be a public good?

zx8080|2 years ago

What? It's business. They want to make money for investors and owners. Whatever helps this main goal.

ToucanLoucan|2 years ago

The problem is that research into AI requires investment, and investors (by and large) expect returns, and the technology in this case actually working is currently in the midst of its new-and-shiny hype stage. You can say these organizations started altruistic; frankly I think that's dubious at best, given basically all that have had the opportunity to turn their "research project" into a revenue generator have done so; but much like social media and cloud infrastructure, any open source or truly non-profit competitor to these entities will see limited investment by others. And that's a problem, because the silicon these all run on can only be bought with dollars, not good vibes.

It's honestly kind of frustrating to me how the tech space continues to just excuse this. Every major new technology since I've been paying attention (2004 ish?) has gone this exact same way. Someone builds some cool new thing, then dillholes with money invest in it, it becomes a product, it becomes enshittified, and people bemoan that process while looking for new shiny things. Like, I'm all for new shiny things, but what if we just stopped letting the rest become enshittified?

As much as people have told me all my life that the profit motive makes companies compete to deliver the best products, I don't know that I've ever actually seen that pan out in my fucking life. What it does is it flattens all products offered in a given market to whatever set of often highly arbitrary and random aspects all the competitors seem to think is the most important. For an example, look at short form video, which started with Vine, was perfected by TikTok, and is now being hamfisted into Instagram, Facebook, Twitter, YouTube despite not really making any sense in those contexts. But the "market" decided that short form video is important, therefore everything must now have it even if it makes no sense in the larger product.

pdonis|2 years ago

> As much as people have told me all my life that the profit motive makes companies compete to deliver the best products, I don't know that I've ever actually seen that pan out

Yes, you have; you're just misidentifying the product. Google, Facebook, Twitter, etc. do not make products for you and me, their users. We're just a side effect. Their actual products are advertising access to your eyeballs, and big data. Those products are highly optimized to serve their actual customers--which aren't you and me. The profit motive is working just fine. It's just that you and I aren't the customers; we're third parties who get hit by the negative externalities.

The missing piece of the "profit motive" rhetoric has always been that, like any human motivation, it needs an underlying social context that sets reasonable boundaries in order to work. One of those reasonable boundaries used to be that your users should be your customers; users should not be an externality. Unfortunately big tech has now either forgotten or wilfully ignored that boundary.

bane|2 years ago

The governance structure is advertising. "Trust us, look, we're trustable" is intended to convince people to use what they are building.

But the structure is expensive and risky; tossing it aside once traction is gained is the plan.

skottenborg|2 years ago

Given this, it's interesting that an established company like Meta releases open source models. Just the other day Zuck mentioned an upcoming open source model being trained with a tremendous amount of GPU-power.

willvarfar|2 years ago

Meta is trying to devalue its upstart competitor openai. When openai was so far ahead in public perception, FB started giving away what they had spent oodles of money building, in order to lessen openai's hype and stop their investors from believing that the next great thing was elsewhere.

dotnet00|2 years ago

I think that's just them trying to limit what the others can get away with, as well as limiting the competition they have to deal with because the open source models end up as a baseline.

OpenAI etc have to rein in how much they abuse their lead, because after some price point it becomes better to take the quality hit and use an open source model. Similarly, new competitors are forced to treat the Facebook models as a baseline, which increases their costs.

mastax|2 years ago

Commoditize your complement. I guess Meta sees AI more as something they use than something they offer.

insane_dreamer|2 years ago

it was the only way for Meta to even get into the conversation; if they had captured the mindshare like GPT did, you can be sure they wouldn't have open-sourced it

yieldcrv|2 years ago

OpenAI raised $130 million when it was only a non-profit and had difficulty raising more, despite the stacked deck, the star-studded staff, and the same goal that would value participation units at $100bn

that’s the real lesson here. we can want to redo OpenAI all we want but the people will not use their discretion in funding it until they can make a return

insane_dreamer|2 years ago

yeah, this was ultimately the problem

it turned out that AI research required billions of dollars to run the LLMs, something that was not originally anticipated; and the only way to get that kind of money is to sell your future (and your soul) to investors who want to see a substantial return

insane_dreamer|2 years ago

> And it's not just AI companies and this isn't new. This is part of human nature and will always be.

To some extent but it's much more egregious in companies like OpenAI where they promoted themselves as being founded for a specific purpose which they then did a complete U-turn on.

It's more like a non-profit saying they're being founded to provide free water to children in Africa and then it turns out that they're actually selling the water to the children. (Yeah, scamming is maybe part of human nature too, but thankfully most people don't resort to that.)

caycep|2 years ago

I guess that is the question - how to differentiate between "open-claiming" companies like openAI vs. "truer grass roots" organizations like Debian, Python, the Linux kernel, etc.? At least from the viewpoint of, say, someone who is coming smack into the field without the benefit of years of watching the evolution/governance of each organization?

Barrin92|2 years ago

>how to differentiate between "open-claiming" companies like openAI vs. "truer grass roots" organizations

Honestly? The people. Calculate the distance to (American) venture capital and the chance they go bad is the inverse of that. Linus, Guido, Ian, Jean-Baptiste Kempf of VLC fame, who turned down seven figures, what they all have in common is that they're not in that orbit and had their roots in academia and open source or free software.

Cacti|2 years ago

This is precisely what most safety researchers were asking for in 2016 when openai was recruiting, and why many didn't go to openai. Like, there are a lot of other security and safety researchers out there. The OpenAI types draw from a fairly narrow, self-selecting group within that larger pool.

AndrewKemendo|2 years ago

The public can’t benefit from any of this stuff because they’re not in the infrastructure loop to actually assign value.

The only way the public would benefit from these organizations is if the public are owners and there isn’t really a mechanism for that here anywhere.

mikeg8|2 years ago

I strongly disagree, and think this statement is basically completely wrong. I am part of the public and I'm benefitting tremendously from the product OpenAI has built. I would be very unhappy if my access to ChatGPT or Copilot was suddenly restricted. I extract tons of (perceived) value from their product, and they receive some value in return from my subscription. It's a win-win.

digging|2 years ago

It isn't just money, though. Every leading AI lab is also terrified that another lab will beat them to [impossible-to-specify threshold for AGI], which provides additional incentive to keep their research secret.

JohnFen|2 years ago

But isn't that fear of having someone else get there first just a fear that they won't be able to maximize their profit if that happens? Otherwise, why would they be so worried about it?

sirspacey|2 years ago

Fully agree on open models, but I think there's more going on that is important to consider in our own founding journeys

It's not just that there are billions to be made (they always believed that); it's that people are making billions right now turning them into a paper tiger

When only the tech sector cares about a company it's fairly straightforward for them to be values-driven - necessary even. Engineers generally, especially early adopters, are thoughtful & ethical. They also tend to be fact-driven in assessing a company's intentions.

Once a company exits the tech culture bubble, misinformation & political footballs are the game. Defending against it is something every company learns quickly. It is existential & the playing field is perpetually unfair.

cyanydeez|2 years ago

basically, you're discussing enshittification. When things get social momentum, those things get repurposed for capitalistic pleasure.

RespectYourself|2 years ago

OpenAI: pioneer in the field of fraudulently putting "open" in your name and being anything but.

quantum_state|2 years ago

Similar naming pattern: North Korea calls itself the "Democratic People's Republic of Korea"… it could not be further from being democratic.

zo1|2 years ago

Side note of a kinda similar thing happening, forgive me for the sidetrack and side-rant.

PrivateProperty <- was a website in South Africa set up in a market where all real-estate sales were controlled and gatekept by real-estate agents (assisted by lawyers, various government bodies and even legislation), and its purpose was to allow "Private" individuals to put up their own properties for rent or sale.

Predictably, it eventually got taken over by real-estate agents that posed as "private" sellers, and that caused the entire site to support "Agents" as a concept, and here we are. Today you will hardly ever find a private individual on there, and the company makes no effort at all to root them out. The agents just spam all their listings, lie in the metadata for properties, add duplicates, make zero-effort postings and use skewed photos, the works.

Another example, if you will: AirBnB. Taken over (I exaggerate a bit) by management companies that own many, many properties and allocate an "agent" to oversee each property. At least here in South Africa, that is. It might not be as true in other countries, but it's on its way there. Mark my words.

Or more:

Pricecheck <-- Another South African website. It still claims to be a price-comparison website, but is really just like Google Shopping: it doesn't do any scraping of prices, but simply "partners" with websites that give it a kickback after a user purchases something.

cbsmith|2 years ago

Orwell would be proud.

insane_dreamer|2 years ago

should be added to the Newspeak dictionary

anigbrowl|2 years ago

part of human nature and will always be

What if we just made it illegal for corporate entities (including nonprofits) to lie? If a company promises to undertake some action that's within its capacity (as opposed to stating goals for a future which may or may not be achievable due to external conditions), then it has to do so within a specified timeframe, and if it doesn't happen it can be sued or prosecuted.

> But then they will just avoid making promises

And the markets they operate in, whether commercial or not, will judge them accordingly.

gwbrooks|2 years ago

That's not a corporate-law issue -- it's a First Amendment issue with a lot of settled precedent behind it.

tl;dr: You're allowed to lie, as a person or a corporation, as long as the lie doesn't meet pretty high bars for criminal behavior or public harm.

Heck, you can even shout fire in a crowded theater, despite the famous quote that says you can't.