top | item 47165757


drzaiusx11 | 3 days ago

Public benefit corporations in the AI space have become a farce at this point. They're just regular corporations wearing a different hat, driven by the same money dynamics as any other corp. They have no ability to balance their stated "mission" with their drive for profit. When being "evil" is profitable and not-evil is not, guess which road they'll take...


coldtea|3 days ago

In general public benefit corporations and non-profits should have a very modest salary cap for everybody involved and specific public-benefit legally binding mission statements.

Anybody involved should also be prohibited from starting a private company using their IP and catering to the same domain for 5-10 years after they leave.

Non-profits where the CEO makes millions or billions are a joke.

And if e.g. your mission is to build an open browser, being paid by a for-profit to change its behavior (e.g. make theirs the default search engine) should be prohibited too.

ACCount37|3 days ago

"A very modest salary cap" works if your mission is planting trees. Not so much if what you're building is frontier AI systems.

jkestner|3 days ago

It’s not the CEO’s fault - they had to take all that money to keep their org a non-profit.

B corps are like recycling programs: a nice logo.

drzaiusx11|3 days ago

If we're speaking in generalities of corporations in this space, it's all a joke now, at least from my vantage point. I just don't find it very funny.

abigail95|3 days ago

What's the salary cap for hiring a team to build a frontier model? These kinds of rules will make PBCs weaker, not stronger.

OkayPhysicist|3 days ago

You're overthinking this. Just give the beneficiaries of the corporation (which in the context of a "public" benefit corporation is the public) the grounds to sue if the company reneges on their mission, the same way shareholders can sue if a company fails to act in their interest.

heavyset_go|3 days ago

PBCs are peak End of History liberal philanthropy that speak to the kind of person whose solution to any problem is "throw a startup at it"

nozzlegear|3 days ago

Fukuyama wasn't wrong, he was just early

logicallee|3 days ago

>Public benefit corporations in the AI space have become a farce at this point. They're just regular corporations wearing a different hat, driven by the same money dynamics as any other corp.

Could you describe the model that you think might work well?

nozzlegear|3 days ago

It sounds like OP thinks AI companies should just stop pretending that they care about the public benefit, and be corporations from the start. Skip the hand-wringing and the will-they/won't-they betray their ethics phases entirely, since everyone knows they're going to choose profit over public benefit every time.

That model already exists and has worked well for decades. It's called being a regular ass corporation.

Forgeties79|3 days ago

I feel like we went through this exact situation with social media companies in the 2010s. I don't get why people defend these companies or ever believe they have any sense of altruism.

kelvinjps10|3 days ago

Also, it seems to be the era where the government takes backdoor access to these services and data, as they did with social media.

lenerdenator|3 days ago

Well, now I'm wondering, if the company was chartered with the public benefit in mind, could you not sue if they don't follow through with working in the public interest?

If regular corporations are sued for not acting in the interests of shareholders, that would suggest that one could file a suit for this sort of corporate behavior.

I'm not even a lawyer (I don't even play one on TV) and public benefit corporations seem to be fairly new, so maybe this doesn't have any precedent in case law, but if you couldn't sue them for that sort of thing, then there's effectively no difference between public benefit corporations and regular corporations.

hluska|3 days ago

I really don’t see it. PBCs are dual-purpose entities: under charter, they must make a profit while adding some benefit to society. Profit is easy to define; benefit to society is a lot more difficult to define. That difficulty is reflected at the penalty stage, where few jurisdictions have any sort of examination of PBC status.

This is what we were all going on about 15 years ago when Maryland was the first state to make PBCs legal. We got called negative at the time.

Hamuko|3 days ago

I think public benefit corporations (like Anthropic) are quite poorly defined, so I'm not sure how successful a lawsuit would be.

latexr|3 days ago

> Public benefit corporations in the AI space have become a farce at this point.

“At this point”? It was always the case; it’s just harder to hide the more time passes. Anyone can claim anything they want about themselves. It’s only after you’ve had a chance to see them in situations that test their words that you can confirm whether they are what they claimed.

drzaiusx11|2 days ago

I presume that in the beginning, many at OpenAI actually believed in the mission. Their goodwill was simply corrupted by the mountains of money on the table.

neya|3 days ago

I was a Pro subscriber until last week. When I was chatting with Claude, it kept asking a lot of personal questions that seemed only very, very vaguely relevant to the topic. And then it struck me: all these AI companies are doing is building detailed user models, either to be targeted for advertising or to be sold off to the highest bidder. It hasn't happened yet with Anthropic, but when the bubble money runs out, there's not gonna be a lot of options, and all we'll see is a blog post: "oops! sorry, we did what we promised you we wouldn't". Oldest trick in the tech playbook.

dibujaron|3 days ago

A less cynical explanation: It's heavily trained to ask follow-up questions at the end of a response, to drive more conversation and more engagement. That's useful both for making sure you want to renew your subscription, and also probably for generating more training data for future models. That's sufficient explanation for the behavior we're seeing.

Schlagbohrer|3 days ago

Pete Hegseth also threatened to take, by diktat, everything Anthropic has. He can do that with the Defense Industrial Act or whatever it's called if he designates them as critical to national defense.

nozzlegear|3 days ago

It would've been better PR for Anthropic to let Hegseth do that instead of folding at the slightest hint of pressure and lost contract money. I've canceled my Claude subscription over this (and made sure to let them know in the feedback).

bn_layc|3 days ago

He seems to be the driving force behind all this. Mediocrities are attracted to AI like moths.

The press always says "the Pentagon negotiates". Does any publication have evidence that it is "the Pentagon" and not Hegseth? In general, I see a lot of common sense from the real Pentagon as opposed to the Secretary of War.

I hope West Point will check for AI psychosis in its entrance interviews and completely forbid AI usage. These people need to be grounded.

lprhrp|3 days ago

Hmm, that could be the best "IPO" they'll ever get. Better check whether Trump Jr.'s 1789 Capital has shares, like they did in groq (note the "q").