top | item 38383622

zerohalo | 2 years ago

Google, Meta, and now OpenAI. So long, responsible AI development and safety guardrails. Hello, big money.

Disappointed by the outcome, but perhaps mission-driven AI development -- the reason OpenAI was founded -- was never possible.

Edit: I applaud the board members for (apparently) trying to stand up for the mission (aka doing the job they were put on the board to do), even if their efforts were doomed.

paulddraper|2 years ago

> I applaud the board members for (apparently, it seems) trying to stand up for the mission

What about this is apparent to you?

What statement has the board made on how they fired Altman "for the mission"?

Have I missed something?

alsetmusic|2 years ago

To me, commentary online and on podcasts universally leans on the idea that, from the outside, he appears to be very focused on money, in seeming contradiction to the company charter:

> Our primary fiduciary duty is to humanity.

Also, the charter's language has been watered down from a stronger commitment in the first version. Others have quoted it, and I'm sure you can find it on the Internet Archive.

risho|2 years ago

You just don't understand how markets work. If OpenAI slows down, they will simply be driven out by the competition. That's fine if that's what you think they should do, but it won't make AI any safer; it will just kill OpenAI and have them replaced by someone else.

zerohalo|2 years ago

You're right about market forces, however:

1) OpenAI was explicitly founded NOT to develop AI based on "market forces"; it's just that they "pivoted" (aka abandoned their mission) once they struck gold, in order to become market-driven

2) this is exactly the reasoning behind nuclear arms races

WanderPanda|2 years ago

You can still be a force for decentralization by creating actually open AI. For now, it seems like Meta AI research is the real "open AI".