top | item 39611650

LatticeAnimal | 2 years ago

> As we get closer to building AI, it will make sense to start being less open. The Open in OpenAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).

That is surprisingly greedy & selfish to be boasting about in their own blog.

Jensson|2 years ago

Yeah, they are basically saying that they called themselves OpenAI as a recruitment strategy but they never planned to be open after the initial hires.

Spivak|2 years ago

Why do tech people keep falling for this shtick? It's happened over and over and over: open source becoming open core, becoming source available, becoming source available with closed source bits.

How society organizes property rights makes it damn near impossible to make anything a commons in a way that can't in practice be reversed when folks see dollar signs. Owner is a non-nullable field.

Aeolun|2 years ago

They’re pretty open about that now though.

davej|2 years ago

I think you're misreading the intention here. The intention of closing it up as they approach AGI is to protect against dangerous applications of the technology.

That is how I read it anyway and I don't see a reason to interpret it in a nefarious way.

_heimdall|2 years ago

Two things that jump out at me here.

First, this assumes that they will know when they are approaching AGI. Meaning they'll be able to reliably predict it far enough out to change how the business and/or the open models are set up. I will be very surprised if a breakthrough that creates what most would consider AGI is that predictable. By their own definition, they would need to predict when a model will be economically equivalent to or better than humans at most tasks - how can you predict that?

Second, it seems fundamentally nefarious to say they want to build AGI for the good of all, but that the AGI will be walled off and controlled entirely by OpenAI. Effectively, it will benefit us all even though we'll be entirely at the mercy of what OpenAI allows us to use. We would always be at a disadvantage and would never know what the AGI is really capable of.

This whole idea also assumes that the greater good of an AGI breakthrough is using the AGI itself rather than the science behind how they got there. I'm not sure that makes sense. It would be like developing nukes and making sure the science behind them never leaks - claiming that we're all benefiting from the nukes produced even though we never get to modify the tech for something like nuclear power.

woopsn|2 years ago

Many people consider what OpenAI is building to be the dangerous application. They don't seem nefarious to me per se, just full of hubris, and somewhat clueless about the consequences of Altman's relationship with Microsoft. That's all it takes though. The board had these concerns and now they're gone.

sillysaurusx|2 years ago

"Tools for me, but not for thee."

gyudin|2 years ago

Sounds pretty much like any other corpo “Pay us bucks and benefit from our tech”