
ah765 | 2 years ago

I think the middle ground that several of the OpenAI board members were aiming for is to "responsibly develop AGI": developing at a moderate pace while trying to avoid kicking off an investment gold rush through heavily commercial use cases, and spending a substantial amount of resources on promising safety research (such as Ilya's work).

In my opinion, it was not a very strong position because the allure of money and of being the biggest player is too strong (as we're seeing now), but I think it was at least coherent.


remarkEon | 2 years ago

So, no, the blanks have not been filled in then. Because, for those in Toner's camp, that middle ground is just "progress from GPT to T-1000, but slowly", right? Yudkowsky talks about AIs 3D-printing themselves into biological entities and killing us all because humans are an inefficient use of matter. It's not a strong position to me because it sounds ridiculous, not because there's greed involved.

dmix | 2 years ago

It takes some level of delusions of grandeur to think half the board of a single non-profit, one that just happens to be the first mover, can stop the full forces of American capitalism - although I kind of respect the drive/purpose.

They could easily lose any power they had to guide the industry; it was a huge gamble. I remember reading a Harvard Business School study showing that first-mover advantage repeatedly turned out to be ineffective in the tech industry: there's a long series of early winners dying out to later market entrants, like Friendster->FB, Google, a bunch of dotcom-era e-commerce companies predating Amazon, etc.

They need full industry/society buy-in - at an ideological level - to win this battle. They won't win through backroom dealing in a boardroom while losing 90% of their own staff.