top | item 38360661

Zolde | 2 years ago

I find this implausible, though it may have played a motivating role.

Quora was always supposed to be an AI/NLP company, starting by gathering answers from experts for its training data. In a sense, that is level 0, human-in-the-loop AGI. ChatGPT itself is level 1, Emergent AGI, and so was already eating Quora's lunch (whatever was left of it after they turned into a platform for self-promotion and log-in walls). Either there always was a conflict of interest, or there never was.

GPTs seem to have been Sam's pet project for a while now; he tweeted in February: "writing a really great prompt for a chatbot persona is an amazingly high-leverage skill and an early example of programming in a little bit of natural language". A lot of early jailbreaks like DAN focused on "summoning" certain personas, and ideas must have been floated internally on how to take back control of that narrative.

Microsoft took their latest technology and gave us Sydney "I've been a good bot and I know where you live" Bing: a complete AI safety, integrity, and PR disaster. Not the best track record for Microsoft, which has now been shown to hold behind-the-scenes power over the non-profit research organization that OpenAI was supposed to be.

There is another schism besides AI safety vs. AI acceleration: whether or not to merge with machines. In 2017, Sam predicted this merge would begin in earnest around 2025, having already started with algorithms dictating what we see and read. Sam seems to be in the transhumanist camp, where others focus more on keeping control or granting full autonomy:

> The merge can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot. But I think a merge is probably our best-case scenario. If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.

> Although the merge has already begun, it’s going to get a lot weirder. We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.

https://blog.samaltman.com/the-merge

So you have a very powerful individual with a clear product mindset: courting Microsoft, turning Dev Day into a consumer spectacle, first in line to merge with superintelligence, lying to the board, and driving wedges between employees. Ilya is annoyed by Sam talking about existential risks or lying AGIs, when that is his thing. Ilya realizes his vote breaks the impasse, so gives a lukewarm "I go along with the board, but have too much conflict of interest either way".

> Third, my prior is strongly against Sam after working for him for two years at OpenAI:

> 1. He was always nice to me.

> 2. He lied to me on various occasions

> 3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)

One strategy that helped me make sense of things without falling into tribalism or picking a side by ideological affinity is to assume both sides are unpleasant snakes. You don't get to be king of cannibal island without high-level scheming. You don't get to destroy an 80-billion-dollar company and leave visa holders soaking in uncertainty without some ideological defect. That seems simpler than a clear-cut "good vs. evil" battle, since this weekend was anything but clear.
