top | item 23893719


sama | 5 years ago

I think people should be impressed, but also recognize the distance from here to AGI. It clearly has some capabilities that are quite surprising, and is also clearly missing something fundamental relative to human understanding.

It is difficult to define AGI, and it is difficult to say what the remaining puzzle pieces are, and so it's difficult to predict when it will happen. But I think the responsible thing is to treat near-term AGI as a real possibility, and prepare for it (this is the OpenAI charter we wrote two years ago: https://openai.com/charter/).

I do think what is clear is that we are, in the coming years, going to have very powerful tools that are not AGI but that still change a lot of things. And that's great--we've been waiting long enough for a new tech platform.


icebergwarrior | 5 years ago

On a core level, why are you trying to create an AGI?

Anyone who has thought seriously about the emergence of AGI estimates the chance that AGI causes a human-extinction-level event at roughly 20%, if not greater.

Various discussion groups I am a part of now see anyone who is developing AGI as equivalent to someone developing a stockpile of nuclear warheads in their basement that they're not sure won't immediately go off on completion.

As an open question: if one believes that

1. We do not know how to control an AGI
2. AGI has a very credible chance of causing a human-extinction-level event
3. We do not know what this chance or percentage is
4. We can identify who is actively working to create an AGI

Why should we not immediately arrest people who are working on an "AGI-future" and try them for crimes against humanity? Certainly, in my nuclear warhead example, I would be arrested by the government of the country I am currently living in the moment it discovered this.

TOKYORACER99 | 5 years ago

The problem is that if the United States doesn't do it, China or other countries will. That's exactly why we can't afford to fall behind on such a technology from a political / national perspective.

For what it's worth though, I think you're right that there are a lot of parallels with nuclear warheads and other dangerous technologies.

Bx6667 | 5 years ago

You don’t know the distance! And you are conflating distances! The distance between AGI behavior and GPT-3 behavior has nothing to do with the distance in time between the invention of GPT-3 and AGI. That’s a deceptive intuition and fuzzy thinking... again, my point is that the “behavior distance” between AIM chat bots and GPT-3 would, under your scrutiny, lead to a prediction of a much larger “temporal distance” than 10 years. Nit-picking about particular things that this particular model can’t do is completely missing the big picture.