(no title)
sama | 5 years ago
It is difficult to define AGI, and it is difficult to say what the remaining puzzle pieces are, and so it's difficult to predict when it will happen. But I think the responsible thing is to treat near-term AGI as a real possibility, and prepare for it (this is the OpenAI charter we wrote two years ago: https://openai.com/charter/).
I do think what is clear is that we are, in the coming years, going to have very powerful tools that are not AGI but that still change a lot of things. And that's great--we've been waiting long enough for a new tech platform.
icebergwarrior | 5 years ago
Anyone who has thought seriously about the emergence of AGI puts the chance that AGI causes a human-extinction-level event at ~20%, if not greater.
Various discussion groups I am a part of now view anyone developing AGI as equivalent to someone building a stockpile of nuclear warheads in their basement that they're not sure won't immediately go off on completion.
As an open question: suppose one believes that

1. We do not know how to control an AGI

2. AGI has a very credible chance of causing a human-extinction-level event

3. We do not know what this chance or percentage is

4. We can identify who is actively working to create an AGI
Why should we not immediately arrest people who are working toward an "AGI future" and try them for crimes against humanity? Certainly, in my nuclear warhead example, I would be arrested by the government of the country I am living in the moment they discovered the stockpile.
TOKYORACER99 | 5 years ago
For what it's worth though, I think you're right that there are a lot of parallels with nuclear warheads and other dangerous technologies.
Bx6667 | 5 years ago