995533|7 years ago
So unless you posit that a function has to rely on its particular materialization (that there is something untouchably magical about biological neural networks, and that intelligence is not multiply realizable), it should be possible to functionally model intelligence. Nature shows the way.
AGI will likely make humanity obsolete: either deprecate us, or consume us (make us part of the Borg collective). Heck, even a relatively dumb autonomous atom bomb or computer virus may be enough to wipe humanity from the face of the earth.
nradov|7 years ago
Even if we assume for the sake of argument that AGI is possible, there's no scientific basis for assuming it will make humanity obsolete. For all we know there could be fundamental limits on cognition. A hypothetical AGI might be no smarter than humans, or might be unable to leverage its intelligence in ways that impact us.
Nuclear weapons and malware can cause damage, but there's no conceivable scenario in which they actually make us extinct.
995533|7 years ago
I agree our knowledge is currently lacking, but I see no reason why it will never catch up.
There are fundamental limits on cognition. For one, our universe is limited by the amount of energy available for computation. And plenty of problems can be fully solved, to the point where being ever more intelligent no longer matters (beyond a certain level, two AGIs will always draw at chess). Another limit is practical: an AGI needs to communicate with humans (assuming we manage to keep control of it), so it may need to dumb itself down so we can understand it.
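To make the "fully solved" point concrete, here's a minimal toy sketch of my own (tic-tac-toe, since chess is only conjectured, not proven, to be a draw under perfect play): a minimax search plays perfectly, and two perfect players always draw, so intelligence beyond "perfect" buys you nothing.

    from functools import lru_cache

    # All 8 winning lines on a 3x3 board stored as a 9-char string.
    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != '.' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    @lru_cache(maxsize=None)
    def minimax(board, player):
        """Game value for X with perfect play: +1 win, 0 draw, -1 loss."""
        w = winner(board)
        if w == 'X':
            return 1
        if w == 'O':
            return -1
        if '.' not in board:
            return 0  # board full, no winner: draw
        values = [minimax(board[:i] + player + board[i+1:],
                          'O' if player == 'X' else 'X')
                  for i, cell in enumerate(board) if cell == '.']
        return max(values) if player == 'X' else min(values)

    print(minimax('.' * 9, 'X'))  # prints 0: perfect play is always a draw

Past the intelligence level needed to compute this table, every additional IQ point is wasted on this game. The claim is that chess, and many real problems, saturate the same way.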
Even an AGI merely as smart as the smartest human would greatly outrun us: it can duplicate itself and focus on many things in parallel. Then the improved bandwidth between AGIs will do the rest (humans are stuck with letters, formulas, and coffee breaks).
Manually deployed atom bombs and malware can already wreck us. There's no difference with autonomous (cyber)weapons.