I find it amusing that so many people are skeptical about general AI, trying to come up with arguments that it will never happen, while at the same time we don't understand how our own intelligence works. Go figure.
Mog: “What truly is fire? The divine blessing stolen by Prometheus? Concentrated Phlogiston? The element of change? Is it not madness to seek to create something we don’t even have a good definition of?”
Grog: “Grog rubs two sticks together” Lowers voice and looks around furtively “really hard.”
Not really. Build it, test it, notice it fails to meet what we expect of intelligence under conditions X, tweak it to fix that failure and repeat until we can't find any further failures. Then we'll have a formal model of intelligence that counts as a definition.
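The build-test-tweak loop described above can be sketched in a few lines. This is a purely illustrative skeleton, assuming hypothetical `model`, `tests`, and `tweak` stand-ins; none of it refers to a real system:

```python
# Hypothetical sketch of the "build, test, tweak, repeat" loop above.
# `model`, `tests`, and `tweak` are illustrative stand-ins, not a real API.

def refine(model, tests, tweak, max_rounds=100):
    """Iterate until the model passes every test we can think of."""
    for _ in range(max_rounds):
        # Collect every test the current model fails.
        failures = [t for t in tests if not t(model)]
        if not failures:
            # No known failures left: by the argument above, this working
            # model now counts as our operational definition.
            return model
        for failure in failures:
            # Patch the model to fix each observed failure.
            model = tweak(model, failure)
    return model
```

The point of the sketch is only that the loop terminates on "no further failures found", not on a prior formal definition of intelligence.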
We use tons and tons of pharmaceuticals whose mechanism of action is poorly understood at best or in a few cases not at all.
We are even able to predict whether other compounds might work without knowing why just based on structural similarity.
It’s ideal to know the full mechanism, and it obviously aids engineering, but there’s no reason you have to wait for that to use something. People used fire for millennia before oxidation-reduction reactions were understood.
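The structural-similarity point above can be illustrated with a toy sketch: represent each compound as a set of structural features (a crude "fingerprint") and predict activity for an untested compound purely from its similarity to known actives, with no mechanistic model at all. The feature names and threshold here are invented for illustration:

```python
# Toy sketch of similarity-based activity prediction: no mechanism of
# action is modeled, only structural resemblance to known-active compounds.
# Feature sets and the 0.7 threshold are illustrative assumptions.

def tanimoto(a, b):
    """Jaccard/Tanimoto similarity between two feature sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def predict_active(candidate, known_actives, threshold=0.7):
    """Predict activity if the candidate resembles any known active."""
    return any(tanimoto(candidate, ref) >= threshold for ref in known_actives)
```

This mirrors how the prediction works in spirit: "it looks like something that works, so it probably works", with the why left entirely open.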
Just imagine if someone actually succeeds and an AGI just starts happily chatting about everything, properly solving general problems like a 30-something engineer, mathematician, or whatever.
No one on the planet would know how to be sure it's not just emulating human behavior.
Sorry, but your argument doesn’t make sense. Engineering/building is different from understanding how things work. Sure, they’re related: if you build something you usually learn how it works, and vice versa.
What’s insane to me is to claim that something has a limit when you don’t fully understand it.
The basic problem is that it seems pretty clear that human intelligence is not anything like a Turing machine, and no one has presented any computational system that is not equivalent to a Turing machine. Some might conclude this is a foundational problem.
It does not follow, merely from noting that there are differences between human intelligence and a Turing machine, that no "merely" Turing-equivalent device could display human-level intelligence. Every attempt I have seen so far to carry through that argument either ends up begging the question or becoming an argument from incredulity.
Note that this is not an argument that it is possible, which certainly has not been conclusively established.
bamboozled|3 years ago
nohat|3 years ago
naasking|3 years ago
api|3 years ago
nb3423|3 years ago
estevaoam|3 years ago
hexomancer|3 years ago
dinkumthinkum|3 years ago
mannykannot|3 years ago