The question of whether or not AlphaGo is 'really' intelligent is irrelevant to whether it can beat me at Go. The question of whether or not the Pentagon's integrated AI system is really intelligent is similarly irrelevant to whether or not it might undertake a program of action its creators would object to if they understood what it meant.
Is it? It seemed to me like the author assumes AI decision making will be roughly equivalent to biological decision making, just faster. I thought one of the Chinese Room arguments is that biological decisions will always be "different" from AI ones.
Also, it seemed like the author assumes great technological advances in AI, but not in biology. If we're gonna dream shit up, why not dream that brains in the future will be 10,000 times as dense and computers won't be able to keep up except as tools?
I think we will discover that our own consciousness works like the Chinese Room, and acknowledging and internalizing that will cause tremendous unrest among philosophers, computer scientists, neurologists, and other academics--potentially even reaching into the law.
Symmetry|9 years ago
gwern|9 years ago
kayimbo|9 years ago
snowwrestler|9 years ago
kobeya|9 years ago
KellhusSmellhus|9 years ago
arctangent|9 years ago
https://en.wikipedia.org/wiki/Philosophical_zombie