thesteamboat | 5 years ago
Your talk on this subject (helpfully posted below) only furthers this when you present a variety of arguments of wildly varying strengths against AI, in the same vein as "37 challenges to evolution". It makes me feel like you have some valid criticism and a lot of bad faith argumentation.
Sadly, I think this characterization is only mostly unfair rather than entirely so. I enjoy your writing and thoughts, and you speak with clarity, but on this subject I think you have constructed a mold of bad-AI-argumentation that you squeeze all AI-argumentation into and in so doing fail to rebut any of it.
idlewords | 5 years ago
Compare this to the debates about hyperintelligence and the anticipated behavior of posited hyperintelligent beings. These consist of (1) an argument by extrapolation that machines or other organisms can surpass human intelligence bolted onto (2) endless from-first-principles blathering about how such a posited hyperintelligent entity would behave. It's like looking at the vial of hydrogen in an otherwise empty universe and trying to deduce things about armadillos from it. In fact, it's much worse, since we are trying to infer things about hypothesized beings who are, by definition, beyond the ability of our minds to encompass.
This is exactly unlike any scientific discourse, and exactly like deist arguments about the nature of the gods, where (1) you first prove a God must exist, from whatever 'unmoved mover' argument you find personally convincing, and then (2) infer a huge amount of information about that God's behavior (omniscient, benevolent, likes justice) through a series of intellectual non sequiturs. The only innovation here is people swearing up and down that we can build the god ourselves.
The one thing we know about hyperintelligence, in any form, is that it is fundamentally not something whose behavior and nature we can infer from first principles, for the same reasons your cat will never understand why you didn't get tenure.
I see nothing wrong in the intellectual project of trying to frame questions about the nature of intelligence, and how computers can behave in intelligent ways. Nor do I think it's foolish to wonder about what it would take for machine intelligence to arise, or to approach even deeper questions, like the physical basis of consciousness.
That's not what we see here, though. Rationalists like gwern take the football and run it into the end zone of GAME THEORY, in the end revealing far more about themselves, their anxieties, and their hopes than shedding any light on the world we inhabit together. They are accompanied by a bunch of otherwise smart people who have scared themselves silly by the prospect that we might build these gods by accident, abruptly, and that we are at imminent risk of doing so.
And to the extent that large numbers of smart people (including ones with access to great wealth) have bought into what is fundamentally an apocalyptic religious cult, I think we at least need to call it by the right name.
peripitea | 5 years ago
Put another way, just because something is impossible to demonstrate conclusively or through observed reality does not automatically mean it can't be true, right? Of course it does mean that these claims warrant far more skepticism, and probably the large majority of them are untrue. But it seems obviously incorrect to automatically assume that they cannot be true. Am I misunderstanding your reasoning in some way?