thesteamboat | 5 years ago

I have seen you and others make this point in the past, and it always seems equivalent to creationists shouting "Scientists who believe in evolution are the REAL dogmatists who don't want to look at the facts! We are really being persecuted by the religion of Science!"

Your talk on this subject (helpfully posted below) only furthers this when you present a variety of arguments of wildly varying strengths against AI, in the same vein as "37 challenges to evolution". It makes me feel like you have some valid criticism and a lot of bad faith argumentation.

Sadly, I think this characterization is only mostly unfair rather than entirely so. I enjoy your writing and thoughts, and you speak with clarity, but on this subject I think you have constructed a mold of bad AI argumentation that you squeeze all AI argumentation into, and in so doing you fail to rebut any of it.

idlewords | 5 years ago

The difference here (which I hardly believe needs explaining) is that the argument for evolution and natural selection is firmly rooted in observed reality. Even if you are a die-hard creationist, you don't dispute the existence of a wealth of evidence ready at hand that can be marshalled in the argument, for or against. We live on a planet teeming with life, and we are all agreed on the problem that needs explaining (we started with hydrogen and got armadillos; what happened?).

Compare this to the debates about hyperintelligence and the anticipated behavior of posited hyperintelligent beings. These consist of (1) an argument by extrapolation that machines or other organisms can surpass human intelligence bolted onto (2) endless from-first-principles blathering about how such a posited hyperintelligent entity would behave. It's like looking at the vial of hydrogen in an otherwise empty universe and trying to deduce things about armadillos from it. In fact, it's much worse, since we are trying to infer things about hypothesized beings who are, by definition, beyond the ability of our minds to encompass.

This is exactly unlike any scientific discourse, and exactly like deist arguments about the nature of the gods, where (1) you first prove a God must exist, from whatever 'unmoved mover' argument you find personally convincing, and then (2) infer a huge amount of information about that God's behavior (omniscient, benevolent, likes justice) through a series of intellectual non sequiturs. The only innovation here is people swearing up and down that we can build the god ourselves.

The one thing we know about hyperintelligence, in any form, is that it is fundamentally not something whose behavior and nature we can infer from first principles, for the same reasons your cat will never understand why you didn't get tenure.

I see nothing wrong in the intellectual project of trying to frame questions about the nature of intelligence, and how computers can behave in intelligent ways. Nor do I think it's foolish to wonder about what it would take for machine intelligence to arise, or to approach even deeper questions, like the physical basis of consciousness.

That's not what we see here, though. Rationalists like gwern take the football and run it into the end zone of GAME THEORY, in the end revealing far more about themselves, their anxieties, and their hopes than shedding any light on the world we inhabit together. They are accompanied by a bunch of otherwise smart people who have scared themselves silly with the prospect that we might build these gods by accident, abruptly, and that we are at imminent risk of doing so.

And to the extent that large numbers of smart people (including ones with access to great wealth) have bought into what is fundamentally an apocalyptic religious cult, I think we at least need to call it by the right name.

peripitea | 5 years ago

I find this type of argument confusing. Let's say we did live in a world where their hypotheses were true. To make it concrete, suppose a SuperAI were 20 years away and would wipe out humanity. How would we be able to know that right now, other than through the very kind of speculative inference you're criticizing?

Put another way, just because something is impossible to demonstrate conclusively and/or via observed reality does not automatically mean it can't be true, right? Of course it does mean that these claims warrant far more skepticism, and probably the large majority of them are untrue. But it seems obviously incorrect to automatically assume that they cannot be true. Am I misunderstanding your reasoning in some way?