item 9135994

zep15 | 11 years ago

I find it odd that AI risk has become such a hot topic lately. For one, people are getting concerned about SMI at a time when research toward it is totally stalled, and I say that as someone who believes SMI is possible. Stuff like deep learning, as impressive as the demos are, is not an answer to how to get SMI, and I think ML experts would be the first to admit that!

On top of that, nothing about the AI risk dialogue is new. Here's John McCarthy [1] writing in 1969:

> [Creating strong AI by simulating evolution] would seem to be a dangerous procedure, for a program that was intelligent in a way its designer did not understand might get out of control.

Here's someone thinking about AI risk 46 years ago! The ideas put forward recently by Sam Altman and others are ideas that have occurred to many smart people many times, and they haven't really gone anywhere (e.g., at no point between 1969 and now has regulation been enacted). I wish people would ask themselves why that is before making so much noise about the topic. The only people influenced by that noise are laypeople, and the message they're getting is "AI research = reckless", which is a very counterproductive message to be sending.

[1] McCarthy, John, and Patrick J. Hayes. "Some Philosophical Problems from the Standpoint of Artificial Intelligence." Machine Intelligence 4, Edinburgh University Press, 1969.
