top | item 24476126

80386 | 5 years ago

AI safety isn't an EA "preoccupation"; it's just weird enough and noticeable enough that it's easy to mistake mere existence for prevalence. It's also not even their weirdest position.

The first question on their list is about the 'problem' of wild animal suffering - and I've personally seen EAs argue that, because some animals are carnivorous, nature itself should be destroyed.

And that's still not the weirdest position EAs take. Look up Brian Tomasik - specifically, his essay on the possibility that electrons might suffer.

Concern about superhuman AI is one thing; bullet-biting utilitarianism is another entirely.

(This isn't the only place where their philosophical framework is stuck in the era of the British Empire. They also tend to take a teleological view of history and moral development, believing that their views are the self-evident endpoint of ethical progress that every culture and civilization will eventually reach. They may not be as bad about this as they used to be - there are questions about China now - but I don't think they've quite come to terms with cultural contingency yet.)

Noos|5 years ago

It's a preoccupation because EA is mostly a rationalist thing, and Eliezer Yudkowsky has had tremendous influence on that movement through his involvement with LessWrong. His views on AI have more or less become a mainstream position among them.

80,000 Hours is more a cultural snapshot of the rationalist movement than anything else.