Gnuffles's comments

Gnuffles | 7 years ago | on: Can “effective altruism” maximise the bang for each charitable buck?

Our intuition about communities stems from a history in which we only ever communicated with those near us. Now we can travel anywhere in the world within a day and communicate worldwide within seconds. Clearly our intuition has not caught up, but given these changes in what a community is (no longer defined simply by some radius around our location), maybe we should update our actions to reflect that?

It makes sense not to feel morally responsible for things that happen out of view when you cannot know what is going on and have no way to affect it. The thing is, we do know a lot about what is going on, and we do have tools to affect it. It just doesn't feel intuitively satisfying. It is possible to internalize that satisfaction regardless, though.

Gnuffles | 7 years ago | on: Can “effective altruism” maximise the bang for each charitable buck?

I feel like this criticism is largely unfair.

First off, yes, there is overlap between effective altruism and the rationalism community. That makes sense when you are trying to use reason rather than intuition to make decisions.

I fail to see why you qualify them as "so-called" rationalists. That said, besides Yudkowsky (and maybe not even him), I honestly don't know enough about them to defend them. But if you want to sling defamatory remarks at their expense, I feel they should be backed up (or left unsaid).

You mention that some people see AI as the greatest risk to humanity. Perhaps I misunderstand, but the way you phrase this, it sounds like you think that is absolutely ridiculous. If so, why? And what long list of assumptions would one need to agree to? And why would it be bad for there to be a common starting point for discussing this? There is a fair amount of uncertainty about how any superintelligent entity would act, so certainty that AI will be terrible seems silly. But a strong belief that it can pose a large threat seems, honestly, evident.

You say people use "AI risk" as a front to recruit people into their belief system. There is so much wrong with this. First off, why is it a front? That implies deception. Second, it is one facet of many. The core of EA is the desire to have a (large) positive impact. If some people think they can make their impact by working on AI safety, why do you feel the need to portray that as nefarious? Finally, "belief system" sounds incredibly dogmatic. EA is not a church. Yes, there is a set of beliefs that most people in the EA community would subscribe to. But I don't experience EA as some echo chamber where everyone is forced into a mold. Rather, people challenge both each other's ideas and their own. There are inevitably some biases and filters, but your portrayal of EA as a cult (intentional or not) is inaccurate.

As for EA communities tending to be alt-right... what on earth are you smoking? I help run a local chapter, and the focus is decidedly left-wing. And anyone I've noticed who is slightly more right-wing is definitely not on the misogynistic or racist side. I recently listened to an 80,000 Hours podcast with Bryan Caplan and noticed he's a libertarian. While I think libertarian views are mostly bonkers, at the very least the way his showed (e.g. arguing for open borders) is not insane on the level I'm used to from libertarians. Either way, he is an exception. Even if you can list some well-known names with strange views, I can say with a high degree of certainty that they are not even remotely representative of the community as a whole, especially not as I've seen it in the Netherlands.

FINALLY: I've honestly only ever seen Roko's Basilisk mentioned on an EA meme page. So much for taking it seriously.
