My "Effective Altruism Is Not A Cult" t-shirt is prompting a lot of questions already answered by my shirt.
I've dug into this stuff over the last couple days, because I had previously understood Effective Altruism to be something akin to like a pledge drive for Charity Navigator's best-reviewed charities or something, and it turns out it's nothing of the sort. In fact, if you read some of the commentary from EA insiders about Sam Bankman-Fried (these come up in stories where EA distances itself from SBF, or claims they knew he was up to no good all along), it's hard to shake the comparisons to Scientology: they have their own weird language, there's a hierarchy, and people on the inside operate under a constant threat of ostracism.
To that, add that some of the ideas are clearly dementing. Take Scott Aaronson's recent post on SBF, which is shot through with EA-think; it includes this:
And a traditional investor who made billions on successful gambles, or arbitrage, or creating liquidity, then gave virtually all of it away to effective charities, would seem, on net, way ahead of most of us morally.
This is logic straight out of the 16th century Medici papacy of Leo X. Maybe it's some kind of horrible drug interaction with Rationalism, or maybe EA is fundamentally Rationalist? Either way: when your logic has established that you can obtain moral superiority by literally buying indulgences, it has officially proven too much, and if you can't recognize that, something has gone terribly wrong in your thinking. Over and over I keep coming across EA arguments that could be used to justify almost literally any behavior, so long as you can tell yourself a story about having a long-term positive goal.
Also: tens of millions in donations to "AI safety" organizations. Yes, to answer the Economist; this movement is irretrievable.
"Maybe it's some kind of horrible drug interaction with Rationalism, or maybe EA is fundamentally Rationalist?"
I have always associated EA with the rationalist movement, since I came across it via reading rationalist stuff.
If you define EA as "figuring out how to deploy a given set of resources so as to do the most good," your conclusions are going to depend on your value system: how do you define and measure good?
So when a bunch of rationalists who believe in consequentialism do that, they're going to maximize their value system and end up looking really weird to everyone else who doesn't share their values. You'd have the same outcome in a different flavor if a bunch of Christians did it.
I disagree with a bunch of the underlying values in the EA movement but I really admire the mindset that actually tries to solve the problem of "how can we be most effective with our giving?" instead of what I personally do, which is get overwhelmed by guilt and confusion, stop thinking about helping other people, and go back to my default of living my life mostly thinking about how to maximize my own comfort and preferences.
Sorry, Scientology? This comes off as pretty bad faith.
Most EAs are people like me who read some essays, set up some recurring donations to GiveWell, and call it a day. If that's how being a Scientology member goes, then it must be milder than I've been led to believe.
> I've dug into this stuff over the last couple days, because I had previously understood Effective Altruism to be something akin to like a pledge drive for Charity Navigator's best-reviewed charities or something, and it turns out it's nothing of the sort.
As someone with multiple EA friends: this is precisely what it is.
> they have their own weird language, there's a hierarchy, and people on the inside operate under a constant threat of ostracism.
The first is true, the latter two are bullshit.
> Either way: when your logic has established that you can obtain moral superiority by literally buying indulgences
That's a gross mischaracterization. A better metaphor is Robin Hood, someone who steals from the rich to give to the poor. And who is widely regarded as a folk hero.
> tens of millions in donations to "AI safety" organizations.
If AGI has even a 1% chance of happening this century and those donations reduce the risk of it going wrong and killing everyone by 1% that's still money very well spent.
> they have their own weird language, there's a hierarchy, and people on the inside operate under a constant threat of ostracism.
Most groups of people are like this to some extent. Most corporations are like this. Cultiness is a spectrum, and the comparison to Scientology is a bit much, as Scientology does much worse things than that.
> Over and over I keep coming across EA arguments that could be used to justify almost literally any behavior, so long as you can tell yourself a story about having a long-term positive goal.
What you describe is a little too outcome-based for my preferences, but you act as if outcome-based morality isn't a common view; as if viewing your moral standing as the result of your actions, how much happiness you have caused minus how much suffering, is intrinsically unreasonable.
As far as "telling yourself a story" goes - that's how all morality works. There is no Santa Claus making a list of naughty or nice. You have to judge for yourself. Sure, there are other metrics. I personally value intention, acting with honour, and ensuring the things you do can be universalized, but at the end of the day I am the one who has to verify if I am living up to my own moral precepts. If I wanted to delude myself into self-justifying something bad, it would probably be pretty easy, as there is nobody but myself to stop me. I would even argue that is what being moral is: how resistant you are to petty self-justification.
Please do not forget that you can disapprove of all the longtermism stuff and of the alleged cult-like structure, and still think it's important to pledge your money to charity.
Please don't let the bad publicity around EA stop you from donating what you can, to charities that you trust and that have a proven record of efficacy, whether identified through EA-related organizations or other research. The original message still stands and should keep standing.
There are surely two separate issues here? One is whether it's bad to eg. take first world crypto investors' money and use it to stop hundreds of thousands of people dying of preventable diseases in developing countries. Personally, I think not really, or at least it'd be an odd thing to be upset about? Happy to hear the arguments in favour of huge numbers of preventable deaths.
Where that narrative gets problematic with respect to SBF is that, as I understand it, in the eyes of EA "leadership", "too many" EA individuals choose to allocate their donations to boring global health causes rather than to building the movement or weird long-termist stuff. So with that in mind, the obvious decision was to take the friendly billionaire money and put it into those "important" areas. Which means now we have a popular and hard-to-refute characterization of EA as an organization that raises money for itself and for weird causes normal people don't care about. But, like I said, the majority of EA money goes towards global health, i.e. stopping people dying.
Been saying the same thing. EA is functionally the same pattern I grew up around among evangelical extremists: an all-purpose source of self-righteous authority, based on theoretical virtue, that erases actual harm.
Are you describing "giving virtually all of [your earnings] away to effective charities" as "literally buying indulgences"? That's literally the most cynical thing I've ever heard.
I can understand having a negative opinion of the EA movement as it exists today. But your reaction to the charitable investor thing makes me think that you actually disagree with... altruism. Or at least the idea that being more altruistic is morally superior to being less altruistic.
Depends what Aaronson meant by "being ahead morally"; it's unclear whether he is referring to morality in the sense of being virtuous, or in the sense of making other people better off. Hypothetical Charitable Investor's donations are a sign that they might be rather virtuous, but they certainly aren't 1000 times more virtuous than the average person, and if it turns out that they secretly have a habit of kicking puppies, then they aren't virtuous at all. But it could certainly be true that, compared to an average person, Hypothetical Charitable Investor has had 1000 times the impact in terms of how they improved other people's lives.
> This is logic straight out of the 16th century Medici papacy of Leo X. Maybe it's some kind of horrible drug interaction with Rationalism, or maybe EA is fundamentally Rationalist? Either way: when your logic has established that you can obtain moral superiority by literally buying indulgences, it has officially proven too much,
I think you're missing the point of that paragraph, or at least misrepresenting it here.
It's not saying that what SBF did was ok or that fraud is ok or any such thing. It's specifically trying to say that just because this happened with crypto, which might be a bubble, doesn't make it any different than normal everyday finance. You might or might not agree with that idea, and might or might not agree that finance itself is good, but that's the point of this paragraph specifically. Here it is in full; note the first sentence:
> Even if cryptocurrency remains just a modern-day tulip bulb or Beanie Baby, though, it seems morally hard to distinguish a cryptocurrency trader from the millions who deal in options, bonds, and all manner of other speculative assets. And a traditional investor who made billions on successful gambles, or arbitrage, or creating liquidity, then gave virtually all of it away to effective charities, would seem, on net, way ahead of most of us morally.
Also, just a note on this:
> Either way: when your logic has established that you can obtain moral superiority by literally buying indulgences, it has officially proven too much,
The problem with indulgences wasn't just the idea of being able to offset moral harm with moral good. Firstly, that position itself is at odds with what most EA people say, afaict.
Secondly, the problem with indulgences is also that they are not real. I think even a religious person will agree that the indulgences of the 16th century were just a con. An atheist will dismiss the whole idea entirely.
The underlying logic of someone who made and gave away billions being overall ahead of other people morally is sound, IMHO. That of course is not to say that this then "allows" them to go and do some evil or something - that's not at all what the EA movement is saying.
> Maybe it's some kind of horrible drug interaction with Rationalism, or maybe EA is fundamentally Rationalist?
Depends what you mean by Rationalist. I would say that EA, the modern movement, is a hobby of people who belong to "rationalism", the modern movement. I'd be fairly comfortable calling them a cult. (Frequent exhortation: "Read the Sequences!") EA is the same thing, simply because the people who do EA and the people who do rationalism are the same people.
But from your comment, I can't tell whether Rationalist refers to that group or to some other thing.
> I had previously understood Effective Altruism to be something akin to like a pledge drive for Charity Navigator's best-reviewed charities or something
I mean, I think you were basically right before, and now you're over-reading a lot from an awkward sentence fragment. I agree that the phrase and implied comparison is garbage.
It seems kind of like hearing MTG talk about the Gazpacho Police or whatever and subsequently ascribing it to the GOP. Yes, the quote is clearly inane. Yes, she is a person of some influence in the GOP; no, that doesn't mean everyone in the GOP believes what MTG is saying. I guess you could say the GOP is a cult, but I think maybe cult doesn't mean anything if it applies to 40% of the US population.
Anyway, I totally agree as far as "AI safety." And fuck SBF and steal-to-give.
> Yes, to answer the Economist; this movement is irretrievable.
What name would you give to a movement to point charitable dollars at, say, the front page of GiveWell? (GiveWell is more or less what EA means to me.)
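Sure sounds like a cult in practice, especially in the whole context (https://aiascendant.substack.com/p/extropias-children-chapte...; the other 6 parts of the series are great reading too).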
> Either way: when your logic has established that you can obtain moral superiority by literally buying indulgences, it has officially proven too much, and if you can't recognize that, something has gone terribly wrong in your thinking
Buying indulgences was wrong because it was used to cover up moral wrongdoing or reduce one's penance for wrongdoing, not because the concept of money buying moral praise is intrinsically false or corrupt. It seems pretty clear from Scott's example that this is not the scenario he was describing.
Phrased differently: all else being equal, a person who contributed billions to charities deserves more moral praise than one who didn't or can't. Edit: and if you don't think so, then why do we give such people awards and recognition?
This won't be convincing to everyone of course, and I expect the dividing line will be whether you think utilitarianism is the correct way to approach ethics. Of course if you don't, then you won't subscribe to EA anyway.
The EA forum in fact uses the forum software developed by LessWrong, in very close collaboration with them (it's not open source or anything). In general it looks like the two organizations have a lot in common, and LessWrong is a poster child for the rationality movement.
If we're gonna make vaguely sweeping comparisons to religion, I would like to point out that much of this kind of instinctive aversion to EA probably comes, itself, from the secular Protestantism pervading US culture.
- Outcome matters more than the intention of the person trying to do a good act? Absurd; here, I can even prove it with an absurd example!
- Trying, at all, to build a framework for good and bad different from that handed down to us? How dare they.
- Artificial general intelligence? Pointless to worry about, only humans can be really intelligent because ~~they have a soul~~ there's something about consciousness that we don't yet understand.
> ... they have their own weird language, there's a hierarchy, and people on the inside operate under a constant threat of ostracism.
Aside: A good rule of thumb for when you should start getting out of something is when the group comes up with a cute little name for those outside the group, typically ending in 'ie'. Normies, heggies, fundies, libtards, etc. It starts out innocent enough, but when those artificial divisions spring up, that's when the brains start shutting off and people start preaching to the choir.
> Either way: when your logic has established that you can obtain moral superiority by literally buying indulgences, it has officially proven too much, and if you can't recognize that, something has gone terribly wrong in your thinking.
I see it as a natural outcome of consequentialism. Your ability to do good things is tied to your circumstances, which are not the same for everyone. Are you saying that you're certain that consequentialism is an invalid way to assess morality?
> Over and over I keep coming across EA arguments that could be used to justify almost literally any behavior
Let's stop being surprised by the self-serving rationalizations of (many) tech leaders. Expect it, stop trusting it by default, stop taking them seriously.
Or try this hypothesis: Whatever the issue, their theory (EA or whatever else) will always benefit them. Find a more predictive hypothesis if you can.
You say "justify almost anything" but I've always been surprised by the lack of distribution of wealth as an answer to helping people.
Maybe I've just missed it, but EA feels to me a lot like Communism as invented by Ayn Rand, so governments, democracy and regulation have always seemed notable by their absence.
Like most things: EA as a concept is fine. Finding the "maximal" way to improve the lives of others is great, and finding better ways to contribute than charity and volunteering is honorable.
The issue is when people take it to the extreme (like someone who literally argued that a person doing mere charity or volunteer work is bad, because they are wasting time and effort that they could've spent making money and recruiting to contribute more charity and volunteering), or just sidestep "altruism" entirely and use the name and community as a stepping stone to push an unrelated scam.
I don't know whether or not SBF genuinely believed in effective altruism, but if he did, he let the "principles" override common sense.
> like someone who literally argued that a person doing mere charity or volunteer work is bad, because they are wasting time and effort that they could've spent making money
This is the part of EA that triggers a lot of red flags for me. Making money, or rather making significant money, necessitates participation in a system of exploitation that is largely responsible for the ills the charitable donations are meant to alleviate in the first place. Maximizing that money necessitates optimizing the exploitation.
The whole thing just seems like an exercise in morality-washing one's own greed. Lies, damned lies, and statistics, perpetrated with "rationality".
The whole thing reminds me of "I, Robot", when the AI decides that humanity can't care for itself and that, for the greater good, it must be ruled by the AI.
That is the kind of vibe I get from this writeup, and from the level of separation from humanity at which these people operate.
> The issue is when people take it to the extreme (like someone who literally argued that a person doing mere charity or volunteer work is bad, because they are wasting time and effort that they could've spent making money and recruiting to contribute more charity and volunteering)
I mean, the concern here is that coming off as an asshole can be self-defeating, not that these views are actually incorrect, right? Seems like being an atheist: an "extreme view" for many, and a topic that may sometimes need to be handled with caution, but that's really mostly a critique of the sensitivities of others.
Effective altruism (EA) is a philosophical and social movement that advocates "using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis". [1]
Seems like a good idea, if only to have some sort of open system for rating charities. I suspect the issue is everything that comes along with the idea, especially... people. The problem is always people :-). Perhaps a "Michelin Guide to Charities" would be easier to market and explain? :-p
My favorite explanation of how "effective altruism" works is this SMBC comic [2]. :-)
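--
1: https://en.wikipedia.org/wiki/Effective_altruism
2: https://www.smbc-comics.com/comic/2011-07-13
It is just utilitarianism with modern marketing, and pure utilitarianism has problems.
See one example below where utilitarianism's suggestion goes against what almost everyone considers ethical:
You can take 10 healthy organs from one person without any social connections and use the organs to save 10 terminal patients who would otherwise die.
EA effectively sanctions robbing a million people to help a million people + 1.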
As an Effective Altruist I want to cure cancer, so I maximize my income by selling cigarettes to children outside playgrounds. The money will eventually be donated to the cause in my will, and in the meantime the prevalence of lung cancer in teens will incentivise the Free Market™ to find a cure.
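No need to thank me, really.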
Maybe this is superficial, but I was put off by the name "effective altruism". The name felt like an implicit dig that most altruism is not effective: oh, your altruism is not that productive, but my altruism, well, it's effective altruism. Especially when that notion is pushed by people who overlap with the rationalist subculture, another arrogantly named group.
In general I find it concerning when technical people encroach upon non-technical areas without giving a sufficient amount of respect to the people who were in the field already. Those people, the experts, do know a lot and to have this kind of nerd savior branding of effective altruism, as if you are bringing the enlightenment that allows these poor altruists to finally be effective, well that just seems a little presumptuous.
Humans have devised many, many institutions and philosophies for helping each other grow into powerful moral people. Uncountable religions, self help movements, mind-body wellness systems, the list goes on and on.
Every single one, without exception, has examples of people at the top of the institution, widely regarded as masters of the system, who are miserable destructive human beings.
This makes me profoundly sad. I could understand if every single group of people had members who are toxic, but the fact that the selection systems for progression are so predictably bad...
IME, the best you can do is to find a small local scene that practices the system in a way that is healthy and nurturing for the members and live with the cognitive dissonance caused by the rot at the top. Whether it's your small neighborhood church, a martial arts dojo or your toastmasters club, these things can be great in the small even if they're awful in the large.
EA is a very, very tempting horse to beat, to the point that beating it is EA's own favorite internal sport. There are large quantities of ink spilled criticizing it, from inside and outside. But it is all pretty worthless without taking a look at the foundations of the movement.
To me, SBF embodied the worst stereotypes about the Effective Altruism movement: the not-so-subtle air of I'm-smarter-than-thou, the hubris to think that whatever they have thought of hasn't already been thought of and dismissed by others (probably for good reason), and a bull-headed focus on risk and return that can be exactly quantified, to the point of ignoring other, "softer" sides of the issue.
SBF isn't alone in this. In school, while interning for a competing hedge fund, I met a drove of Jane Street interns who thought exactly the same way, in terms of "EV" or expected value -- the mean outcome from a random event. For example, whenever we would go out for ice cream, they would try to flip a coin to see who pays, given that the "expected cost" was the same. This behavior is fine if you're a market maker betting 0.001% of your worth on a toss that you win 51% of the time, and I imagine this is why Jane Street tries to cultivate it. But unfortunately, in this hyperfocus on returns, they forgot that "risk"/"variance" exists, and sometimes flipping a coin will land you in such a deep hole that you can't continue to play the game long enough to dig yourself out of there.
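You can read some thoughts by the man himself here, which fly in the face of all such work (Kelly bets, etc.) in variance minimization: https://nitter.net/SBF_FTX/status/1337250686870831107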
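To make the variance point concrete, here is a minimal simulation sketch. The 51% edge is the example from the comment above; the bet sizes, round counts, and trial counts are my own made-up parameters. Betting your whole stake on a positive-EV coin flip almost surely ends in ruin, while a small Kelly-sized bet compounds:

```python
import random

def simulate(fraction, p=0.51, rounds=1000, trials=2000):
    """Bet `fraction` of wealth each round on a coin that wins with probability p.

    Returns (number of bankrupt trials, median final wealth)."""
    finals = []
    for _ in range(trials):
        wealth = 1.0
        for _ in range(rounds):
            stake = wealth * fraction
            wealth += stake if random.random() < p else -stake
            if wealth < 1e-9:  # effectively bankrupt, stop playing
                wealth = 0.0
                break
        finals.append(wealth)
    finals.sort()
    return sum(w == 0.0 for w in finals), finals[len(finals) // 2]

# Bet-it-all has the higher expected value per round, yet essentially always
# busts; the Kelly fraction for an even-money bet is 2p - 1 = 0.02 here.
for fraction in (1.0, 0.02):
    busts, median = simulate(fraction)
    print(f"fraction={fraction}: {busts}/2000 busts, median wealth {median:.3f}")
```

The expected value of the all-in strategy is enormous, but the median outcome is zero; that gap between mean and median is exactly the variance blindness described above.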
I would consider "Effective Altruism" to be a fairly broad abstract attitude of trying to effectively help living beings (oneself & others), as guided by one's own opinions. Self-examination, critical thinking & debate are core values, and as far as I can tell continue to be so.
It's unfortunate that it's being defined by the rightly controversial opinions of famous people.
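To balance this article, consider checking out one of the main EA websites: https://www.effectivealtruism.org/articles/introduction-to-e...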
For example, it indicates that longtermism is "a school of thinking within effective altruism". The popularity of such ideas will vary with time, and they will be further refined with ongoing thinking & healthy debate. I for one fail to understand why it's important to maximise the number of future lives, which don't even exist - it seems very risky & unsustainable.
I used to work for one of the largest software vendors in the non-profit space worldwide and it fundamentally changed the way I look at the non-profit world. The big ones have essentially become these weird quasi-think tanks that passively influence policy or actively engage the community to achieve an often political agenda.
For example, a large non-profit can't explicitly lobby congress to ban sex education in schools. But, they can go around and "buy out" small non-profits into fulfilling their agenda. This is usually pretty easy to do because small non-profits are almost always cash-starved, and it can be really hard to say "no" when some outside figure is willing to provide a massive percentage of your operating budget in perpetuity in exchange for avoiding certain topics (like mutual consent) in schools.
This very scenario happened to an acquaintance of mine. She wanted to teach consent in schools after the #metoo scandal, and spent two years grinding to improve access to consent-based communication for kids; then someone offered her obscene money under the condition that she make tweaks (i.e. "don't talk about consent in school because that will encourage children to have sex", that kind of stuff). She took the money because she saw a way out of needing to hustle so hard for cash, but of course she sold her soul, and in the end she regretted it.
And none of those interactions ever get recorded. Nobody is tracking what big non-profits and charitable funds ask (or demand) others to do, and they get to market themselves as doing good for the world. I'm sure some of them are, and I'm sure there are tons of well-meaning people working for some of them, but that world was so much darker than I ever imagined, at least from the periphery.
Just the other day I was listening to this Dave Troy podcast, "Against Longtermism" [0], and it was the first time I had ever heard the term "effective altruism"; yet in the few days since listening to it, EA has come up multiple times. I also remembered that the title screen of the Baba Is You videogame on the Nintendo Switch promotes the "Giving What We Can" charity. Very interesting stuff; I highly recommend checking out not only that podcast but the rest of the Oil, Gold, Crypto and Fascism series from Dave Troy.
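[0] https://podcasts.apple.com/au/podcast/against-longtermism-wi...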
Their quality-of-life points combined with long-termism remind me of that SimCity build that maximized the population to absurd levels. If you get "points" for each life, past and future, then maximizing the number of future lives is the easiest way to maximize your high score.
> Bostrom calculates that 10^29 potential lives are lost for every second that we fail to colonise the supercluster of galaxies containing the Milky Way.
I think an easy solution would be to normalize the score, as I think the sum of human happiness would be "better" if there was only ever one human who gave his life a 10/10 than if there were two humans who gave their lives a 9/10.
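As a toy version of that normalization idea (the two worlds and their scores are just the ones from the comment above; nothing here is an established welfare metric):

```python
# Two toy worlds: total score (what the SimCity-style maximizer optimizes)
# versus normalized (average) score, as the comment above suggests.
worlds = {
    "one person at 10/10": [10],
    "two people at 9/10": [9, 9],
}
for name, scores in worlds.items():
    total = sum(scores)
    average = total / len(scores)
    print(f"{name}: total={total}, average={average:.1f}")
# Summing prefers two people at 9/10 (18 > 10);
# normalizing prefers the single 10/10 life (10.0 > 9.0).
```

Average utilitarianism has well-known problems of its own (it can favor tiny, blissful populations), which is part of why this debate never quite settles.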
Admittedly the SBF debacle is the first time I’ve heard of EA, but I don’t think it’s surprising that it’s attracted such characters.
Frankly it strikes me as just sketchy moral accounting, where you can be as greedy or exploitative as you want to be as long as you’re contributing to something that you can justify as having a higher “moral score”, but in the end you’re doing all the calculations and all the numbers are made up.
Does the effectiveness of various forms of altruism actually form a total order? I've yet to see any kind of rigorous argument in favour, yet it seems like a fundamental requirement if one is going to expend such incredible resources on attempts to maximise this value.
The reason I ask is that, when I first discovered Effective Altruism, finding the premise to be deeply compelling, I ended up staying up all night watching videos trying to reconcile / assimilate it into my world view. In one video, various trolley problem scenarios were enumerated, and the EA guests were advocating sparing a Lamborghini to the detriment of a child's life, so that the car could be sold off and the money donated to saving multiple children in the developing world. Furthermore, there was some suggestion of a threshold at which the altruism of sparing X domesticated animals was 'higher' than sparing Y humans, which definitely felt like it fell into [citation needed] territory.
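One way to see why a total order isn't automatic: if outcomes are scored on more than one dimension, Pareto comparison gives only a partial order, and collapsing the dimensions into one number is itself a value judgment. A sketch with invented scores and invented intervention names:

```python
# Invented two-dimensional scores for two hypothetical interventions.
interventions = {
    "bednets":   {"human_lives": 100, "animal_welfare": 0},
    "sanctuary": {"human_lives": 0,   "animal_welfare": 500},
}

def pareto_at_least(a, b):
    """True if a scores at least as well as b on every dimension."""
    return all(a[k] >= b[k] for k in a)

a = interventions["bednets"]
b = interventions["sanctuary"]
print(pareto_at_least(a, b), pareto_at_least(b, a))  # False False: incomparable
# Any total ranking needs a conversion rate between the dimensions --
# exactly the X-animals-vs-Y-humans threshold flagged as [citation needed].
```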
> Nick Beckstead, the chief executive of the ftx Foundation, wrote in a PhD dissertation completed in 2013 that “It now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.” Why? The former has the potential to create more long-term value and therefore save more lives.
Bringing racism and eugenics to a whole new "intellectual" and rationalist level. Sad that it took the swindling of $10-15 billion for all of this to come to light.
Someone commented saying "EA was basically Judas Iscariot's method" and it got flagged. I think that statement was inaccurate. A motive like EA ("that oil could have been sold and the proceeds given to the poor") was the front for his real method, which in the Gospel is to sell out Jesus for money so he could buy some land. His intentions were ultimately proven to not be altruistic. Knowing I'm guilty of some of this inappropriate judging myself, I think it's bad to judge people today who say they want to do EA by reaching for such a quick, easy "call them Judas" gambit.
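Judging by the article and comments, it looks as if people are living by a creed to be effectively altruistic.
In general, any philosophy which holds that there is an optimal answer suffers from the risk that the answer could be wrong.
In the case of GiveWell, one limitation is that you only support causes that can demonstrate impact -- you don't support causes that take risks.
I'm sure there are many other reasons not to think that everything can be solved through the application of logic :)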
This post does not have a concrete conclusion, it is just some thoughts I had while reading the article.
---
EA seems to have a core belief in post-scarcity. I've recently been thinking about post-scarcity. To my knowledge, I have not been engaging with EA content; I wonder if it has become pervasive because of their efforts, or if the time is ripe for the concept.
I've also been thinking about the number of different times that religion has evolved in a population. Religion fulfills various psychological needs; for a while it was popular for atheist thinkers to say we have evolved past the need for religion. I am agnostic, but I think people still need 'religion' and they will in the future. It may not be a religion with a personal god or gods; the concept of the divine comes from the human mind, and the human mind will always have a place for godliness of some kind. I could be wrong, religion could be a blip, but I don't think so.
Religions of the past have reflected the tribalism inherent in the human condition. There has always been a fight with another tribe that was the most pressing. Other fights are suspended or suppressed until the main fight is concluded or paused. In my lifetime, the main fight was the Cold War until that concluded, and then it was national politics. It's very hard to come to terms with the core beliefs and morality of the main opponent, no matter who or when.
I'm sure EA says something about religion and tribalism, but the human experience is also important. The human experience starts in a place, with some people. Those people are your tribe, until you find another tribe. Their religion is your religion, until you find another religion.
One thing a lot of people miss is that effective altruism has worked! The idea of donating to distant causes that do the most good (which is what EA is, and this notion long predates the 21st century) has massively reduced the number of deaths from malaria. Bill Gates's efforts to eradicate polio have done a world of good.
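Seems like it was more gross mismanagement and excessive stimulant consumption.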