EA sounds rational and wonderful. And it does make sense. We should follow our logic to its rational conclusions. Our moral intuitions are obviously wrong a lot - just look at the trolley problem. With reason we can do better.
The problem is that it quickly becomes an invitation to ideas like longtermism, which involve long chains of potentially flawed reasoning leading to the belief that you're doing tremendous good. And with confirmation bias making it hard to doubt your own logic, the potential for error is unbounded.
As the old saying goes, "Nobody is as easy to fool as a person who wants to fool himself."
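The fragility of long reasoning chains can be put in rough numbers: a small per-step error rate compounds quickly. A toy sketch, where the 0.9 per-step confidence is an assumption purely for illustration:

```python
# If each step of an argument is independently correct with
# probability p, a chain of n steps is entirely sound with
# probability p ** n -- confidence decays exponentially.
def chain_confidence(p: float, n: int) -> float:
    return p ** n

print(chain_confidence(0.9, 1))   # a single step: 0.9
print(chain_confidence(0.9, 10))  # ten steps: ~0.35
print(chain_confidence(0.9, 30))  # thirty steps: ~0.04
```

Even generous per-step confidence leaves a long argument more likely wrong than right.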
This problem is not original to EA. The history of the 20th century is full of would-be utopias. On the principle that the end justifies the means, the prospect of infinite good was used to justify unlimited harm. Unlimited harm came in the form of wars, famines, and mass repression. But the utopian futures never materialized.
That said, there is a lot of good in the idea of EA. It is better to do something effective than to virtue signal. But we should also be biased towards wins we can be more sure are real: things that are short-term and concrete. The more distant and hard to measure the win, the more we should suspect that we're missing something.
It's the old debate between rationalism and empiricism again.
"Rational" is a dangerous word. On the surface, it sounds like "smart". But if you take rationalism to the extreme, it becomes epistemological opposition to evidence. You build mental models and make logical conclusions without considering if the conclusions are also valid in the real world.
The scientific worldview is closer to empiricism than to rationalism. You start by assuming that your mental models are wrong. They may still be useful, but you have to make observations, run experiments, and consider the evidence to determine that.
Effective altruism is a useful concept. It only becomes problematic once you get too deep into rationalism. The effectiveness of your altruism is fundamentally an empirical question, and it should be answered by empirical means rather than by reasoning.
> The problem is that it quickly becomes an invitation to ideas like longtermism. Which involve long chains of potentially flawed reasoning, leading to the belief that you're doing tremendous good. And with confirmation bias making it hard for you to doubt your logic, leading to an unbounded potential for error.
I don't think we've been reading the same article, because you've got an extremely gentle take on this.
TFA specifically describes how the "poster boy" of the EA movement bought his way into having a best-seller by using $10m of stolen money gifted by FTX to propel his book ("based on marketing, not merit" according to TFA), and how that same person bought a $16m 15th-century mansion in the UK using stolen FTX money.
All the while they were presenting SBF as an "altruistic genius driving a Toyota Corolla" while they fully knew he was living an ultra-lavish lifestyle.
It's now a fact that SBF is a criminal, and the EA movement still hasn't given back the ill-gotten donations it received from SBF.
I hope that John J. Ray III (the person who was in charge of the Enron liquidation and is now in charge of the FTX liquidation) goes after that mansion in the UK and manages to claw money back from the EA movement. There's hope: money from the Enron scam was still being clawed back 15 years later.
People love to make quasi-religions by introducing flawed premises or arguments early in the reasoning process and then building a massive superstructure on top. People see this gigantic tower of stuff built on flawed premises and assume it must be legit.
A clever religion founder probably engineered the thing backwards: they found the conclusion they wanted, then worked back through the argument to see where they could sneak in a flawed premise or step to make it work.
The philosophical question is, what's the discount rate on moral decisions? Is saving 2 lives in 10 years better than saving one life now? It's the trolley problem over time. What should that number be? And who gets to set it?
Optimal values for young people are higher than those for old people.
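That trade-off can be made concrete with a standard discounting formula. A minimal sketch (the rates and the break-even case are illustrative, not a recommendation):

```python
# Present value of future lives saved under an annual discount rate r:
#   PV = lives / (1 + r) ** years
def present_value(lives: float, years: float, r: float) -> float:
    return lives / (1.0 + r) ** years

# Break-even rate for "2 lives in 10 years vs. 1 life now":
# solve 2 / (1 + r)**10 = 1, i.e. r = 2**(1/10) - 1, about 7.2% per year
break_even = 2 ** (1 / 10) - 1

print(f"{break_even:.4f}")         # 0.0718
print(present_value(2, 10, 0.03))  # ~1.49: at 3%, the future option wins
print(present_value(2, 10, 0.10))  # ~0.77: at 10%, save the life now
```

Below roughly 7.2% per year the deferred option wins; above it, the immediate one does. The formula is trivial; choosing r is the entire moral argument.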
The problem with "effective altruism" is much simpler. Most of the people behind it were crooks.
The author of the article isn't playing fair. He opens with SBF's downfall and then says that even without that, there are many reasons EA is suspect. But then he keeps reintroducing SBF into the narrative, and conflates the billions that SBF swindled with the millions he gave to EA, shading it to seem that the total amount was dirty EA money.
He also says things like, "leading EAs were spending large sums of money on Oxfordshire palaces". One is bad enough, and I don't fully trust his reporting when he blithely claims that multiple palaces were bought.
I've recommended "The Life You Can Save" to multiple people because the book had a great impact on me. Most of the claims critics make about EA sound completely alien to me, such as justifying getting filthy rich because at some point you will give half of it away. I'm sure some people use EA in that way, but does EA promote that view? In the same way, the eugenics movement tried to hijack the theory of evolution to justify behavior that the theory itself says nothing about.
The article is focused almost entirely on personal drama and no attempt is really made to refute the central tenets of EA.
I think a lot of people find the concept of EA morally threatening. Their natural reaction is to want to "take those people down a peg" because they perceive EA as an incursion on the moral high ground. I think this tendency is more pronounced among the left. Rather than discuss "how can we do good and get the best bang for buck and is this movement doing that?" they'll focus on people they think have illegitimately gained status and pillory them.
I think the EA "brand" should be more careful to avoid this very natural "crab bucket" type of moral backlash. IMHO a good start would be to de-emphasise "high flyers" and direct focus towards the many unpretentious people who do their best to do "a little good".
It's a wonder that obnoxious hit pieces like this still get made in this day and age. The first paragraph is filled with disingenuous strawman rhetoric.
> colonize space, plunder the vast resources of the cosmos
The author is obviously trying to draw a parallel between space colonization and plundering to make longtermism, and by proxy EA, look bad.
I'm not an EA, but I've never met people more receptive to criticism than they are. This is a group of people uniting around a desire to do good, actually going through with it, and somehow catching a huge amount of flak for it.
Think of all the poor unsuspecting stars! Have we not drained enough heat from our own already? When will plants learn to stop appropriating energy that isn't theirs to take?
I keep seeing takes like this, but the effect EA has had on my life so far is that it gave me motivation and an easy framework to donate thousands of dollars a year to fund deworming, vaccination, and other direct relief in underdeveloped countries, for two years in a row. I honestly had no idea who SBF was until FTX melted down. I saw zero connection between EA and Musk, Trump, etc.
Was I duped? I don't think so. SBF's downfall has definitely shaken my confidence in EA as a trustworthy institution, but I still generally feel great about those donations and will likely repeat them again next year (albeit with a closer look at exactly how the funds are distributed).
As with many things, it's easier and more fun to disparage movements than it is to get involved and make positive change. This article is a good example of that.
Glad it's worked out for you. But if you had been inspired to take charitable action by, say, being born again in evangelical Christianity, should people abstain from pointing out the problems with that institution? And does every article critical of such an institution need to be tempered with praise to satisfy those who have not experienced its dark side or who disagree with the premise?
Speaking as somebody who isn't into EA, I assure you most of the people criticizing EA haven't donated a dime or spent a single minute volunteering. Everybody talks the talk about charity, but nobody walks the walk.
What happens in EA is pretty much a mirror of what happens in churches, the good and the bad. I hope it will serve to illustrate that we are all human and react similarly in similar environments.
You do you, and please don’t take this as a complaint about altruism per se. But making one feel good is often considered the most important (and most effective) part of effective altruism.
> ... but the effect EA has had on my life so far is that it gave me motivation and an easy framework to donate thousands of dollars a year to fund deworming, vaccination, and other direct relief in underdeveloped countries
TFA literally explains how an $18m donation of stolen money was used by the EA movement to buy a lavish mansion.
TFA also explains how the EA movement was lauding SBF for driving a beaten-up Toyota Corolla even though they fully knew he was living in a $40m luxury mansion in the Bahamas and flying private.
What makes you think most of the money you donate to such gurus actually ends up going to charitable causes?
If you want to donate, donate directly to charitable causes instead of donating to obvious charlatans.
The EA movement tarnished its reputation by being an accomplice in the FTX scam that defrauded people of their money.
They played the "SBF is an altruistic genius driving a Toyota Corolla" card while they knew it wasn't true and people fell for it.
Turns out there was no altruistic genius. And that's the verdict of a court: guilty on seven criminal counts.
Maybe "EA" should be renamed "EC": "Effective Criminals"?
Prior to ~2021 or so, I felt that many if not most in the EA community not only wanted to donate their money in the manner that most effectively saved lives, but also donate as much of their money as they could while still avoiding relative poverty. So many were extremely frugal, living in small homes or apartments, donating as much money as they could afford to purchase mosquito nets, vaccines, and deworming medication for others, as well as many donating one of their kidneys to strangers, all things I think most would find commendable.
Billionaires living luxurious lifestyles and donating to prevent AI catastrophes were not something I was aware of at all prior to ~2021. I'm not sure how that ever became part of the movement.
In fact, longtermism in general wasn't something I was aware of until sometime around then. Purchasing mosquito nets, vaccines, and deworming medication has a demonstrable impact for saving lives and objectively reduces human suffering. In contrast, while AI, nuclear warfare, etc. can indeed kill billions, donating to those charities has absolutely no demonstrable impact. It's possible it has an impact, but it is near impossible to prove.
GiveWell states the following:
> GiveWell evaluates potential top charities along four main criteria: (1) Effectiveness, as supported by evidence the program the charity is implementing saves or improves lives; (2) cost-effectiveness, or how much ‘bang for the buck’ the charity offers in terms of lives saved or improved per donation; (3) room for more funding, or the charity’s ability to put additional donations to use; and (4) transparency.
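Criterion (2) above is just arithmetic: dollars donated divided by lives saved or improved. A toy sketch, with entirely made-up charity names and numbers:

```python
# Rank hypothetical charities by cost-effectiveness:
# dollars per life saved, where lower is better.
# All names and figures below are invented for illustration.
charities = {
    "Bednet Fund":   {"donated": 1_000_000, "lives_saved": 200},
    "Deworm Now":    {"donated": 500_000,   "lives_saved": 80},
    "Vaccine Drive": {"donated": 2_000_000, "lives_saved": 500},
}

ranked = sorted(charities.items(),
                key=lambda kv: kv[1]["donated"] / kv[1]["lives_saved"])

for name, data in ranked:
    cost = data["donated"] / data["lives_saved"]
    print(f"{name}: ${cost:,.0f} per life saved")
```

The arithmetic is the easy part; criteria (1) and (4) exist because the `lives_saved` numbers are only as good as the evidence behind them.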
So certain charities likely do save lives but would never be recommended by GiveWell, because they aren't transparent enough to verify that. There has to be concrete proof to get a recommendation, which is a perfectly reasonable decision on their part, and something I thought was felt by all in the EA community until I heard some advocating for charities that aim to reduce the chances of AI catastrophes and nuclear warfare. Those people always felt like a fringe part of the community until they started getting more news coverage.
I'm not condemning longtermism outright; I think those with the appropriate skills should work towards reducing the likelihood of nuclear warfare and AI-induced disasters. But I don't see how it could possibly be considered "effective" altruism to donate money to charities tackling those issues when we have not the slightest clue whether it's effective.
In a world where people die of preventable and curable diseases because governments choose not to spend more (or more effectively) on foreign aid, I absolutely believe it is critical that people take responsibility for saving those they can. If I see someone drowning at the beach, I'm not going to say "Well, I'm not going to save them because the government didn't pay for lifeguards." Sure, it probably should have, but at the end of the day someone is living or dying based on my decision: I'm going to try to save them.

The same is true for those dying of malaria, neglected tropical diseases, and malnutrition: yes, I believe governments should take responsibility for those issues, but they haven't, and if I don't donate what I can, more people will die. Hundreds of thousands die each year from malaria and NTDs, and millions from malnutrition. While I'm not remotely capable of solving those issues, I absolutely am capable of saving some of those people, which certainly has an enormous impact on those I can save. (Also, while hundreds of thousands die from malaria each year, millions suffer from it but survive, so we can reduce not only death but also pain.)

I think trying to get more people to donate to charities that are demonstrably effective at saving lives is extremely important. With the "Effective Altruism" movement now including stuff like longtermism, as well as having its reputation ruined, I'm not sure how advocacy should move forward. On one hand, I don't think it would ever be good for it to be an ideology: one should evaluate individual concepts on their own merits, not tie them into some larger ideology and then decide based on one's personal feelings towards that ideology whether to support them. Still, it's helpful to have terms that describe concepts in fewer words, as well as tangible goals for advocacy, since the more people who donate to charities that effectively save lives, the more lives are saved overall.
082349872349872|2 years ago
known for ages* by the phrase "the end does not justify the means"
* exitus acta numquam probat
throwaway3d3L7|2 years ago
I don't understand what I'm supposed to see in it. Could you clarify?
mentalpiracy|2 years ago
Effective Altruists hijacked their original, not-terrible idea into a cult.