> Effective altruism is a philosophical and social movement that advocates "using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis".
I'm not sure how anyone could argue that what SBF was doing fits in any way with that. He's just been found guilty of fraud on multiple counts, so clearly the whole "effective altruism" thing was just an image he was trying to present, while acting completely against it in private.
Yes, he admitted multiple times that it was all just an act for public perception. Can't get much clearer than this:
KP: you were really good at talking about ethics, for someone who kind of saw it all as a game with winners and losers
SBF: ya
SBF: hehe
SBF: I had to be
SBF: it's what reputations are made of, to some extent
SBF: I feel bad for those who get f***** by it
SBF: by this dumb game we woke westerners play where we say all the right shiboleths [sic] and so everyone likes us
> I'm not sure how anyone could argue that what SBF was doing fits in any way with that.
That's pretty easy, actually. He seems to see himself as some kind of Robin Hood-esque figure, taking money from the rich and distributing it to the poor (or, more specifically, distributing the money of the rich to the places where it's most effective).
The argument is about whether EA naturally leads to this "the ends supersede the means" type of action, which is what the article argues.
It's actually very common for companies doing terrible things (fraud, environmental damage, bait-and-switch schemes, patent abuse, etc.) to lean hard on promoting the social work they're doing.
This is so common that short sellers like Kyle Bass actually look for it as an additional red flag when hunting down fraudulent companies.
SBF cannot even manage "ineffective" altruism. He puts other people through hell without a hint of empathy. How could anyone expect him to do "effective" altruism? In his own words, he "fucked up". But he is not sorry, he offers no apology, he's "not guilty". He is a compulsive liar.
It will be interesting to see how he fares in prison. Martin Shkreli wants to be his friend. Perhaps they can be pen pals. Maybe he'll get special treatment or early release.
Even if SBF was following the EA playbook to the letter, it did not work. The attempt was ineffective. He failed.
Michael Lewis shared an SBF story where the scenario (the bet) presented to SBF was a chance to make the world better or, if it failed, the destruction of the entire world. SBF allegedly would take that bet. All or nothing. Where have we seen this type of thinking before?
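The reasoning in that anecdote is plain expected-value maximization. A toy sketch of that calculus, with all numbers invented for illustration (they are not from Lewis's book):

```python
# Toy expected-value comparison of a "double-or-nothing" world bet.
# Probabilities and utilities are purely illustrative.

def expected_value(outcomes):
    """Sum of probability * utility over all (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Status quo: the world keeps its current utility with certainty.
status_quo = [(1.0, 100)]

# The bet: a 51% chance the world becomes slightly more than twice as good,
# and a 49% chance it is destroyed (utility 0).
bet = [(0.51, 201), (0.49, 0)]

ev_status_quo = expected_value(status_quo)  # 100.0
ev_bet = expected_value(bet)                # 102.51

# A naive EV maximizer takes the bet, despite the 49% chance of ruin.
print(ev_bet > ev_status_quo)  # True
```

The point of the toy example is that a strict expected-value maximizer will accept any such gamble as long as the EV edge is positive, no matter how catastrophic the downside.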
What would be the difference between SBF actually being an effective altruist, committing fraud to hoard wealth until his dying day and then turning it all into a public good, and SBF merely saying he was one while doing everything up to that last step?
The fraud is still illegal. He would still have gone to jail.
His brother was heading some “Guarding against pandemics” non-profit that he was involved with. I believe there was one California proposition they were heavily funding related to it.
I don’t know if it was started in earnest or if there was some ulterior motive.
> He's just been found guilty of fraud on multiple counts, so clearly the whole "Effective altruism" was just an image he was trying to present, while acting completely against it in private.
I don't think that's right. In his head, it wasn't fraud. He was moving money around to backstop losses, sure, and OK, technically it wasn't his. But it was all going to be OK in the end and no one would know. No one was going to lose any money, so no fraud. QED.
In the real world, criminals don't think they're criminals. Everyone's got a good reason for doing what they do.
I think anything SBF did to benefit others was a byproduct or side effect of what he was actually doing, enriching/empowering himself, like you say. Or, at best, a hedge in the form of insurance against bad PR.
Does that change the point? Even if not in earnest, he was able to deflect suspicion wearing a cloak of EA. That, in and of itself, is a problem. Nothing is above reproach, no matter how pious or nerdy.
Is it just me or do a whole lot of EA people just love to keep talking but never do anything actually altruistic?
I mean I get the idea of trying to optimize the value of charitable donations. But donating something is infinitely better than chatting all day and donating nothing.
Even on a small scale, the number of times I've seen non-EA people do something nice is way, way more than the number of times I've seen an EA person come out of their "circle" and do something nice.
The argument is akin to saying bank robbery is a feature, not a bug, when the thief donates his or her take to charities or soup kitchens or what have you. It's a ridiculous high-school-level argument.
His actions made perfect sense from his utilitarian Effective Altruist worldview. He was stealing from rich people, and giving the money to what he saw as "worthy causes."
He was pretty open even before the collapse about how he decided to get as rich as possible, as fast as possible, at all costs, as long as (in his own moral calculus) the net benefits were positive.
EAs are arguably a 'cult' obsessed with AI risk, which they mostly believe will end the world in the next few years. So to them, that pretty much justifies anything that could help mitigate that risk. He would see it as immoral not to become a criminal in order to fund AI risk research.
Personally, I think these AI risk concerns are legitimate, but I don't agree with these methods.
> I'm not sure how anyone could argue that what SBF was doing fits in any way with that. He's just been found guilty of fraud on multiple counts,
EA strikes me as the same sort of ambiguous slime as "breast cancer awareness."
You read the words and think hey, that sounds right. Benefit others as efficiently as possible. Guy's a Robin Hood type, stealing from the rich to benefit others. Good for you, buddy. Godspeed.
Except you look at what he's actually doing and see he's not stealing from the rich and giving to the poor. He's stealing from the foolish and giving to "others," who turn out to be his friends and associates. $500m to Anthropic? $5b for Twitter? This shit isn't charity.
It's kleptocracy masquerading as charity. I can't see his charitable causes as anything more than an ephemeral funds-parking scheme storing funds in a chain of IOUs.
This leads into a whole "theory vs practice" argument that shows up whenever people start talking about communism. If anybody doing Effective Altruism in the real world fails, we are told that they were not doing EA correctly, and it's simply unfortunate that nobody's done it correctly yet. Thus the movement itself can never be discredited by mere experimental evidence.
Most EA adherents focus on process, not outcomes. So from an EA position, the fraud outcome doesn't matter. Imagine if SBF's gambles had paid off, they might say; that potential outcome must be probabilistically weighed against the negative one. Since most EAs view crypto as a sin industry anyway, the negative impact is minor relative to the positive impact of SBF's political and charitable donations, so even a slim chance of success should be taken.
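The weighing described above is just an expected-value calculation. A minimal sketch, with entirely made-up probabilities and utilities:

```python
# Illustrative only: the kind of probabilistic weighing described above,
# with invented numbers, not anyone's actual estimates.

p_success = 0.05                 # "even a slim chance of success"
benefit_if_success = 10_000      # utility of the donations if the gambles pay off
harm_if_failure = -200           # "minor" negative impact of a sin industry blowing up

ev = p_success * benefit_if_success + (1 - p_success) * harm_if_failure
print(ev)  # 310.0 -> positive, so this calculus says "take the gamble"
```

Note how the conclusion hinges entirely on the assumed utilities: make `harm_if_failure` large enough to include defrauded depositors and reputational damage to philanthropy, and the same arithmetic says the opposite.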
However EA logic is wrong because utilitarianism is wrong. It doesn’t matter whether stealing and fraud create good outcomes or not. Theft and fraud are evil in themselves irrespective of outcomes. To put it in an extreme sense, even if it would save the planet you should still not steal nor defraud others. The fact that some actions are in essence evil is enshrined in the legal system, is commonly accepted, and philosophically sound.
> On November 11, FTX fell apart and was revealed as a giant scam. Suddenly everyone hated effective altruists. Publications that had been feting us a few months before pivoted to saying they knew we were evil all along. I practiced rehearsing the words “I have never donated to charity, and if I did, I certainly wouldn’t care whether it was effective or not”.

From Scott Alexander's post on why, as an EA, he donated a kidney: https://www.astralcodexten.com/p/my-left-kidney
As someone who creates data and analyses that get used in setting policy, I do find a lot of EA spreadsheet analysis of measured "good" to be very naive about the nature of measurement and classification.
That being said, I think this piece is a bit of an overreaction, and there seem to be many earnest actors in the EA community really thinking about how they can do good in the world. SBF is very unfortunate for EA, but to jump from his example to saying all EA practitioners care exclusively about the ends over the means is a bit of a leap, imo.
It's just a bunch of privileged armchair humanitarians who have never left the confines of their fancy circles, let alone been confronted with the things they're trying to fix. They think they can fix issues better than NGOs that have had boots on the ground for decades, just because they know Python and Excel, as if people actually working on humanitarian causes were benevolent r**ards. Of course, it allows for great intellectual masturbation and self-congratulation, as if fixing complex social/ecological issues were just about "cracking a problem" and presenting a neat 12-page PPT presentation before moving on to the next problem.
If any of these people actually walked the talk, we'd see a lot more one-way tickets to Africa for them to finally be able to employ their beautiful minds on real problems.
For someone outside the space (like me), what’s the big innovation of Effective Altruism? I assume when the rubber hits the road, most people doing big donations have people to look at the effectiveness of that donation.
I guess I’m just suspicious of any community or movement that labels itself as “effective,” because it is hard to believe that they were the first ones to think of the idea of not being ineffective, haha.
This article makes a fundamental mistake that many who have written about EA make - by treating the philosophical and real-world application of EA as the same thing. EA is such a new philosophy and movement that the philosophy and application of EA are not sufficiently divorced from one another, and the people at the core of "philosophy EA" are also involved in "application EA". So this is an easy mistake to make.
There are people in rooms discussing whether "the ends justify the means" (though I don't think anyone is seriously arguing in favor of SBF-type means). BUT THESE ARE PHILOSOPHICAL DISCUSSIONS.
If you asked 1,000 effective altruists whether what SBF did was acceptable (or gave them a hypothetical ends-justify-the-means scenario at 10% of the severity of SBF's), I would wager that zero would say it was acceptable. SBF used EA as a shield to hide his fraudulent behavior, and EA (both the philosophy and application sides) has taken a hard look at what EA argues for; those who think that EA (even philosophy EA) would approve of SBF's behavior do not understand EA at all.
---
I study EA and so I am loosely connected to the movement, but I don't consider myself an effective altruist.
This article misrepresents what EA is about, and unfairly links SBF's criminal behavior to that philosophy.
SBF is a numerically oriented crook.
EA is about attempting to measure and compare different philanthropic approaches in order to optimize where we spend our money, effort and time to benefit humanity. The author incorrectly implies that EA isn't concerned with ethics, or that EA will justify any means to achieve some perceived benefit - but this is the opposite of true. Ethical and moral behavior are required by EA, and in fact are an important part of the utility measured for some philanthropic activity. That is, ethics and morals are worthy goals (or aspects of worthy goals) for EA in and of themselves.
This is some grade A No True Scotsmaning.

Sam Bankman-Fried was about as high-profile an EA as ever existed, with his personal wealth counted as the bulk of the movement's finances, his FTX Future Fund employing both Nick Beckstead and his old friend William MacAskill, and his political action committee throwing money around Washington to promote crypto and longtermism.
MacAskill himself is probably the most famous EA of them all and was in lockstep with SBF for years: dismissing claims of unethical behavior, vouching for him, hooking him up with other rich people like Elon Musk, cashing his checks for the charities he controlled, and of course enjoying the finer things in life that FTX could buy, things neither of these famously ascetic utilitarians could ever imagine buying for themselves.
When Oppenheimer witnessed the first explosion of a nuclear weapon, he quelled his ethical reservations over the destructive power of his creation with a verse from the Bhagavad Gita [1], often mistranslated as the deity stating "I am death, destroyer of worlds", but more accurately: "I am time, and I will destroy these people with or without your involvement".
Had the scientists of the Manhattan project (Oppenheimer, Fermi, Szilard, etc) subscribed to the EA philosophy, they would have been unlikely to work on nuclear weapons development, and millions more would have likely perished in a land invasion of Japan. However, millions of Southeast Asians and South Americans did perish in the subsequent "proxy wars" of the Cold War era, so you can make a convincing historical "what if" either way.
Effective altruism is not a very useful philosophy if you don't actually know what is best for humanity. Oppenheimer's philosophy (the Gita philosophy) was to simply do his job without being attached to the outcome.

1. https://www.holy-bhagavad-gita.org/chapter/11/verse/32
That's incorrect. The translation quoted by Oppenheimer is actually more accurate than yours. The other two, more popular translations are:

"The Supreme Lord said: I am mighty Time, the source of destruction that comes forth to annihilate the worlds. Even without your participation, the warriors arrayed in the opposing army shall cease to exist." [0]

"Bhagavān Śrī Kṛṣṇa said: Time I am, the mighty destroyer of worlds, and I come to vanquish all living beings. Even without your participation, all the warriors on the opposite side of the battlefield will be killed." [1]

[0] - https://www.holy-bhagavad-gita.org/chapter/11/verse/32
[1] - https://asitis.com/11/32.html
I understand the point you're making, but the statement that
> they would have been unlikely to work on nuclear weapons development, and millions more would have likely perished in a land invasion of Japan
despite being constantly repeated, is not reflected by contemporary documents and later historical analysis of decision making among Pentagon and White House officials.
The threat of an impending land invasion was not a consideration at the time when it was decided to attack Japanese civilian centres with nuclear weapons. The primary factor in the decision for their use had far more to do with the risk of Stalin joining the fight on the eastern front and thus securing a claim to territory following the inevitable Axis surrender, as well as a desire for US power projection from the demonstration of an atomic weapon in war. The primary delay in Japanese surrender was the question of the fate of Emperor Hirohito, whom the US ended up protecting anyway.
This essay is a mess. I won't flag it, but I doubt with such poor definitions it'll make much of a useful conversation on HN.
I counted four topics in the first few paragraphs that the author defined in a poor, self-serving way. Any one of these topics and associated definitions would be interesting to talk about. Put them all together and it's just too much to clean up (for folks taking any kind of issue at all with the thesis or conclusion.)
It was well-structured and cogent, though. Kudos to the author for that. That puts them well above other essays of this type.
One of the problems with effective altruism (and consequentialism more generally) is that it's quite hard to look at SBF and say definitively that his actions had net negative consequences.
Maybe his donations saved lives. Maybe Anthropic (which he famously funded) will save the world. Maybe by discrediting EA, SBF saved the world from EA fanatics. You could enumerate hypotheticals like this forever, positive and negative. It's for this reason that we have to rely on intuitive moral feelings; otherwise there's no way to confidently say that anything is good or bad.
That said, I view EA as a call to think more carefully and analytically about our actions and how they affect the world. There's certainly nothing wrong with that as long as it's not taken to bizarre extremes.
This is mentioned in passing in TFA but the fundamental problem of EA, and all charity in general, is that it ignores and often even perpetuates the societal structures that create most of the problems that charity tries to patch up.
Of course if it would try to address the structural problems, it wouldn't be charity but politics. And politics are bad because it could change the structure.
I disagree with this. Traditional philanthropy does ignore systemic/societal problems; for many reasons, it is simply unable to address them.
EA tries to look at the bigger picture of effectiveness, and many within EA do believe that political solutions are a good use of resources. For example, many of the new charities created by Charity Entrepreneurship spend their time lobbying governments. Relative to traditional philanthropy, I think EA has a real shot at the systemic changes necessary to make real change.
The biggest logical flaw of effective altruism is valuing potential future lives the same as the lives of existing people. Taken to the extreme, that logic implies it would be okay to kill one person in exchange for two additional people being born.
It's way easier to point at SBF as a fraudster within the larger group of crypto fraudsters than a fraudster within the larger group of effective altruist fraudsters.
In the beginning, SBF probably believed in EA. It helped him recruit the executives of Alameda Research (AR) and FTX.
As FTX experienced unprecedented growth to a fantastical scale, SBF was at the center of it. I strongly suspect he felt deified by it, felt that the market was giving him unqualified approval for his every thought and method.
Somewhere in his ascendancy, I suspect that EA became merely a vocabulary of stock responses that he used to explain his decisions and to frame his public image.
The immorality began when he chose to ignore his fiduciary duty to his depositors, and instead used their funds as if they were VC money available to fund his ideas. The immorality continued when he gave false financial statements to the AR lenders. It culminated when he tweeted "everything is fine" when the withdrawal rush began.
Was he using EA theory to justify these unethical choices? Caroline Ellison thought he was but that was because she was in thrall to his personality.
I would be immensely surprised if EA goals ever crossed his mind when he made these decisions. I suspect he was in empire building mode aiming to enter the pantheon of SV tech titans.
The WSJ had a chart of "where did the money go" showing that only a minuscule slice of the $16B was donated to philanthropic organizations. It was less than $100M.
You are correct that EA has been unmasked as a philosophy unburdened by ethics. However, my view is that SBF only used EA as a convenient label for his motives, when his goals were consolidating his power.
I think this article assigns a lot of blame to Effective Altruism that really belongs in classic narcissism and power tripping.
SBF wasn’t even an idealist’s version of an effective altruist, he basically lied and told everyone that he was one, probably in a vain attempt to explain where all the money went.
That’s not to say that EA doesn’t deserve its own criticism, but SBF was only pretending to be one on TV.
> Michael Lewis, along with a cadre of others, have astonishingly aligned themselves with the EA bamboozlers, steadfastly standing by their erstwhile idol
I read his book and some interviews and this is hyperbole. And poor use of “erstwhile”.
Yeah, the way I read it, Lewis comes down on EA fairly hard. Rather, his conclusion boils down to seeing EA and SBF's actions as misguided/naive, not intentionally fraudulent (which I disagree with).
Stated without argument: "EA, in its cold calculus, can justify the unjustifiable in pursuit of an ill-defined 'greater good.'" I'd love to hear an actual argument for that. What sort of cold calculus can justify the unjustifiable? Isn't that a contradiction in terms?
I'd love to hear an actual argument for it. I don't want to think that Joan Westenberg (whoever that is) is a purveyor of twisted words.
(There are more examples in the article, I picked one because I like examples.)
EA’s primary philosophical foundation is utilitarianism. Utilitarianism in its standard form only cares about outcomes that maximize global utility for humanity. It does not accept any deontological ethics.
Utilitarianism is evil because it allows evil actions to be taken as long as that maximizes utility, thus justifying the unjustifiable. Utilitarianism is wrong because it is still evil to commit fraud even to save human lives. However utilitarianism is popular for people with power, because they can use it to justify their evil actions as for the greater good. Even if they are sincere, their actions are still evil; even if SBF succeeded instead of failing, and sincerely wanted to stop climate change, he would still and rightfully be a criminal.
At what point was the “feature not bug” argument defended? I missed it on first read and can’t be bothered to spin through again. I’m all for the condemnation of fad ideology on the basis of strong arguments, and SBF fucked up, and EA seems dubious. But this article seems like it’s not achieving anything.
[+] [-] monadINtop|2 years ago|reply
> they would have been unlikely to work on nuclear weapons development, and millions more would have likely perished in a land invasion of Japan
despite being constantly repeated, is not reflected by contemporary documents and later historical analysis of decision making among Pentagon and White House officials.
The threat of an impending land invasion was not a consideration at the time when it was decided to attack Japanese civilian centres with nuclear weapons. The primary factor in the decision for their use had far more to do with the risk of Stalin joining the fight on the eastern front and thus securing a claim for territory following the inevitable axis surrender, as well as a desire for US power projection from the demonstration of an atomic weapon in War. The primary delay in Japanese surrender was the question of the fate of Empreror Hirohito, who the US ended up protecting anyway.
[+] [-] DanielBMarkham|2 years ago|reply
I counted four topics in the first few paragraphs that the author defined in a poor, self-serving way. Any one of these topics and associated definitions would be interesting to talk about. Put them all together and it's just too much to clean up (for folks taking any kind of issue at all with the thesis or conclusion.)
It was well-structured and cogent, though. Kudos to the author for that. That puts them well above other essays of this type.
[+] [-] slibhb|2 years ago|reply
Maybe his donations saved lives. Maybe Anthropic (which he famously funded) will save the world. Maybe by discrediting EA, SBF saved the world from EA fanatics. You could enumerate hypotheticals like this forever, positive and negative. It's for this reason that we have to rely on intuitive moral feelings, or there's no way to confidently say that anything is good or bad.
That said, I view EA as a call to think more carefully and analytically about our actions and how they affect the world. There's certainly nothing wrong with that as long as it's not taken to bizarre extremes.
[+] [-] jampekka|2 years ago|reply
Of course, if it tried to address the structural problems it wouldn't be charity, it would be politics. And politics is bad, because it might actually change the structure.
[+] [-] Kevin_S|2 years ago|reply
EA tries to look at the bigger picture of effectiveness, and many within EA do believe that political solutions are a good use of resources. For example, many of the new charities created by Charity Entrepreneurship spend their time lobbying governments. Relative to traditional philanthropy, I think EA has a real shot at the systemic changes necessary to make real change.
[+] [-] tickerticker|2 years ago|reply
As FTX experienced unprecedented growth to a fantastical scale, SBF was at the center of it. I strongly suspect he felt deified by it, that the market was giving him unqualified approval for his every thought and method.
Somewhere in his ascendancy, I suspect that EA became merely a vocabulary of stock responses that he used to explain his decisions and to frame his public image.
The immorality began when he chose to ignore his fiduciary duty to his depositors and instead used their funds as if they were VC money available to fund his ideas. It continued when he gave false financial statements to Alameda Research's lenders. It culminated when he tweeted that "everything is fine" as the withdrawal rush began.
Was he using EA theory to justify these unethical choices? Caroline Ellison thought he was but that was because she was in thrall to his personality.
I would be immensely surprised if EA goals ever crossed his mind when he made these decisions. I suspect he was in empire building mode aiming to enter the pantheon of SV tech titans.
The WSJ ran a "where did the money go" chart showing that only a minuscule slice of the $16B was donated to philanthropic organizations: less than $100M.
You are correct that EA has been unmasked as a philosophy unburdened by ethics. However, my view is that SBF only used EA as a convenient label for his motives, when his goals were consolidating his power.
[+] [-] dangus|2 years ago|reply
SBF wasn’t even an idealist’s version of an effective altruist; he basically lied and told everyone he was one, probably in a vain attempt to explain where all the money went.
That’s not to say that EA doesn’t deserve its own criticism, but SBF was only pretending to be one on TV.
[+] [-] eterevsky|2 years ago|reply
So his actions are in no way consistent with EA, and shouldn't be considered indicative of it.
[+] [-] cromulent|2 years ago|reply
I read his book and some interviews and this is hyperbole. And poor use of “erstwhile”.
[+] [-] furyofantares|2 years ago|reply
https://thezvi.wordpress.com/2023/10/24/book-review-going-in...
[+] [-] Arnt|2 years ago|reply
I'd love to hear an actual argument for it. I don't want to think that Joan Westenberg (whoever that is) is a purveyor of twisted words.
(There are more examples in the article, I picked one because I like examples.)
[+] [-] injeolmi_love|2 years ago|reply
Utilitarianism is evil because it permits evil actions whenever they maximize utility, thereby justifying the unjustifiable. It is wrong because committing fraud remains evil even when it saves human lives. Yet utilitarianism is popular with people in power, because they can use it to cast their evil actions as serving the greater good. Even if they are sincere, their actions are still evil; even if SBF had succeeded instead of failing, and sincerely wanted to stop climate change, he would still, and rightfully, be a criminal.
[+] [-] ChrisMarshallNY|2 years ago|reply
If you watch a herd of gazelles get one of their own munched by a lion, they get the hell out of there.
It seems as if these gazelles are saying, "Well, she's had her fill with poor old Sam over there, so we're all right to keep eating this sweet grass."
[+] [-] gnarlouse|2 years ago|reply
But this article seems like it’s not achieving anything