item 38363013

chronofar | 2 years ago

Not really at all. If we were to try and reduce EA to a pithy one-liner, it would be more like: "if one aspires to do good (altruism), one should use evidence-based data to determine how they can do the most good with their limited resources (effective)."


skohan | 2 years ago

I think that's one interpretation, but the perverse interpretation is: "If I am an effective altruist, and I will use my resources to produce the most possible good, then it's a moral imperative to acquire as much money and power as I can, to maximize the results I can achieve"

lesuorac | 2 years ago

That only works if the money and power you acquire are enough to pay somebody else to do the work instead of you.

If you don't earn enough to sustain yourself and to fund others to work on a cause more effectively than you could yourself, then it doesn't make sense. Basically, taking a standard $95k non-FAANG SWE job isn't better than taking a $50k SWE job at, say, the Red Cross.

Also, if you can get some $10M job at OpenAI but you are also certain you could use those same skills to independently develop some drug that will save, say, ~12M lives/yr, then you shouldn't go to OpenAI.

But there's also no true Scotsman.

chronofar | 2 years ago

Well, I don't think that's really an interpretation so much as a strategy certain people may take, and, as you're alluding to, a very fallible one at that. It certainly could be a reasonable strategy to amass wealth and power and wield them for altruistic ends; such a person does indeed stand to be far more effective than most. But of course people are quite corruptible and intentions can be fuzzy, so there's no guarantee that person will follow through, or that whatever they had to do to amass that money/power was worth it for the ultimate good.

Again, one can look at the underlying tenets of such a creed and evaluate them on their conceptual merit without blasting the entire enterprise because some people abuse it for personal gain.

mejutoco | 2 years ago

And what evidence can one really collect of how to do the most good? That involves predicting the future.

I prefer the doctor's approach: "first, do no harm".

"The end justifies the means" is a useful concept that has been talked about many times. I would not dismiss it. I do not think it is reducing EA. I think it is communicating it with clarity, without confusing additions. It is its essence.

Someone claims (let's say, to simplify, that they are not lying) to want to maximize their good in the world; for that to be accomplished, they need to do something that somebody does not approve of (caring only about money, being rude to you, whatever it is). That is literally "the end justifies the means", where the end is "doing good in the world according to model x".

More often than not, that is just a way to do something despicable under the guise of future good.

> Evidence-based data to determine how they can do the most good

This is a model. If the model is wrong, you might cause more harm than good. So, if EA is to work honestly, this model needs some proof that it works. Otherwise EA is only successful at becoming popular as an idea, not at accomplishing its stated objectives.

Does earning a lot of money and donating it to charities create more good? I am not sure. Maybe earning less money and caring for people in your everyday life would create more good.

chronofar | 2 years ago

> And what evidence can one really collect of how to do the most good?

This is indeed a central question for EA, one that various proponents attempt to answer in various ways.

> Someone claims (let's say to simplify they are not lying) to want to maximize their good in the world, for that to be accomplished they need to do something that somebody does not approve of (only caring about money, being rude to you, whatever it is).

Your assumption here appears to be that anyone subscribing to an evidence-based approach to doing the most good (i.e. EA) must also inherently subscribe to "the ends justify the means." These aren't inextricably linked; it's quite possible to have one without the other. One can quite rationally seek to maximize their altruistic effectiveness without sacrificing general decency in their day-to-day life. Morals usually have some nuance.

> This is a model. If the model is wrong you might cause more harm than good.

I'm not sure what your point is. This is true of almost any practical application of a moral framework. EA is more about providing a methodology than providing answers.