The tyranny of the marginal user reminds me of population ethics' Repugnant Conclusion.[0] It's a consequence of total utilitarianism: if you have N people each with 10 happiness, then it would be better to have 10N people with 1.1 happiness each, or 100N people with 0.111 happiness each, and so on until you have an enormous population with barely any happiness per person. Substitute profit for happiness, and you get the tyranny of the marginal user. Perhaps the resolutions to the Repugnant Conclusion (Section 2, "Eight Ways of Dealing with the Repugnant Conclusion") can also be applied to the tyranny of the marginal user. Though to be honest, I find none of the resolutions wholly compelling.
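A minimal sketch of the arithmetic in that comparison (the numbers come from the comment above; "happiness" units are arbitrary):

```python
N = 10  # any base population size; 10 is an arbitrary choice

worlds = [
    (N, 10.0),         # N people, 10 happiness each
    (10 * N, 1.1),     # 10N people, 1.1 happiness each
    (100 * N, 0.111),  # 100N people, 0.111 happiness each
]

for people, per_capita in worlds:
    # Total utility keeps rising (100 -> 110 -> 111, up to float
    # rounding) even as per-capita happiness collapses -- that is
    # the Repugnant Conclusion in miniature.
    print(people, per_capita, people * per_capita)
```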
[0] https://plato.stanford.edu/ARCHIVES/WIN2009/entries/repugnan...
feoren|2 years ago
First of all, you're imagining dividing happiness among more people, but imagining them all with the same amount of suffering. You're picturing a drudging life where people work all day and have barely any source of happiness. But if you can magically divide up some total amount of happiness, why not the same with suffering? This is the entire source of the word "repugnant", because it sounds like you get infinite suffering with finite happiness. That does not follow from anything utilitarianism stipulates; you've simply created an awful world and falsely called it utilitarianism. Try to imagine all these people living a nearly completely neutral life, erring a bit on the happier side, and it suddenly doesn't sound so bad.
Secondly, you're ignoring the fact that people can create happiness for others. What fixed finite "happiness" resource are we divvying up here? Surely a world with 10 billion people has more great works of art for all to enjoy than a world with 10 people, not to mention far less loneliness. It's crazy to think the total amount of happiness to distribute is independent of the world population.
There are many more reasonable objections to even the existence of that so-called "conclusion" without even starting on the many ways of dealing with it.
galaxyLogic|2 years ago
When there are more immigrants who are allowed to work, the immigrants will make some money for themselves. What do they do with that money? They spend it, which grows the economy. Our economy, not some other country's economy.
If you were the only living person on this planet you would be in trouble. Thank God for other people being there too.
shadowgovt|2 years ago
If you have a sure-fire way to get half the people on the planet to give you $1, you can afford a yacht. Even if it means the tool you make for them only induces them to ever give you that $1 and not more... Why do you care? You have a yacht now. You can contemplate whether you should have made them something more useful from the relative safety and comfort of your yacht.
skybrian|2 years ago
But I think that utilitarianism has a vague but somewhat related problem: it treats "utility" as a one-dimensional quantity that you can add up. There are times when adding things together and doing comparisons makes a kind of sense, but it's an abstraction. Nothing says you ought to quantify and add things up in a particular way, and utilitarianism doesn't provide a way of resolving disputes about quantifying and adding. Not that it really tries, because it's ultimately a metaphor about doing math, which isn't the same thing as doing math.[1]
[1] https://meaningness.com/no-cosmic-meaning
pdonis|2 years ago
Except that it does according to many utilitarians. That's why it has been a topic of discussion for so long.
> you're imagining dividing happiness among more people, but imagining them all with the same amount of suffering
No. "Utility" includes both positive (happiness) and negative (suffering) contributions. The "utility" numbers that are quoted in the argument are the net utility numbers after all happiness and all suffering have been included.
> You're picturing a drudging life where people work all day and have barely any source of happiness.
Or a life with a lot of happiness but also a lot of suffering, so the net utility is close to zero, because the suffering almost cancels out the happiness. (This is one of the key areas where many if not most people's moral intuitions, including mine, do not match up with utilitarianism: happiness and suffering aren't mere numbers and you can't just blithely have them cancel each other that way.)
> if you can magically divide up some total amount of happiness, why not the same with suffering?
Nothing in the argument contradicts this. The argument is not assuming a specific scenario; it is considering all possible scenarios and finding comparisons between them that follow from utilitarianism, but do not match up with most people's moral intuitions. It is no answer to the argument to point out that there are other comparisons that don't suffer from this problem; utilitarianism claims to be a universal theory of morality and ethics, so if any possible scenario is a problem for it, then it has a problem.
> you're ignoring the fact that people can create happiness for others
But "can" isn't the same as "will". The repugnant conclusion takes into account the possibility that adding more people might not have this consequence. The whole point is that utilitarianism (or more precisely the Total Utility version of utilitarianism, which is the most common version) says that a world with more people is better even if the happiness per person goes down, possibly way down (depending on how many more people you add), which is not what most people's moral intuitions say.
> It's crazy to think the total amount of happiness to distribute is independent of the world population.
The argument never makes this assumption. You are attacking a straw man. Indeed, in the comparisons cited in the argument, the worlds with more people have more total happiness--just less happiness per person.
julianeon|2 years ago
The current world population is about 8 billion.
By this argument, and also by your argument, it should actually be 999 billion. Or a number even higher than that.
The conclusion boils down to:
1. Find maximum population number earth can support.
2. Hit that number.
I do think that, when put this way, it seems simplistic.
tyre|2 years ago
Utilitarianism ought to be about maximizing the happiness (total and distribution) of an existing population. Merging it with natalism isn't realistic or meaningful, so we end up with these population morality debates. The happiness of an unconceived possible human is null (not the same as zero!)
Compare to Rawls’s Original Position, which uses an unborn person to make the hypothetical work but is ultimately about optimizing for happiness in an existing population.
We really shouldn’t get ourselves tied into knots about the possibility of pumping out trillions of humans because an algorithm says they’ll be marginally net content. That’s not the end goal of any reasonable, practical, or sane system of ethics.
chongli|2 years ago
Here I am replying to you 3 hours later. In the meantime, close to 20,000 people have died around the world [1]. Thousands more have been born. So if we're to move outside the realm of academics, as you put it, we force ourselves to contend with the fact that there is no "existing population" to maximize happiness for. The population is perhaps better thought of as a river of people, always flowing out to sea.
The Repugnant Conclusion is relevant, perhaps now more than at any time in the past, because we've begun to grasp -- scientifically, if not politically -- the finitude of earth's resources. By continuing the way we are, toward ever-increasing consumption of resources and ever-growing inequality, we are racing towards humanitarian disasters the likes of which have never been seen before.
[1] https://www.medindia.net/patients/calculators/world-death-cl...
dragonwriter|2 years ago
That's a somewhat-similar alternative to utilitarianism, which has its own kind of repugnant conclusions, in part as a result of the same flawed premises: that utility experienced by different people is a quantity with common objective units that can be meaningfully summed, and that, given that, morality is defined by maximizing that sum across some universe of analysis. It differs from by-the-book utilitarianism in changing the universe of analysis, which changes the precise problems the flawed premises produce, but doesn't really solve anything fundamentally.
> Compare to Rawls’s Original Position, which uses an unborn person to make the hypothetical work but is ultimately about optimizing for happiness in an existing population.
No, it's not; the Original Position neither deals with a fixed existing population nor is about optimizing for happiness in the summed-utility sense. It's more about optimizing the risk-adjusted distribution of the opportunity for happiness.
salawat|2 years ago
Are you sure you aren't sharing the world with people who do not adhere to a reasonable, practical, or sane system of ethics?
Because, ngl, lately, I'm not so sure I can offer an affirmative on that one, making "Getting tied into knots about the possibility of pumping out trillions of humans because an algorithm says they’ll be marginally net content" a reasonable thing to be trying to cut a la the Gordian knot.
After all, that very thing, "pump out trillions of humans because some algorithm (genetics, instincts, & culture taken collectively) says they'll be marginally more content" is modus operandi for humanity, with shockingly little appreciation for the externalities therein involved.
caturopath|2 years ago
> Utilitarianism ought to be about maximizing the happiness (total and distribution) of an existing population
For those who accept your claim above, lots of stuff follows, but your claim is a bold assertion that isn't accepted by everyone involved, or even many people involved.
The repugnant conclusion is a thought experiment where one starts with certain stripped-down claims not including yours here and follow it to its logical conclusion. This is worth doing because many people find it plausible that those axioms define a good ethical system, but the fact they require the repugnant conclusion causes people to say "Something in here seems to be wrong or incomplete." People have proposed many alternate axioms, and your take is just one which isn't popular.
I suspect part of the reason yours isn't popular is:
- People seek axiological answers from their ethical systems: they want to be able to answer "Which of these two unlike worlds is better?" even when they aren't asking "What action should I take?" Many people want to know what is better, period, and they want such questions to always be answerable. Some folks have explored a concept along the lines of yours, where sometimes there just isn't a comparison available, but giving up on being able to compare every pair isn't popular.
- We actually make decisions or imagine the ability to make future real decisions that result in there being more or fewer persons. Is it right to have kids? Is it right to subsidize childbearing? Is it right to attempt to make a ton of virtual persons?
> The happiness of a unconceived possible human is null (not the same as zero!)
Okay, if you say "Total utilitarianism (and all similar things) are wrong", then of course you don't reach the repugnant conclusion via Parfit's argument. "A, B, C implies D", "Well, not B" is not a very interesting argument here.
Positing null also doesn't really answer how we _should_ handle questions of what to do that result in persons being created or destroyed.
> We really shouldn’t get ourselves tied into knots about the possibility of pumping out trillions of humans because an algorithm says they’ll be marginally net content. That’s not the end goal of any reasonable, practical, or sane system of ethics.
Okay, what is the end goal? If you'll enlighten us, then we can all know.
Until then, folks are going to keep trying to figure it out. Parfit explored a system that many people might have thought sounded good on its premises, but proved it led to the repugnant conclusion. The normal reaction is, "Okay, that wasn't the right recipe. Let's keep looking. I want to find a better recipe so I know what to do in hard, real cases." Since such folks rejected the ethical system because it led to the repugnant conclusion, they could be less confident in its prescriptions in more practical situations -- they know that the premises of the system don't reflect what they want to adopt as their ethical system.
coldtea|2 years ago
Not even for academics. It's something for "rational"-bros.
tasty_freeze|2 years ago
All of his work with effective altruism is aimed at reducing suffering of those worst off in the world and spends no time with how to make the well off even happier.
onlyrealcuzzo|2 years ago
There's many ways you can interpret that, though.
But I think if you say: before, we had 1 apple per person, and now we have 2x as many apples, but they're all owned by one person - it's hard to argue that's utilitarian.
If before you had 100 apples, and everyone who wanted one had one, and now you have 10,000 apples distributed to people at random, but only 1 in 100 people who wants one has one - that also seems hard to argue as utilitarian.
Businesses are value maximization functions. They'll only be utilitarian if that happens to maximize value.
In the case of software - if you go from 1m users to 10m users - that doesn't imply utilitarianism. It implies that was good for gaming some metric - which more often than not these days is growth, not profit.
jancsika|2 years ago
I much prefer, "I'd rather have a bottle in front of me than a frontal lobotomy." At least in that case nobody will confuse a trucker hat slogan for a viable system of ethics.
hammock|2 years ago
https://medium.com/incerto/the-most-intolerant-wins-the-dict...
crabbone|2 years ago
Utilitarianism raises a lot of questions about the comparability of utility (or happiness) between different people, as well as about summation. Is it a totally ordered set? Is it a partially ordered set? Perhaps utility is incomparable (that'd be sad and kind of defeat the whole doctrine, but still).
Also, can unhappiness be compensated by happiness? We unthinkingly rush to treat unhappiness as we would negative numbers and try to sum that with happiness, but what if it doesn't work? What if the person who has no happiness or unhappiness isn't in the same place as the person who is equally happy and unhappy (their dog died, but they found a million $ on the same day)?
A more typical classroom question would be about chopping up a healthy person for organs to fix X unhealthy people -- is there a number of unhealthy people which would justify killing a healthy person for spare parts?
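The ordering question above can be made concrete with a tiny sketch (my framing, not the commenter's): compare two lives either by net utility, which makes everything comparable, or Pareto-style, where one life is better only if it is both happier and less unhappy, and otherwise the two are simply incomparable.

```python
def net(life):
    """Collapse (happiness, suffering) to a single net number."""
    happiness, suffering = life
    return happiness - suffering

def pareto_better(a, b):
    """True iff life a dominates life b on both axes."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b

neutral = (0, 0)                 # nothing good, nothing bad happened
mixed = (1_000_000, 1_000_000)   # found $1M and the dog died, same day

print(net(neutral) == net(mixed))     # True: the net view calls them equal
print(pareto_better(neutral, mixed))  # False
print(pareto_better(mixed, neutral))  # False: incomparable under Pareto
```

Under the net view the two lives are interchangeable; under the partial order, neither dominates, which matches the intuition that they aren't "in the same place."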
burnished|2 years ago
Your second sentence rankles the hell out of me. You're only able to make that snap judgement because of your exposure to academic philosophy (where do you think the example that demonstrates the problem so clearly comes from?), but you're completely unaware of that.
The bullshitters aren't puzzling at seemingly simple things, they're writing content free fluff.
patmcc|2 years ago
It's actually a hard problem to design a perfect moral system; that's why people have been trying for literally thousands of years.
RugnirViking|2 years ago
You may feel so certain that they're just too wrapped up in their nonsense to see what you see. But at the very least you will have to learn it the way they learned it if you want to communicate with them effectively, articulate what you think is wrong, and convince people. And in doing so you'll likely realize that, far from being unquestioned truth, every conclusion in the field is subject to vigorous debate, and thousands of pages of criticisms and rebuttals exist for any conclusion you care about. And for the field to have grown big enough that you, a person hearing about it from outside, know of it at all, there must at least be something interesting and worth examining going on there.
For a prime example, see all the retired engineers who decide that because they can't read a paper on quantum physics with their calculus background, the physicists must be overcomplicating it, and bombard them constantly with mail about their own crackpot theories. You don't want to be that person.
wilg|2 years ago
I don't see how you couldn't value other people existing – I think they have just as much of a right to experience the universe as I do.
saint_fiasco|2 years ago
If you don't like the repugnant conclusion, you have to change something in the premises so that it no longer follows. Arguing against it and calling your refutation obvious doesn't do anything.
oatmeal1|2 years ago
Second, what's "obvious to everyone" is just based on how you're phrasing the question. If you suggested to people it would be better if the population were just one deliriously happy person with N=50, vs 5 happy people with N=10.1, people would say obviously it would be better to spread the wealth and increase overall happiness.
scythe|2 years ago
Not so for B2C SaaS. The utilities are always measured in dollars and they always aggregate by simple addition. You can't simply redefine the problem away by changing the economic assumptions, because they exist in physical space and not in theory space.
wilg|2 years ago
Like it seems like you have to take "worth living" seriously, since that is the element that is doing all the work. If it's worth living, you've factored in everything that matters already.
mercenario|2 years ago
1) Population isn't infinite; you can't continue this for long.
2) Your assumption depends entirely on how costly it is to add +1 happiness versus +1 user, which you don't even mention. And these costs are not fixed; they increase. So even if it is cheaper to add +1 user in the beginning, it will not stay cheaper indefinitely.
So, nothing is preventing you from increasing happiness at the same time you increase users.
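A toy model of that cost argument, with made-up numbers purely for illustration: if the marginal cost of acquiring the next user rises with the user count while the cost of raising per-user happiness stays flat, the cheaper move eventually flips.

```python
def cost_of_next_user(n_users):
    # Assumed rising acquisition cost: each 100 users adds $1 of cost.
    return 1.0 + n_users / 100

COST_OF_PLUS_ONE_HAPPINESS = 5.0  # assumed flat cost, for illustration

# First user count at which improving happiness becomes the cheaper move.
crossover = next(n for n in range(10_000)
                 if cost_of_next_user(n) > COST_OF_PLUS_ONE_HAPPINESS)
print(crossover)  # -> 401 with these assumed numbers
```

The specific functions and constants here are invented; the point is only that with any increasing acquisition-cost curve, a crossover exists.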
didibus|2 years ago
This is perfect, because now they are all equally incentivized to do something about it. They're motivated to work together and collaborate for change.
If you do any other split, where some people are very happy and others very unhappy, you've created a certain category of people who are incentivized to maintain the current system and repress any desire for change from the unhappy people.
Ensorceled|2 years ago
That may be true for some things, but for many decisions it is not true.
There is enough food to feed everyone if we choose to distribute it properly. There is enough housing to house everyone. etc. etc.
There may not be enough cardiologists or Dalí originals ...
Terr_|2 years ago
That reminds me of the SMBC "Existifier" comic, which satirizes the idea that merely helping something exist is morally positive.
https://www.smbc-comics.com/comic/existence