bdcs | 2 years ago

The tyranny of the marginal user reminds me of the Repugnant Conclusion from population ethics.[0] This is a conclusion of total utilitarianism: if you have N people each with 10 happiness, well then, it would be better to have 10N people with 1.1 happiness each, then 100N people with 0.111 happiness each, and so on until you have a near-infinite number of people with barely any happiness. Substitute profit for happiness, and you get the tyranny of the marginal user.
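
(A minimal sketch of the arithmetic above, assuming Python and an arbitrary base population of N = 1000: total happiness rises slightly at each step while per-person happiness collapses.)

    # Total-utility comparison for the three worlds described above.
    # N is an arbitrary base population; the per-person happiness
    # values are the ones quoted in the comment.
    N = 1000
    worlds = [
        (N,       10.0),    # N people, 10 happiness each
        (10 * N,  1.1),     # 10N people, 1.1 happiness each
        (100 * N, 0.111),   # 100N people, 0.111 happiness each
    ]
    for population, per_person in worlds:
        total = population * per_person
        print(f"pop={population:>7,}  per-person={per_person:<6}  total={total:,.0f}")
    # Totals: 10,000 -> 11,000 -> 11,100.  Total utility keeps rising, so
    # total utilitarianism ranks each world above the previous one, even
    # though per-person happiness falls toward zero.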

Perhaps the resolutions to the Repugnant Conclusion (Section 2, "Eight Ways of Dealing with the Repugnant Conclusion") can also be applied to the tyranny of the marginal user. Though to be honest, I find none of the resolutions wholly compelling.

[0] https://plato.stanford.edu/ARCHIVES/WIN2009/entries/repugnan...

feoren|2 years ago

That conclusion is not repugnant at all, it's just that its phrasing is so simplistic as to be nearly a straw-man. It's a poisoned intuition pump, because it makes you imagine a situation that doesn't follow at all from utilitarianism.

First of all, you're imagining dividing happiness among more people, but imagining them all with the same amount of suffering. You're picturing a drudging life where people work all day and have barely any source of happiness. But if you can magically divide up some total amount of happiness, why not the same with suffering? This is the entire source of the word "repugnant", because it sounds like you get infinite suffering with finite happiness. That does not follow from anything utilitarianism stipulates; you've simply created an awful world and falsely called it utilitarianism. Try to imagine all these people living a nearly completely neutral life, erring a bit on the happier side, and it suddenly doesn't sound so bad.

Secondly, you're ignoring the fact that people can create happiness for others. What fixed finite "happiness" resource are we divvying up here? Surely a world with 10 billion people has more great works of art for all to enjoy than a world with 10 people, not to mention far less loneliness. It's crazy to think the total amount of happiness to distribute is independent of the world population.

There are many more reasonable objections to even the existence of that so-called "conclusion" without even starting on the many ways of dealing with it.

galaxyLogic|2 years ago

Your post reminds me of xenophobes who lament the arrival of immigrants. The immigrants are taking their jobs, they say. Such a viewpoint can be countered with the imaginary scenario of a country with only 2 people. How well are they doing? There are no stores to buy goodies from, because who would open a store for just 2 people? Perhaps an immigrant could open a deli!

When there are more immigrants who are allowed to work, the immigrants will make some money for themselves. What do they do with that money? They spend it, which grows the economy. Our economy, not some other country's economy.

If you were the only living person on this planet you would be in trouble. Thank God for other people being there too.

shadowgovt|2 years ago

All of this having been said, once you replace happiness with revenue, chasing marginal users makes a lot of sense.

If you have a sure-fire way to get half the people on the planet to give you $1, you can afford a yacht. Even if it means the tool you make for them only induces them to ever give you that $1 and not more... Why do you care? You have a yacht now. You can contemplate whether you should have made them something more useful from the relative safety and comfort of your yacht.

skybrian|2 years ago

Yes, more generally, I’m reminded of David Chapman’s essay, “No Cosmic Meaning” [1]. Thought experiments are a good way to depress yourself if you take them seriously.

But I think that utilitarianism has a vague but somewhat related problem in treating "utility" as a one-dimensional quantity that you can add up. There are times when adding things together and doing comparisons makes a kind of sense, but it's an abstraction. Nothing says you ought to quantify and add things up in a particular way, and utilitarianism doesn't provide a way of resolving disputes about quantifying and adding. Not that it really tries; it's ultimately a metaphor about doing math, which isn't the same thing as actually doing math.

[1] https://meaningness.com/no-cosmic-meaning

pdonis|2 years ago

> a situation that doesn't follow at all from utilitarianism

Except that it does according to many utilitarians. That's why it has been a topic of discussion for so long.

> you're imagining dividing happiness among more people, but imagining them all with the same amount of suffering

No. "Utility" includes both positive (happiness) and negative (suffering) contributions. The "utility" numbers that are quoted in the argument are the net utility numbers after all happiness and all suffering have been included.

> You're picturing a drudging life where people work all day and have barely any source of happiness.

Or a life with a lot of happiness but also a lot of suffering, so the net utility is close to zero, because the suffering almost cancels out the happiness. (This is one of the key areas where many if not most people's moral intuitions, including mine, do not match up with utilitarianism: happiness and suffering aren't mere numbers and you can't just blithely have them cancel each other out that way.)

> if you can magically divide up some total amount of happiness, why not the same with suffering?

Nothing in the argument contradicts this. The argument is not assuming a specific scenario; it is considering all possible scenarios and finding comparisons between them that follow from utilitarianism but do not match up with most people's moral intuitions. It is no answer to the argument to point out that there are other comparisons that don't suffer from this problem; utilitarianism claims to be a universal theory of morality and ethics, so if any possible scenario is a problem for it, then it has a problem.

> you're ignoring the fact that people can create happiness for others

But "can" isn't the same as "will". The repugnant conclusion takes into account the possibility that adding more people might not have this consequence. The whole point is that utilitarianism (or more precisely the Total Utility version of utilitarianism, which is the most common version) says that a world with more people is better even if the happiness per person goes down, possibly way down (depending on how many more people you add), which is not what most people's moral intuitions say.

> It's crazy to think the total amount of happiness to distribute is independent of the world population.

The argument never makes this assumption. You are attacking a straw man. Indeed, in the comparisons cited in the argument, the worlds with more people have more total happiness--just less happiness per person.

Murfalo|2 years ago

Thank you for this! I have very similar thoughts. Felt like I was going crazy each time I saw these types of conversations sparked by mention of the "repugnant" conclusion...

julianeon|2 years ago

Here's a simpler way to phrase the problem.

The current world population is about 8 billion.

By this argument, and also by your argument, it should actually be 999 billion. Or a number even higher than that.

The conclusion boils down to:

1. Find maximum population number earth can support.

2. Hit that number.

I do think that, when put this way, it seems simplistic.

tyre|2 years ago

The Repugnant Conclusion is one of those silly problems in philosophy that don’t make much sense outside of academics.

Utilitarianism ought to be about maximizing the happiness (total and distribution) of an existing population. Merging it with natalism isn't realistic or meaningful, so we end up with these population morality debates. The happiness of an unconceived possible human is null (not the same as zero!)

Compare to Rawls’s Original Position, which uses an unborn person to make the hypothetical work but is ultimately about optimizing for happiness in an existing population.

We really shouldn’t get ourselves tied into knots about the possibility of pumping out trillions of humans because an algorithm says they’ll be marginally net content. That’s not the end goal of any reasonable, practical, or sane system of ethics.

chongli|2 years ago

Rawls's original position, and the veil of ignorance he uses to support it, has a major weakness: it's a time-slice theory. Your whole argument rests on it. You're talking about the "existing population" at some particular moment in time.

Here I am replying to you 3 hours later. In the meantime, close to 20,000 people have died around the world [1]. Thousands more have been born. So if we're to move outside the realm of academics, as you put it, we are forced to contend with the fact that there is no "existing population" to maximize happiness for. The population is perhaps better thought of as a river of people, always flowing out to sea.

The Repugnant Conclusion is relevant, perhaps now more than at any time in the past, because we've begun to grasp -- scientifically, if not politically -- the finitude of earth's resources. By continuing the way we are, toward ever-increasing consumption of resources and ever-growing inequality, we are racing towards humanitarian disasters the likes of which have never been seen before.

[1] https://www.medindia.net/patients/calculators/world-death-cl...

dragonwriter|2 years ago

> Utilitarianism ought to be about maximizing the happiness (total and distribution) of an existing population.

That's a somewhat similar alternative to utilitarianism, which has its own kind of repugnant conclusions, in part as a result of the same flawed premises: that utility experienced by different people is a quantity with common objective units that can be meaningfully summed, and that, given that, morality is defined by maximizing that sum across some universe of analysis. It differs from by-the-book utilitarianism in changing the universe of analysis, which changes the precise problems the flawed premises produce, but it doesn't really solve anything fundamentally.

> Compare to Rawls’s Original Position, which uses an unborn person to make the hypothetical work but is ultimately about optimizing for happiness in an existing population.

No, it's not; the Original Position neither deals with a fixed existing population nor is it about optimizing for happiness in the summed-utility sense. It's more about optimizing the risk-adjusted distribution of the opportunity for happiness.

salawat|2 years ago

>We really shouldn’t get ourselves tied into knots about the possibility of pumping out trillions of humans because an algorithm says they’ll be marginally net content. That’s not the end goal of any reasonable, practical, or sane system of ethics.

Are you sure you aren't sharing the world with people who do not adhere to a reasonable, practical, or sane system of ethics?

Because, ngl, lately, I'm not so sure I can offer an affirmative on that one, making "Getting tied into knots about the possibility of pumping out trillions of humans because an algorithm says they’ll be marginally net content" a reasonable thing to be trying to cut a la the Gordian knot.

After all, that very thing, "pump out trillions of humans because some algorithm (genetics, instincts, & culture taken collectively) says they'll be marginally more content", is the modus operandi of humanity, with shockingly little appreciation for the externalities involved.

caturopath|2 years ago

I think you might be missing a big part of what this sort of philosophy is really about.

> Utilitarianism ought to be about maximizing the happiness (total and distribution) of an existing population

For those who accept your claim above, lots of stuff follows, but your claim is a bold assertion that isn't accepted by everyone involved, or even many people involved.

The repugnant conclusion is a thought experiment where one starts with certain stripped-down claims (not including yours here) and follows them to their logical conclusion. This is worth doing because many people find it plausible that those axioms define a good ethical system, but the fact that they entail the repugnant conclusion causes people to say "Something in here seems to be wrong or incomplete." People have proposed many alternate axioms, and your take is just one of them, and not a popular one.

I suspect part of the reason yours isn't popular is:

- People seek axiological answers from their ethical systems, so they wish to be able to answer "Which of these two unlike worlds is better?" -- even if they aren't asking "What action should I take?" Many people want to know "What is better?", period, and they want such questions to always be answerable. Some folks have explored a concept along the lines of yours, where sometimes there just isn't a comparison available, but giving up on being able to compare every pair isn't popular.

- We actually make decisions, or imagine being able to make future real decisions, that result in there being more or fewer persons. Is it right to have kids? Is it right to subsidize childbearing? Is it right to attempt to make a ton of virtual persons?

> The happiness of an unconceived possible human is null (not the same as zero!)

Okay, if you say "Total utilitarianism (and all similar things) is wrong", then of course you don't reach the repugnant conclusion via Parfit's argument. But answering "A, B, C imply D" with "Well, not B" is not a very interesting argument here.

Positing a null value also doesn't really answer how we _should_ handle questions of what to do that result in persons being created or destroyed.

> We really shouldn’t get ourselves tied into knots about the possibility of pumping out trillions of humans because an algorithm says they’ll be marginally net content. That’s not the end goal of any reasonable, practical, or sane system of ethics.

Okay, what is the end goal? If you'll enlighten us, then we can all know.

Until then, folks are going to keep trying to figure it out. Parfit explored a system that many people might have thought sounded good on its premises, but proved it led to the repugnant conclusion. The normal reaction is, "Okay, that wasn't the right recipe. Let's keep looking. I want to find a better recipe so I know what to do in hard, real cases." Since such folks rejected the ethical system because it led to the repugnant conclusion, they could be less confident in its prescriptions in more practical situations -- they know that the premises of the system don't reflect what they want to adopt as their ethical system.

coldtea|2 years ago

>The Repugnant Conclusion is one of those silly problems in philosophy that don’t make much sense outside of academics.

Not even for academics. It's something for "rational"-bros.

PaulDavisThe1st|2 years ago

Many versions of utilitarianism never specify the function used to compute the sum for the many. Your example assumes that the function is simple addition, but others have been proposed that reflect some of the complexities of the human condition a little more explicitly (e.g. sad neighbors make neighbors sad).

tasty_freeze|2 years ago

Reinforcing your point, Peter Singer, philosopher and noted utilitarian, has explicitly said that he weights misery far more than happiness in his own framework. On a personal level, he said he'd give up the 10 best days of his life to remove the one worst day of his life (or something like that).

All of his work with effective altruism is aimed at reducing the suffering of those worst off in the world; he spends no time on how to make the well-off even happier.

onlyrealcuzzo|2 years ago

Yeah, utilitarianism means you want to act in a way that's beneficial to most people.

There's many ways you can interpret that, though.

But I think if you say: before, we had 1 apple per person, and now we have 2x as many apples but they're all owned by one person - that's hard to argue is utilitarian.

If before you had 100 apples, and everyone who wanted one had one, and now you have 10,000 apples distributed to people at random, but only 1 in 100 people who wants one has one - that also seems hard to argue as utilitarian.

Businesses are value maximization functions. They'll only be utilitarian if that happens to maximize value.

In the case of software - if you go from 1m users to 10m users - that doesn't imply utilitarianism. It implies that was good for gaming some metric - which more often than not these days is growth, not profit.

tshaddox|2 years ago

Which conceivable method of summing is the least problematic? Depending on the summing method, you might find yourself advocating creating as many people as possible with positive utility, or eliminating everyone with below-average utility, etc.

fouronnes3|2 years ago

Assuming linearity of utility either in individuals or in aggregation is a very common straw man of utilitarianism.

jancsika|2 years ago

> (e.g. sad neighbors make neighbors sad)

I much prefer, "I'd rather have a bottle in front of me than a frontal lobotomy." At least in that case nobody will confuse a trucker hat slogan for a viable system of ethics.

crabbone|2 years ago

One way to deal with this problem is to ask why we use the arithmetic sum to calculate total happiness. There are plenty of ways this can go. Say, if you believe that two very happy people are better than four half-as-happy people, then you can define this sum function as sum(happiness_per_person) / number_of_people. Of course, this isn't the only way.
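
(A small sketch, assuming Python and made-up numbers, of how the choice of aggregation function changes the ranking: the bigger world has a slightly higher total but a much lower average.)

    # Two aggregation rules applied to the same hypothetical worlds:
    # total utility (sum) vs. average utility (sum / population).
    def total_utility(happiness):
        return sum(happiness)

    def average_utility(happiness):
        return sum(happiness) / len(happiness)

    few_and_happy = [10.0, 10.0]   # 2 very happy people
    many_and_meh  = [5.5] * 4      # 4 people, each a bit over half as happy

    print(total_utility(few_and_happy), total_utility(many_and_meh))      # 20.0 22.0
    print(average_utility(few_and_happy), average_utility(many_and_meh))  # 10.0 5.5
    # The total rule prefers the larger, less happy world;
    # the average rule prefers the smaller, happier one.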

Utilitarianism opens a lot of questions about comparability of utility (or happiness) of different people as well as summation. Is it a totally ordered set? Is it a partially ordered set? Perhaps utility is incomparable (that'd be sad and kind of defeat the whole doctrine, but still).

Also, can unhappiness be compensated by happiness? We unthinkingly rush to treat unhappiness as we would negative numbers and try to sum that with happiness, but what if it doesn't work? What if the person who has no happiness or unhappiness isn't in the same place as the person who is equally happy and unhappy (their dog died, but they found a million $ on the same day)?

A more typical classroom question would be about chopping up a healthy person for organs to fix X unhealthy people -- is there a number of unhealthy people which would justify killing a healthy person for spare parts?

mvdtnz|2 years ago

Why would anyone think that a large overall pool of happiness is somehow better than high per-capita happiness? This seems like the kind of thing that's incredibly obvious to everyone but the academic philosopher.

burnished|2 years ago

They do not, that's the point. If you start with a simple and reasonable-sounding premise ('it is ethically correct to choose the option that maximizes happiness') but it leads to obviously absurd or inhuman outcomes, then you might not want to adopt those principles.

Your second sentence rankles the hell out of me. You're only able to make that snap judgement because of your exposure to academic philosophy (where do you think the example that demonstrates the problem so clearly comes from?), but you're completely unaware of that.

The bullshitters aren't puzzling over seemingly simple things; they're writing content-free fluff.

patmcc|2 years ago

Maximizing for per-capita happiness just leads to the other end of the same problem - fewer and fewer people with the same "happiness units" spread among them. Thus we should strictly limit breeding and kill people at age X+5 (X always being my age, of course).

It's actually a hard problem to design a perfect moral system; that's why people have been trying for literally thousands of years.

RugnirViking|2 years ago

I suggest, in general, when approaching a conclusion of a field that you find unintuitive or overcomplicated, trying to recognise that thought pattern and swallow your pride. It's an incredibly common reaction of educated people in one area to look at another area and go "wow, why are they overcomplicating it so much, they must all be blind to the obvious problems", as though literally every new student in that field doesn't ask the same questions they're asking. Heck, I do it all the time, most recently when starting to learn music theory.

You may feel so certain that they're just too wrapped up in their nonsense that they can't see what you see. But at the very least you will have to learn it the way they learned it if you want to be effective at communicating with them, articulating what you think is wrong, and convincing people. And in doing so you'll likely realise that, far from being some unquestioned truth, every conclusion in the field is subject to vigorous debate, and hundreds and thousands of pages of criticisms and rebuttals exist for any conclusion you care about. And for it to have gotten as big as it is, such that you, a person outside the field, are hearing about it at all, there must at least be something interesting and worth examining going on there.

For a prime example, see all the retired engineers who decide that, because they can't read a paper on quantum physics with their calculus background, the physicists must be overcomplicating it, and bombard them constantly with mail about their own crackpot theories. You don't want to be that person.

wilg|2 years ago

It's just a question of whether or not you value other people existing. If you don't, focus on per-capita happiness; if you do, focus on meeting a minimum threshold of happiness for everyone.

I don't see how you couldn't value other people existing – I think they have just as much of a right to experience the universe as I do.

saint_fiasco|2 years ago

In this particular case, it's because the success of an ad-funded service depends on the number of users it has.

If you don't like the repugnant conclusion, you have to change something in the conditions of the environment so that it no longer holds. Arguing against it and calling your refutation obvious doesn't do anything.

oatmeal1|2 years ago

First, the phrasing is confusing, because it's not clear whether people with very low happiness measured in terms of N are what we consider unhappy/sad, which would actually be negative utility. I believe that with this measure, a positive N means someone is more happy than they are unhappy.

Second, what's "obvious to everyone" is just based on how you're phrasing the question. If you suggested to people it would be better if the population were just one deliriously happy person with N=50, vs 5 happy people with N=10.1, people would say obviously it would be better to spread the wealth and increase overall happiness.

scythe|2 years ago

The problem is that the "repugnant conclusion" is a matter of definitions. A moral theory is (basically) freely chosen: you can change the definitions whenever you like.

Not so for B2C SaaS. The utilities are always measured in dollars and they always aggregate by simple addition. You can't simply redefine the problem away by changing the economic assumptions, because they exist in physical space and not in theory space.

wilg|2 years ago

I've never understood this problem. To me, it seems that since you've defined a minimum "worth living" amount of happiness and unbounded population, it makes complete sense that the answer would be that it is better to have lots of people whose life is worth living rather than fewer. Is it not tautological?

Like it seems like you have to take "worth living" seriously, since that is the element that is doing all the work. If it's worth living, you've factored in everything that matters already.

mhb|2 years ago

If you pack the whole problem into a definition of "worth living", then you're right. But the premise is that there is a range from extreme misery through neutral to extreme happiness. The repugnant conclusion is that it is better to have many people in a state that is barely above neutral.

mercenario|2 years ago

> This is a conclusion of total utilitarianism: if you have N people each with 10 happiness, well then, it would be better to have 10N people with 1.1 happiness each, then 100N people with 0.111 happiness each, and so on until you have a near-infinite number of people with barely any happiness

1) Population isn't infinite; you can't continue this for very long

2) Your assumption completely depends on how costly it is to add +1 happiness versus +1 user, which you don't even mention. And these costs are not fixed; they increase, so even if it is cheaper to add +1 user in the beginning, it will not continue to be cheaper indefinitely

So, nothing is preventing you from increasing happiness at the same time you increase users.

didibus|2 years ago

I really don't see the issue with your happiness split. You have 10 people, and they're all equally unhappy.

This is perfect, because now they are all equally incentivized to do something about it. They're motivated to work together and collaborate for change.

If you do any other split where some people will be very happy and others very unhappy, you've now created a certain category of people who are incentivized to maintain the current system and repress any desire for change from the unhappy people.

Ensorceled|2 years ago

Every time I've engaged in debate over this, it always comes down to a belief that the world is zero-sum and there is a limited amount of "happiness" that can be distributed.

That may be true for some things, but for many decisions it is not true.

There is enough food to feed everyone if we choose to distribute it properly. There is enough housing to house everyone. etc. etc.

There may not be enough cardiologists or Dalí originals ...

coldtea|2 years ago

AKA the Repugnantly Ignorant in the Human-Ways Nerd's Idea of Ethics conclusion!

Vt71fcAqt7|2 years ago

There is a minimum happiness threshold mH. We can increase population P until happiness H reaches mH, give or take some depending on how close you want to get to mH.
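
(A minimal sketch, assuming Python, of the threshold idea above: per-person happiness is assumed to fall as the population grows, and the population grows only while happiness stays at or above the floor mH. The falling-happiness function is made up purely for illustration.)

    # Grow the population while per-person happiness stays above a floor.
    mH = 1.0                   # minimum acceptable per-person happiness
    base_happiness = 10.0

    def happiness(population_millions):
        # Illustrative assumption: happiness falls as population grows.
        return base_happiness / (1 + population_millions)

    P = 0                      # population, in millions
    while happiness(P + 1) >= mH:
        P += 1
    print(P, happiness(P))     # -> 9 (million), per-person happiness 1.0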