
The tyranny of the marginal user reminds me of population ethics' The Repugnant Conclusion.[0] This is the conclusion of utilitarianism, where if you have N people each with 10 happiness, well then, it would be better to have 10N people with 1.1 happiness, or 100N people with 0.111 happiness, until you have infinite people with barely any happiness. Substitute profit for happiness, and you get the tyranny of the marginal user.
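
For concreteness, here's the arithmetic as a quick Python sketch (the populations and happiness values are just the made-up numbers from above):

    N = 1_000_000                  # arbitrary starting population
    worlds = [
        (N, 10.0),                 # N people at 10 happiness each
        (10 * N, 1.1),             # 10x the people, ~1/9 the happiness
        (100 * N, 0.111),          # 100x the people, ~1/90 the happiness
    ]
    for population, per_capita in worlds:
        print(population, per_capita, population * per_capita)
    # totals: 1.0e7, then 1.1e7, then 1.11e7 -- each bigger, less happy
    # world "wins" on total utility, even as each individual life gets worse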

Perhaps the resolutions to the Repugnant Conclusion (Section 2, "Eight Ways of Dealing with the Repugnant Conclusion") can also be applied to the tyranny of the marginal user. Though to be honest, I find none of the resolutions wholly compelling.

[0] https://plato.stanford.edu/ARCHIVES/WIN2009/entries/repugnan...




That conclusion is not repugnant at all, it's just that its phrasing is so simplistic as to be nearly a straw-man. It's a poisoned intuition pump, because it makes you imagine a situation that doesn't follow at all from utilitarianism.

First of all, you're imagining dividing happiness among more people, but imagining them all with the same amount of suffering. You're picturing a drudging life where people work all day and have barely any source of happiness. But if you can magically divide up some total amount of happiness, why not the same with suffering? This is the entire source of the word "repugnant", because it sounds like you get infinite suffering with finite happiness. That does not follow from anything utilitarianism stipulates; you've simply created an awful world and falsely called it utilitarianism. Try to imagine all these people living a nearly completely neutral life, erring a bit on the happier side, and it suddenly doesn't sound so bad.

Secondly, you're ignoring the fact that people can create happiness for others. What fixed finite "happiness" resource are we divvying up here? Surely a world with 10 billion people has more great works of art for all to enjoy than a world with 10 people, not to mention far less loneliness. It's crazy to think the total amount of happiness to distribute is independent of the world population.

There are many more reasonable objections to even the existence of that so-called "conclusion" without even starting on the many ways of dealing with it.


Your post reminds me of xenophobes who lament the arrival of immigrants. "The immigrants are taking our jobs," they say. Such a viewpoint can be countered with the imaginary scenario where you live in a country with only 2 people. How well are they doing? There are no stores to buy goodies from, because who would create such a store for just 2 people? Perhaps an immigrant could open a deli!

When there are more immigrants who are allowed to work, the immigrants will make some money for themselves. What do they do with that money? They spend it, which grows the economy. Our economy, not some other country's economy.

If you were the only living person on this planet you would be in trouble. Thank God for other people being there too.


> What do they do with that money? They spend it, which grows the economy. Our economy, not some other country's economy.

I'm going to guess you've never spoken to anyone who is sending money back to their family in their original country with every paycheck.

Not really the point of this conversation I guess but... yeah. It does happen more than you probably think. To the point where malls in my area have kiosks for wiring money to other countries for cheap.


Wouldn’t they be sending leftover money, i.e. money left after spending locally, back to their home country?

I can’t imagine a lot of people out there who send all their money back home without spending some of it locally for self sustenance.


> I'm going to guess you've never spoken to anyone who is sending money back to their family in their original country with every paycheck.

If that sent money ever comes back to the domestic economy, then you are back to the previous situation.

If it doesn't come back, that's even better: because then your central bank can print more money to make up for the disappearance. Essentially, you got the foreigner to perform services in return for some ink and paper.


That's not true: most of the money is spent here, and very little goes to take care of the family that's been left back home. Otherwise how could they survive here? Think about immigrant kids' education, housing, healthcare, retirement.


I agree this happens a lot, but isn't this an area of financial life that is regulated, governed, and monitored by states today? I am not familiar with the policies or regulations in play here but this seems very addressable by the institutions and regulatory bodies that exist today in most (all?) countries. A solved problem, in other words.


For example, the income tax and other taxes already capture the portion of foreign workers' incomes that the state wants to capture without discouraging workers so much that they go work elsewhere, according to its economic models and policies.

I suppose different countries have different strategies on this: rich countries are trying to benefit from foreign labor and capture some taxes from it, while less rich countries are trying to increase their haul by encouraging their citizens to become foreign workers (example: the Philippines).


Right, and workers are the ones who build the country. The more workers there are, the bigger the gross domestic product. Now if some money enriches people in other countries, that means they will buy more Coca-Cola! This planet is not a zero-sum game. When people help each other, that helps everybody. The exception, of course, is countries which start wars.

I mean, if the USA had never allowed immigrants to come here, where would we be economically?


Your logic: let's reduce immigration, because immigrants send remittances to family members whom our immigration laws do not permit to join them in this country. Letting those family members immigrate would increase immigration numbers further while reducing capital offshoring.


It seems you completely misunderstood the parent comment. They are arguing against the existence of the repugnant conclusion, by pointing out that happiness -- like the economy -- is not actually a finite pie to divvy up.


Your scenario leaves a lot to be desired.

Yeah two people only.

Well, your scenario can easily be countered with the imaginary scenario of a town with 1 billion residents: far too little housing, no green space left due to trying to provide housing, and natural resources for perhaps 300,000,000.

Now 100,000,000 immigrants arrive. There is not enough food, water, or sanitation. Hopefully, opening delis will solve the issue.

Yes, it is absurd. But no more so than a world of 2.

History, though, does prove your theory right: when proud and brave Europeans immigrated to what would become the United States.

"When they arrive there are no stores to buy goodies from because who would create such a store for just 2 people? Perhaps an immigrant, could open a deli!""

Thankfully for the native peoples, immigrants came in to create a consumer capitalist culture.

Can you imagine the utter horror if the native peoples had been allowed to keep their versions of society going and develop them the way they wanted? They sure were blessed by the immigrants. A lot of the native peoples also became xenophobes, and we sure know what bastards xenophobes are.


This is silly because you don't have to imagine any scenarios. Good economic models are based on empirical data, not on imagining things, and "immigrants don't increase unemployment or decrease wages" is just about the strongest empirical result there is.

The reason being that someone moving closer to you increases demand for labor more than supply, and immigrants generally have complementary (slightly different) skills to natives. So the person whose labor the immigrant most competes with is another immigrant.

https://www.noahpinion.blog/p/why-immigration-doesnt-reduce-...

As for remittances/brain drain, there are certainly theoretical issues but it seems to be okay in the end because it boosts investment in the originating country.

https://www.noahpinion.blog/p/why-skilled-immigration-usuall...


And if you subscribe to MMT you should love immigrants because they're free "unemployed" people you didn't have to spend money raising so you have greater runway to print more money without inflation.


And if they are illegal immigrants, even better: they will work hard for a very small salary. They even commit very little crime, because they don't want to be thrown out of the country.

I think in general immigration from say Mexico to US is a loss for Mexico and a win for US.


A country of two people is not hard to imagine; think of a desert island with Robinson Crusoe and Friday.

Overpopulation could be a problem but when people become better off financially they for some reason tend to have fewer children.


All of this having been said, once you replace happiness with revenue, chasing marginal users makes a lot of sense.

If you have a sure-fire way to get half the people on the planet to give you $1, you can afford a yacht. Even if it means the tool you make for them only induces them to ever give you that $1 and not more... Why do you care? You have a yacht now. You can contemplate whether you should have made them something more useful from the relative safety and comfort of your yacht.
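
Back-of-envelope, in Python, with the current world population as an assumption:

    population = 8_000_000_000      # roughly today's world population
    revenue = population // 2 * 1   # half the planet gives you $1 each
    print(revenue)                  # 4_000_000_000 -> $4 billion: yacht money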


Yes, more generally, I’m reminded of David Chapman’s essay, “No Cosmic Meaning” [1]. Thought experiments are a good way to depress yourself if you take them seriously.

But I think that utilitarianism has a vague but somewhat related problem in treating “utility” as a one-dimensional quantity that you can add up? There are times when adding things together and doing comparisons makes a kind of sense, but it’s an abstraction. Nothing says you ought to quantify and add things up in a particular way, and utilitarianism doesn’t provide a way of resolving disputes about quantifying and adding. Not that it really tries, because it’s furthermore a metaphor about doing math, which isn’t the same thing as doing math.

[1] https://meaningness.com/no-cosmic-meaning


The big problem with utilitarianism is that people think the preference function for the utilitarian who is creating a given world is something simple. Then some people say: no, it's more complex, we need to take into account X, Y, and Z. But the truth is, no human being is capable of defining a good utility function, even for ourselves. We don't know all the parameters, and we don't know how to combine those parameters to add them up. So I would say that formal, proper utilitarianism is not a metaphor for math: it is math. But right now it is in the realm of non-constructive math.

Maybe our descendants will lift it out of that realm with computers someday, because the human brain, with just pieces of paper and text, probably cannot do it.


Also, utilitarianism was created by people who were utterly unaware that the world is fundamentally chaotic. Instead they thought it could be represented by a system of linear equations.

It's fundamentally broken in practice.


> utilitarianism has a vague but somewhat related problem in treating “utility” as a one-dimensional quantity that you can add up?

Yes, it does. This is one of the most common (and in my view, most compelling) criticisms of utilitarianism.


One of the very muddled thoughts I have in my head, along with Goodhart's Law and AIs which blissfully attempt to convert the universe into paperclips, is that having a single maximized function as a goal seems to give rise to these bizarre scenarios once you begin to scan for them.

I have started to think that you need at least two functions, in tension, to help forestall this kind of runaway behavior.
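
For what it's worth, here's a minimal Python sketch (my own made-up numbers) of the simplest version of that idea: keep one function as the maximand and let the second act as a veto, so neither can be traded away entirely:

    # Hypothetical sketch: two functions in tension instead of one maximand.
    # The first function (total utility) rewards adding people; the second
    # (a per-person floor) vetoes worlds that immiserate individuals.
    def acceptable(world, min_per_capita=5.0):
        population, per_capita = world
        return per_capita >= min_per_capita

    def best(worlds):
        candidates = [w for w in worlds if acceptable(w)]
        return max(candidates, key=lambda w: w[0] * w[1])

    print(best([(1_000, 10.0), (10_000, 1.1), (100_000, 0.111)]))
    # -> (1000, 10.0): the larger worlds are vetoed by the floor

Whether a hard floor really counts as a second function "in tension", rather than just a constraint, is debatable, but it does block the runaway scenario in this toy setup.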


Even "two functions, in tension" still assumes that you can capture values as functions at all. But the reason ethics and morality are hard in the first place is that there are no such functions. We humans have multiple incommensurable, and sometimes incompatible, values that we can't capture with numbers. That means it's not even a matter of not being able to compute the "right" answer; it's that the very concept of there being a single "right" answer doesn't seem to work.


I think that's what it will approach in the limit, yes, if you are talking about humans. For AIs, I think it will be somewhat less so, and that it would be preferable for the sake of predictability.


> a situation that doesn't follow at all from utilitarianism

Except that it does according to many utilitarians. That's why it has been a topic of discussion for so long.

> you're imagining dividing happiness among more people, but imagining them all with the same amount of suffering

No. "Utility" includes both positive (happiness) and negative (suffering) contributions. The "utility" numbers that are quoted in the argument are the net utility numbers after all happiness and all suffering have been included.

> You're picturing a drudging life where people work all day and have barely any source of happiness.

Or a life with a lot of happiness but also a lot of suffering, so the net utility is close to zero, because the suffering almost cancels out the happiness. (This is one of the key areas where many if not most people's moral intuitions, including mine, do not match up with utilitarianism: happiness and suffering aren't mere numbers, and you can't just blithely have them cancel each other out that way.)

> if you can magically divide up some total amount of happiness, why not the same with suffering?

Nothing in the argument contradicts this. The argument is not assuming a specific scenario; it is considering all possible scenarios and finding comparisons between them that follow from utilitarianism, but do not match up with most people's moral intuitions. It is no answer to the argument to point out that there are other comparisons that don't suffer from this problem; utilitarianism claims to be a universal theory of morality and ethics, so if any possible scenario is a problem for it, then it has a problem.

> you're ignoring the fact that people can create happiness for others

But "can" isn't the same as "will". The repugnant conclusion takes into account the possibility that adding more people might not have this consequence. The whole point is that utilitarianism (or more precisely the Total Utility version of utilitarianism, which is the most common version) says that a world with more people is better even if the happiness per person goes down, possibly way down (depending on how many more people you add), which is not what most people's moral intuitions say.

> It's crazy to think the total amount of happiness to distribute is independent of the world population.

The argument never makes this assumption. You are attacking a straw man. Indeed, in the comparisons cited in the argument, the worlds with more people have more total happiness--just less happiness per person.


Thank you for this! I have very similar thoughts. Felt like I was going crazy each time I saw these types of conversations sparked by mention of the "repugnant" conclusion...


Here's a simpler way to phrase the problem.

The current world population is about 8 billion.

By this argument, and also by your argument, it should actually be 999 billion. Or a number even higher than that.

The conclusion boils down to:

1. Find maximum population number earth can support.

2. Hit that number.

I do think that, when put this way, it seems simplistic.


To be fair, boiling something down to a simple statement does indeed tend to produce simplistic statements


Here's an even simpler way to phrase the problem.

The current world population is about 8 billion.

By my argument it should be 2 billion.

Your argument is therefore rather foolish.


The Repugnant Conclusion is one of those silly problems in philosophy that don’t make much sense outside of academics.

Utilitarianism ought to be about maximizing the happiness (total and distribution) of an existing population. Merging it with natalism isn’t realistic or meaningful, so we end up with these population morality debates. The happiness of an unconceived possible human is null (not the same as zero!)

Compare to Rawls’s Original Position, which uses an unborn person to make the hypothetical work but is ultimately about optimizing for happiness in an existing population.

We really shouldn’t get ourselves tied into knots about the possibility of pumping out trillions of humans because an algorithm says they’ll be marginally net content. That’s not the end goal of any reasonable, practical, or sane system of ethics.


Rawls's original position, and the veil of ignorance he uses to support it, has a major weakness: it's a time-slice theory. Your whole argument rests on it. You're talking about the "existing population" at some particular moment in time.

Here I am replying to you 3 hours later. In the meantime, close to 20,000 people have died around the world [1]. Thousands more have been born. So if we're to move outside the realm of academics, as you put it, we force ourselves to contend with the fact that there is no "existing population" to maximize happiness for. The population is perhaps better thought of as a river of people, always flowing out to sea.

The Repugnant Conclusion is relevant, perhaps now more than at any time in the past, because we've begun to grasp -- scientifically, if not politically -- the finitude of earth's resources. By continuing the way we are, toward ever-increasing consumption of resources and ever-growing inequality, we are racing towards humanitarian disasters the likes of which have never been seen before.

[1] https://www.medindia.net/patients/calculators/world-death-cl...


> By continuing the way we are, toward ever-increasing consumption of resources and ever-growing inequality, we are racing towards humanitarian disasters the likes of which have never been seen before.

We aren't doing that. Increasing human populations don't increase resource consumption, because 1) resources aren't always consumed per capita, and 2) we have the spare human capital to invent new, cleaner technology.

It's backwards actually - decreasing populations, making for a deflating economy, encourage consumption rather than productivity investment. That's how so many countries managed to deforest themselves when wood fires were still state of the art.

Also, "resources are finite" isn't an argument against growth because if you don't grow /the resources are still finite/. So all you're saying is we're going to die someday. We know that.


I mostly agree. However:

> That's how so many countries managed to deforest themselves when wood fires were still state of the art.

It was mostly shipbuilding that deforested eg the countries around the Mediterranean and Britain. Firewood was mostly harvested reasonably sustainably from managed areas like coppices in many places. See https://en.wikipedia.org/wiki/Coppicing


> By continuing the way we are, toward ever-increasing consumption of resources and ever-growing inequality, we are racing towards humanitarian disasters the likes of which have never been seen before.

What do you mean by ever growing inequality? Global inequality has decreased in recent decades. (Thanks largely to China and to a lesser extent India moving from abject poverty to middle income status.)

By some measures we are also using fewer resources than we used to. Eg resource usage in the US, as measured in total _mass_ of stuff flowing through the economy, peaked sometime in the 1930s.

Have a look at the amount of energy used per dollar of GDP produced, too. Eg at https://yearbook.enerdata.net/total-energy/world-energy-inte...


> Utilitarianism ought to be about maximizing the happiness (total and distribution) of an existing population.

That's a somewhat similar alternative to utilitarianism, which has its own kind of repugnant conclusions, in part as a result of the same flawed premises: that utility experienced by different people is a quantity with common objective units that can be meaningfully summed, and that, given that, morality is defined by maximizing that sum across some universe of analysis. It differs from by-the-book utilitarianism in changing the universe of analysis, which changes the precise problems the flawed premises produce, but doesn't really solve anything fundamental.

> Compare to Rawls’s Original Position, which uses an unborn person to make the hypothetical work but is ultimately about optimizing for happiness in an existing population.

No, it's not; the Original Position neither deals with a fixed existing population nor optimizes for happiness in the summed-utility sense. It's more about optimizing the risk-adjusted distribution of the opportunity for happiness.


>We really shouldn’t get ourselves tied into knots about the possibility of pumping out trillions of humans because an algorithm says they’ll be marginally net content. That’s not the end goal of any reasonable, practical, or sane system of ethics.

Are you sure you aren't sharing the world with people who do not adhere to a reasonable, practical, or sane system of ethics?

Because, ngl, lately, I'm not so sure I can offer an affirmative on that one, making "Getting tied into knots about the possibility of pumping out trillions of humans because an algorithm says they’ll be marginally net content" a reasonable thing to be trying to cut a la the Gordian knot.

After all, that very thing, "pump out trillions of humans because some algorithm (genetics, instincts, & culture taken collectively) says they'll be marginally more content", is the modus operandi of humanity, with shockingly little appreciation for the externalities involved.


I think you might be missing a big part of what this sort of philosophy is really about.

> Utilitarianism ought to be about maximizing the happiness (total and distribution) of an existing population

For those who accept your claim above, lots of stuff follows, but your claim is a bold assertion that isn't accepted by everyone involved, or even many people involved.

The repugnant conclusion is a thought experiment where one starts with certain stripped-down claims, not including yours here, and follows them to their logical conclusion. This is worth doing because many people find it plausible that those axioms define a good ethical system, but the fact that they require the repugnant conclusion causes people to say "Something in here seems to be wrong or incomplete." People have proposed many alternate axioms, and your take is just one, which isn't popular.

I suspect part of the reason yours isn't popular is

- People seek axiological answers from their ethical systems, so they wish to be able to answer "Which of these two unlike worlds is better?" -- even if they aren't asking "What action should I take?" Many people want to know "What is better?", period, and they want such questions to always be answerable. Some folks have explored a concept along the lines of yours, where sometimes there just isn't a comparison available, but giving up on being able to compare every pair isn't popular.

- We actually make decisions or imagine the ability to make future real decisions that result in there being more or fewer persons. Is it right to have kids? Is it right to subsidize childbearing? Is it right to attempt to make a ton of virtual persons?

> The happiness of an unconceived possible human is null (not the same as zero!)

Okay, if you say "Total utilitarianism (and all similar things) are wrong", then of course you don't reach the repugnant conclusion via Parfit's argument. "A, B, C implies D", "Well, not B" is not a very interesting argument here.

Your "null" posit also doesn't really answer how we _should_ handle questions of what to do that result in persons being created or destroyed.

> We really shouldn’t get ourselves tied into knots about the possibility of pumping out trillions of humans because an algorithm says they’ll be marginally net content. That’s not the end goal of any reasonable, practical, or sane system of ethics.

Okay, what is the end goal? If you'll enlighten us, then we can all know.

Until then, folks are going to keep trying to figure it out. Parfit explored a system that many people might have thought sounded good on its premises, but proved it led to the repugnant conclusion. The normal reaction is, "Okay, that wasn't the right recipe. Let's keep looking. I want to find a better recipe so I know what to do in hard, real cases." Since such folks rejected the ethical system because it led to the repugnant conclusion, they could be less confident in its prescriptions in more practical situations -- they know that the premises of the system don't reflect what they want to adopt as their ethical system.


>The Repugnant Conclusion is one of those silly problems in philosophy that don’t make much sense outside of academics.

Not even for academics. It's something for "rational"-bros.


(Real, academic philosophers actually care about the case, too.)


Only because the practice (in the US mostly) has been watered down a lot to include all kinds of rational-bros in the tradition of "analytical philosophy", usually also involved in the same circles and arguments as the wider rational-bro community.

Then again, the opposite side has also devolved into a parody of 20th century continental philosophical concerns, with no saving grace.


Many versions of utilitarianism never specify the function used to compute the sum over the many. Your example assumes that the function is simple addition, but others have been proposed that reflect some of the complexities of the human condition a little more explicitly (e.g. sad neighbors make neighbors sad).
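
As a toy sketch of what a non-additive function might look like (my own made-up example in Python, not any published proposal), here's one where neighbors' sadness spills over:

    # Toy welfare function: each person's experienced utility is their own
    # happiness minus a penalty proportional to their neighbors' sadness.
    def social_welfare(happiness, spillover=0.5):
        total = 0.0
        for i, h in enumerate(happiness):
            others = happiness[:i] + happiness[i+1:]
            sadness = sum(max(0.0, -x) for x in others) / max(1, len(others))
            total += h - spillover * sadness
        return total

    print(sum([40, -5, -5]), social_welfare([40, -5, -5]))  # 30 vs 25.0
    print(sum([10, 10, 10]), social_welfare([10, 10, 10]))  # 30 vs 30.0

Simple addition rates both distributions equally at 30; the spillover version penalizes the unequal one.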


Reinforcing your point: Peter Singer, philosopher and noted utilitarian, has explicitly said that he weights misery far more than happiness in his own framework. On a personal level, he said he'd give up the 10 best days of his life to remove the one worst day of his life (or something like that).

All of his work with effective altruism is aimed at reducing suffering of those worst off in the world and spends no time with how to make the well off even happier.


I hadn’t heard that about Singer’s philosophy (unsurprisingly, as I’ve read very little of his work). It’s interesting for me in that it lines up with Kahneman & Tversky’s “losses loom larger than gains” heuristic in psychology.


Singer publishes his book, "The Life You Can Save" for free -- it was revised for the 10th anniversary. It is available in various formats, including an audio book. It is a short, easy read. It is also one of the most impactful books I've ever read.

https://www.thelifeyoucansave.org/

Along the header there is an item "free book" if you want to get the book.

In short, there are a few theses in the book that combine in a compelling way. This isn't his summary, this is the summary I got out of reading it. (1) the suffering of someone you don't know is just as real as the suffering of someone you do know. (2) there are many worthy causes, but we aren't allocating enough resources to address them all, so it is best to allocate them such that they do the most good for the worst off (3) if you are reading hacker news, you are likely in the 1% (worldwide) and your primary needs are already being met and you can make meaningful contributions to help those most in need without affecting your lifestyle much.

Each chapter tackles a topic. For instance, part of it brings up many of the counter-arguments for why someone might not want to donate, and then debunks them. Eg, in a poll, most US citizens believe that 10-20% of their taxes go to foreign aid, and say about half that, say 5%, would be reasonable. In fact it is well under 1%. Or people will say there have always been poor people and there will always be poor people (counter-argument: to the people who are helped, aid makes a huge difference personally, and there have actually been great strides in the past 30 years at reducing global poverty).

Another chapter talks about the typical feel-good news item about someone in the community who has gone blind, and so the community pitches in to pay for a seeing-eye dog for the person (it costs up to $50K to train and vet such dogs). Yet that same $50K could have prevented river blindness for thousands of children, or been enough to perform cataract surgery on thousands of blind adults.

People often say, why give to a charity in Africa? It will just end up in someone's pocket before it helps anyway. The book talks about the effective altruism movement and describes how charities are vetted and monitored. The website above has links to many such vetted charities.

Another chapter talks about where to draw the line. Sure, I can afford $700 to pay for corrective surgery for a woman suffering from fistula and feel good about myself. But in reality I could afford another $700, then another $700, etc. Do I need to keep donating to those worse off until I am one of the worse-off people? (Spoiler: no.)


I don't know who to be more angry at: the idiots who think that Sam Bankman-Fried is effective altruism rather than the version of it above, or SBF.


As an aside, this is why buying insurance, despite being a financially bad bet (or the insurers would go out of business), actually is a sensible thing to do from a quality of life perspective.


Insurance isn't a financially bad bet. They're providing a service (not needing to maintain the liquidity of replacement costs) in exchange for a fixed monthly fee. It's cheaper for me to grow my own food but it's not a "bad bet" to not be a subsistence farmer and buy my food at the grocery store even though many people are making money off my purchase up the chain. I get to use my money and time for something more productive.


You are right for catastrophic insurance, ie insurance that covers outlays that would financially ruin you, or at least be majorly inconvenient.

Many people buy (or are forced to buy) insurance that covers minor outlays, too.

Using that kind of insurance has pretty big (relative) overheads not just in terms of money but also in terms of annoying paperwork and bureaucracy.

Some of the worst offenders are probably health insurance plans that include a fixed hundred-buck allowance towards new glasses every year. They might just as well charge you a hundred bucks less in premiums and strike that allowance. (Except in cases where that scheme is a tax dodge.)


Insurers are often mutually owned by their customers, so they don't need to profit.


That doesn't make much of a difference.

Profit is only one part of the overhead. They also have to pay for agents, adjusters, underwriters, managers, office stationery, postage, fraud investigators, lawyers, taxes, interest on bonds, etc.

Similar for hospitals etc. Profit, ie cost of equity capital, is usually (but not always) a relatively small part of an organisation's overall cost structure. And the non-profit alternatives typically don't have meaningfully lower costs.


Yeah, utilitarianism means you want to act in a way that's beneficial to most people.

There's many ways you can interpret that, though.

But I think if you say that before we had 1 apple per person, and now we have 2x as many apples but they're all owned by one person, it's hard to argue that's utilitarian.

If before you had 100 apples, and everyone who wanted one had one, and now you have 10,000 apples distributed to people at random, but only 1 in 100 people who wants one has one - that also seems hard to argue is utilitarian.

Businesses are value maximization functions. They'll only be utilitarian if that happens to maximize value.

In the case of software - if you go from 1m users to 10m users - that doesn't imply utilitarianism. It implies that was good for gaming some metric - which more often than not these days is growth, not profit.


Which conceivable method of summing is the least problematic? Depending on the summing method, you might find yourself advocating creating as many people as possible with positive utility, or eliminating everyone with below-average utility, etc.


Utility is very complicated, and summing might not even be possible. Folks have argued for completely different utility systems, such as ordinal utility, where utility is modeled purely as relations (rankings) instead of something isomorphic to a real number. Even going by the mainstream view of cardinal utility, utility tends to be a concave function (simplistically, having 1 food is much better than having no food, but having 1000 food isn't that much better than having 500 food). Modeling utility as something isomorphic to the reals gives it all the fun paradoxes that we know the reals have, and can be used to create some really wacky results. The "repugnant conclusion" is a direct consequence of that.


Assuming linearity of utility either in individuals or in aggregation is a very common straw man of utilitarianism.


Doesn't have to be linear, ANY strictly increasing function for aggregating the utility leads to the same conclusion.


> (e.g. sad neighbors make neighbors sad)

I much prefer, "I'd rather have a bottle in front of me than a frontal lobotomy." At least in that case nobody will confuse a trucker hat slogan for a viable system of ethics.


Tyranny of the marginal user is a riff on the Nassim Taleb classic "The Most Intolerant Wins: The Dictatorship of the Small Minority":

https://medium.com/incerto/the-most-intolerant-wins-the-dict...


One way to deal with this problem is to ask why we use the arithmetic sum to calculate the total happiness. There are plenty of ways this can go. Say, if you believe that two very happy people are better than four half-as-happy people, then you can define this aggregation function as sum(happiness_per_person) / number_of_people, i.e. the average. Of course, this isn't the only way.
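
To make it concrete, a quick Python sketch with toy numbers showing how much turns on that choice:

    # Two candidate aggregation functions over per-person happiness.
    def total(hs): return sum(hs)
    def average(hs): return sum(hs) / len(hs)

    small_happy  = [10.0] * 100    # 100 very happy people
    huge_content = [1.1] * 1000    # 1000 barely content people

    print(total(small_happy), total(huge_content))      # 1000.0 vs ~1100.0
    print(average(small_happy), average(huge_content))  # 10.0 vs ~1.1

Total utilitarianism prefers the huge barely-content crowd; averaging prefers the small happy one (and invites its own pathologies, like the ones mentioned upthread).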

Utilitarianism opens a lot of questions about comparability of utility (or happiness) of different people as well as summation. Is it a totally ordered set? Is it a partially ordered set? Perhaps utility is incomparable (that'd be sad and kind of defeat the whole doctrine, but still).

Also, can unhappiness be compensated by happiness? We unthinkingly rush to treat unhappiness as we would negative numbers and try to sum it with happiness, but what if it doesn't work? What if the person who has no happiness or unhappiness isn't in the same place as the person who is equally happy and unhappy (their dog died, but they found a million dollars on the same day)?

A more typical classroom question would be about chopping up a healthy person for organs to fix X unhealthy people -- is there a number of unhealthy people which would justify killing a healthy person for spare parts?


Why would anyone think that a large overall pool of happiness is somehow better than a high per capita happiness? This seems like the kind of thing that's incredibly obvious to everyone but the academic philosopher.


They do not; that's the point. If you start with a simple and reasonable-sounding premise ('it is ethically correct to choose the option that maximizes happiness') and it leads to obviously absurd or inhuman outcomes, then you might not want to adopt that principle.

Your second sentence rankles the hell out of me: you're only able to make that snap judgement because of your exposure to academic philosophy (where do you think the example that demonstrates the problem so clearly comes from?), yet you're completely unaware of that.

The bullshitters aren't the ones puzzling at seemingly simple things; they're the ones writing content-free fluff.


Maximizing for per-capita happiness just leads to the other end of the same problem - fewer and fewer people with the same "happiness units" spread among them. Thus we should strictly limit breeding and kill people at age X+5 (X always being my age, of course).

It's actually a hard problem to design a perfect moral system, that's why people have been trying for literally thousands of years.


I suggest in general, when approaching a conclusion of a field that you find unintuitive or overcomplicated, trying to recognise that thought pattern and swallow your pride. It's an incredibly common reaction for educated people in one area to look at another area and go "wow, why are they overcomplicating it so much, they must all be blind to the obvious problems", as though literally every new student in that field doesn't ask the same questions they're asking. Heck, I do it all the time, most recently when I started learning music theory.

You may feel so certain that they're just too wrapped up in their nonsense that they can't see what you see. But at the very least you will have to learn it the way they learned it if you want to communicate with them effectively, articulate what you think is wrong, and convince people. And in doing so you'll likely realise that, far from being unquestioned truth, every conclusion in the field is subject to vigorous debate, and hundreds and thousands of pages of criticisms and rebuttals exist for any conclusion you care about. And for the field to have grown as big as it is, big enough that you, a person outside it, have heard of it, there must at least be something interesting and worth examining going on there.

For a prime example, see all the retired engineers who decide that because they can't read a paper on quantum physics with their calculus background, the physicists must be overcomplicating it, and bombard them constantly with mail about their own crackpot theories. You don't want to be that person.


It's just a question of whether you value other people existing or not. If you don't, you focus on per-capita happiness; if you do, you focus on meeting a minimum threshold of happiness for everyone.

I don't see how you couldn't value other people existing – I think they have just as much of a right to experience the universe as I do.


There's a vast chasm between "other people deserve to exist" and "we should 100x our population in order to increase the marginal happiness pool".


Alternately, there isn't.


Has that belief led you to a lifestyle in which you are just barely happier than miserable so that you can lift as many others as you can out of misery?


No, but doing so would be consistent with my beliefs, and I think it would be considered admirable by most people.


In this particular case, it's because the success of an ad-funded service depends on the number of users it has.

If you don't like the repugnant conclusion, you have to change something in the conditions of the environment so that it stops being true. Arguing against it and calling your refutation obvious doesn't do anything.


That is an incredibly long bow to draw. Corporations are optimising for their own profits, not anyone's happiness.


I agree. The math that applies to corporate profits is not the same that should apply for human happiness.

But we have to acknowledge that the weird philosophical thought experiment that can't possibly convince anyone except weird philosophers turned out to be convincing to other entities after all.

Compare the trolley problem, a famous thought experiment that people used to laugh at, up until a couple of years ago, when suddenly important people began to ask important questions like "should we relax the safety standards for potentially life-saving vaccines" and "how much larger than Y does X need to be so that preventing X children from becoming functionally illiterate is worth the price of Y dead children".


First, the phrasing is confusing, because it's not clear whether people with very low happiness measured in terms of N are what we consider unhappy/sad, which is actually negative utility. I believe with this measure, positive N means someone is more happy than they are unhappy.

Second, what's "obvious to everyone" is just based on how you're phrasing the question. If you suggested to people it would be better if the population were just one deliriously happy person with N=50, vs 5 happy people with N=10.1, people would say obviously it would be better to spread the wealth and increase overall happiness.


The problem is that the "repugnant conclusion" is a matter of definitions. A moral theory is (basically) freely chosen: you can change the definitions whenever you like.

Not so for B2C SaaS. The utilities are always measured in dollars and they always aggregate by simple addition. You can't simply redefine the problem away by changing the economic assumptions, because they exist in physical space and not in theory space.


I've never understood this problem. To me, it seems that since you've defined a minimum "worth living" amount of happiness and unbounded population, it makes complete sense that the answer would be that it is better to have lots of people whose life is worth living rather than fewer. Is it not tautological?

Like it seems like you have to take "worth living" seriously, since that is the element that is doing all the work. If it's worth living, you've factored in everything that matters already.


If you pack the whole problem into a definition of "worth living", then you're right. But the premise is that there is a range from extreme misery through neutral through extremely happy. The repugnant conclusion is that it is better to have many people in a state that is barely above neutral.


I'm not the one packing it, the setup of the problem does it. "Barely above neutral" means you've picked an acceptable state. And then we are supposed to consider that acceptable state "repugnant"?


There's a comparison. If the scale goes from -100 to +100, the conclusion is that if we have 8 billion people in the world with average happiness of +10, it is better to immiserate them in order to have 80 billion with average happiness +1.01.

It's not that the acceptable state of 1.01 is repugnant, it's that the conclusion seems counterintuitive and ethically problematic to many people, as it suggests that we should prefer creating a massive population of people who are barely happy over a smaller population of people who are very happy.
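
A throwaway Python check of the numbers above:

    print(8e9 * 10.0)    # 8.00e10: total happiness in the current world
    print(80e9 * 1.01)   # 8.08e10: total happiness after 10x-ing the population

The total rises about 1%, so total utilitarianism ranks the second world higher, and that ranking is the step most people balk at.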


I guess I just don't understand how if your axioms are 1) X is an acceptable level of happiness and 2) more people are better than fewer it is in any way surprising or problematic to end up with infinite people at happiness X.

Perhaps people don't see that (2) is a part of the premise?


It's more that, after seeing the result of starting with those premises, they don't like the two premises anymore. It would be like me really liking eating potato chips all day, right up until the point that I discovered they had a lot of adverse health effects. I might no longer like eating them as much.


Because 1 is not one of the axioms. The axioms are 1) There is a range of experience between worst possible misery and best possible happiness and 2) more people who are just barely happy is better than fewer people who are much happier.

I don't understand why you're insisting on a binary distinction of acceptable vs. not acceptable. With that assumption there is no repugnant conclusion.


1 is one of the axioms because a binary cutoff is built into the premise.


I may have taken you a little too literally when you wrote that you didn't understand the problem. Perhaps what you're saying is that the conclusion is not repugnant to you and that the conclusion is neither counterintuitive nor ethically problematic.

Consequently you believe that it is better for a large number of people to exist in a state barely better than misery than for a smaller number of people to experience a greater degree of happiness.

Fair enough.


I suppose that is a fair characterization. I would say that I still think it's tautological. Obviously it's a synthetic situation that involves infinity, so real-world applications are difficult to evaluate.

But I just don't get why people see it as an ethical dilemma – the conclusion is a perfectly sensible outcome of the setup. The conclusion is just a restatement of the premise – a maximization of population over a maximization of happiness. That's why it seems tautological to me: the math of it is perfunctory and reveals nothing. If you cared about maximizing happiness more than population you would have to modify the setup. The trade-off is built into the premise.


> This is the conclusion of utilitarianism, where if you have N people each with 10 happiness, well then, it would be better to have 10N people with 1.1 happiness, or 100N people with 0.111 happiness, until you have infinite people with barely any happiness

1) Population isn't infinite; you can't continue this for too long

2) Your assumption completely depends on how costly it is to add +1 happiness versus +1 user, which you don't even mention. And these costs are not fixed; they increase, so even if it is cheaper to add +1 user in the beginning, it will not continue to be cheaper indefinitely

So, nothing is preventing you from increasing happiness at the same time you increase users.


I really don't see the issue with your happiness split. You have 10 people, and they're all equally unhappy.

This is perfect, because now they are all equally incentivized to do something about it. They're motivated to work together and collaborate for change.

If you do any other split, where some people will be very happy and others very unhappy, you've now created a certain category of people who are incentivized to maintain the current system and repress any desire for change from the unhappy people.


Every time I've engaged in debate over this, it always comes down to believing that the world is zero sum and there is a limited amount of "happiness" that can be distributed.

That may be true for some things, but for many decisions it is not true.

There is enough food to feed everyone if we choose to distribute it properly. There is enough housing to house everyone. etc. etc.

There may not be enough cardiologists or Dali originals ...


> infinite people with barely any happiness

That reminds me of the SMBC "Existifier" comic, which satirizes the idea that merely helping something exist is morally positive.

https://www.smbc-comics.com/comic/existence


AKA the Repugnantly Ignorant in the Human-Ways Nerd's Idea of Ethics conclusion!


There is a minimum happiness threshold mH. We can increase population P until happiness H reaches mH, give or take some depending on how close you want to get to mH.
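
A toy Python sketch of that rule, assuming (hypothetically) that per-person happiness falls as a fixed resource pool is split among more people:

    # Hypothetical model: happiness is an equal share of fixed resources.
    RESOURCES = 1000.0
    m_h = 0.5                        # minimum acceptable happiness threshold

    def happiness(p):
        return RESOURCES / p

    p = 1
    while happiness(p + 1) >= m_h:   # grow until one more person would cross m_h
        p += 1
    print(p, happiness(p))           # 2000 people, each at exactly 0.5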



