The underlying assumption is that Bentham is a true act utilitarian yet simultaneously has 10 pounds in his pocket that he can stand to lose without much harm. If he truly were an act utilitarian, the utility of the 10 pounds remaining in Bentham's possession must be so high that it outweighs the mugger losing his finger; otherwise Bentham would have already spent it on something similarly utility-maximizing. Clearly that 10 pounds was already destined to maximize utility, such as by staving off Bentham's hunger and avoiding his own death or the death of others.
Meanwhile the utility of the mugger's finger is questionable. The pain of losing the finger is the only real cost. If he is just a petty criminal, the loss of his finger will probably reduce his ability to commit crimes and prevent him from inflicting as much suffering on others as he otherwise would have. Maybe losing his finger actually increases utility.
Bentham: "I'm sorry Mr. Mugger but I am on my way to spend this 10 pounds on a supply of fever medication for the orphanage and I am afraid that if I don't procure the medicine, several children will die or suffer fever madness. So when faced with calculating the utility of this situation I must weigh your finger against the lives of these children. Good day. And if the experience of cutting your finger off makes you question your own deontological beliefs, feel free to call upon me for some tutoring on the philosophy of Act Utilitarianism."
Any other scenario and Bentham clearly isn't a true Act Utilitarian and would just tell the Mugger to shove his finger up his ass for all Bentham cares. Either strictly apply the rules or don't apply them at all.
It's for this reason that I find all forms of Utilitarianism less edifying than I would hope. They can put intuitively moral choices on a clear, explicit footing, but they produce counterintuitive results in less clear cases. Those less clear cases are precisely what you'd want a Utilitarian system to solve, since you don't really need to make much effort to justify clear cases.
They tend to look best on the moral equivalent of PowerPoint slides. But if you look beyond toy examples, nearly everything is too complex to believe that you've really formed a decent model of the situation. Without that model you're stuck where you were without the Utilitarian principles.
It's fun to argue about but I don't think it makes for a pragmatic moral system. And it's easily subverted by people claiming to present a moral case that is in fact incomplete, leading to abhorrent conclusions that they feel rigorously bound by.
> They tend to look best on the moral equivalent of PowerPoint slides.
This is an interesting formulation; when you put it that way, I imagine most readers have seen some 'moral powerpoint slides' that fall apart on closer examination.
I get what you mean. Whenever I read one of these scenarios that tries to show some absurd conclusion about a moral theory like utilitarianism, it makes me wonder if there is something about the outlandishness of the situation that makes it hard for our mental heuristics to work properly. In this case it's the idea that Bentham is a perfect act utilitarian, that the mugger will 100% follow through on his promise, that it won't affect anything else in the future besides the immediate suffering etc.
That said, I am doing my best to come up with an example that avoids the problem you mention. If we can imagine a utopian society where everyone is a perfect act utilitarian except the mugger, and all resources are distributed in a totally fair way such that any $10 will buy much less utility than saving a finger, the mugger's tactic would be harder to avoid.
I think the problem I still have with this is that it's basically saying that it's possible for a jerk to take advantage of a bunch of nice people, which isn't that interesting of a conclusion.
> Meanwhile the utility of the mugger's finger is questionable. The pain of losing the finger is the only real cost. If he is just a petty criminal, the loss of his finger will probably reduce his ability to commit crimes and prevent him from inflicting as much suffering on others as he otherwise would have. Maybe losing his finger actually increases utility.
Yes, the mugger removing his own finger, in Bentham's words, would "prevent the happening of mischief, pain, evil, or unhappiness", assuming the mugger is going to continue a life of crime where fingers help him. If the mugger were a pediatric surgeon performing a one-time theft (and we can trust him to never do it again) to get some quick cash on his way to work, it might work. It doesn't resolve what the money was originally for, but at least that finger could be as important as your medicine.
Yeah, but now we are in rather hilarious territory. Act Utilitarians can be taken advantage of by deontologist pediatric surgeons, but only once per surgeon per utilitarian. Much less of a gotcha than the original formulation of Bentham's Mugging.
The problem here isn't with the main character's moral philosophy, but with his decision theory. He'd be dealing with exactly the same predicament if the mugger were threatening to harm him.
The solution is indeed "don't give in to muggers", but it's possible to define this in a workable way. Suppose the mugger can choose between A (don't try to mug Bentham) or forcing Bentham to choose between B (give in) or C (don't give in). A is the best outcome for Bentham, B the best outcome for the mugger, and C the worst for both. The mugger, therefore, is only incentivized to force the choice if he expects Bentham to go for B; if he expects Bentham to go for C, then it's in his interest to choose A. Bentham, therefore, should have a policy of always choosing C, if it's worse for the mugger than A; if the mugger knows this and responds to incentives (as we see him doing in the story), then he'll choose A, and Bentham wins.
And none of this has anything to do with utilitarianism, except in the respect that utilitarianism requires you to make decisions about which outcomes you want to try to get, just like any other human endeavor.
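To make that concrete, here's a toy sketch of the A/B/C game (the payoff numbers are entirely made up for illustration; nothing in the story fixes them): once Bentham is known to always pick C, backward induction over the little game tree says the mugger's best move is A.

```python
# Toy model of the mugging game, with made-up payoff numbers.
# Outcomes: "A" = mugger doesn't try, "B" = Bentham gives in, "C" = Bentham refuses.
# Each outcome maps to (bentham_payoff, mugger_payoff).
PAYOFFS = {
    "A": (0, 0),      # no mugging: the status quo
    "B": (-10, 10),   # Bentham hands over £10, the mugger gains £10
    "C": (-1, -100),  # Bentham feels bad, the mugger loses a finger
}

def mugger_choice(benthams_policy):
    """The mugger anticipates Bentham's pre-committed policy ("B" or "C")
    and only forces the choice if that beats walking away ("A")."""
    payoff_if_mug = PAYOFFS[benthams_policy][1]
    payoff_if_walk = PAYOFFS["A"][1]
    return benthams_policy if payoff_if_mug > payoff_if_walk else "A"

for policy in ("B", "C"):
    outcome = mugger_choice(policy)
    print(f"Bentham commits to {policy!r} -> outcome {outcome!r}, "
          f"(Bentham, mugger) payoffs = {PAYOFFS[outcome]}")
# Committing to "C" makes the mugger choose "A", which is Bentham's best outcome.
```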
It does have to do with utilitarianism — if you change the mugger to harming Bentham, the situation is different. In that situation, many other reasonable moral theories would agree with utilitarianism.
In the original situation, where the mugger is harming themselves, the critique is that utilitarians are required to treat their own interests as exactly the same as other people's interests. It doesn't matter if someone is harming themselves in order to provoke some action from you; if your action prevents that harm, you are obligated to do that action (even if you suffer because of it).
Yes, the point of the GP comment is exactly this: if Bentham becomes an agent that goes for C, he also explicitly discourages the mugger from being an agent that would cut off their fingers for a couple of bucks.
Notice that what Bentham is altering is their strategy and not their utility. If they could spend 10 dollars to treat gangrene and save the fingers, they would do it. It's not clear many other morality systems would be as insistent on this as utilitarianism, because practitioners of other moralities curiously form epicycles defending why the status quo is fine anyway, how dare you imply I'm worse at morality.
> if Bentham becomes an agent that goes for C, he also explicitly discourages the mugger
How is this different from saying that if Bentham decides to not adhere to utilitarianism, he is no longer vulnerable to such a mugging? If Bentham always responds C, even when actually confronted with such a scenario (the mugger was not deterred by Bentham's claim), then Bentham is not a utilitarian.
In other words, the GP is saying: "if Bentham doesn't always maximize the good, he is no longer subject to an agent who can abuse people who always maximize the good." But that is exactly the point -- that utilitarianism is uniquely vulnerable in this manner.
My wording is wrong, because it sounds like I'm saying that Bentham is adopting the policy ad hoc. A better way to state this is that Bentham starts out as an agent that does not give in to brinksmanship-type games, because a world where brinksmanship-type games exist is a substantially worse world than one where they don't (net-negative situations will end up happening, it takes effort to set up brinksmanship, and good actions do not benefit more from brinksmanship). It's different because by adopting C, Bentham prevents the mugger from mugging, which is a better world than one where the mugger goes on mugging. I don't see any contradiction in utilitarianism here.
If we're in a world where the premise of the thought experiment is not true and the "mugging" is net positive, then calling it mugging is disingenuous; that's just allocating resources more optimally, and it's more equivalent to the conversation:
"hi bentham i have a cool plan for 10 dollars let me tell you what it is"
"okay i have heard your plan and i think it's a good idea here's 10 bucks"
Except that by using the word "mugging" you are implying violence, so that people view the interaction as more absurd than it actually is.
> It's different because by adopting C, Bentham prevents the mugger from mugging, which is a better world than one where the mugger goes on mugging.
This assumption is wrong. You are assuming that the mugger is also a utilitarian, so he will do a cost-benefit analysis and thus decide not to mug. But that is not necessarily true.
If the mugger mugs anyway, despite mugging being "suboptimal," Bentham ends up in a situation where he has exactly the same choice: either lose $10, or have the mugger cut off their own finger. If Bentham is to follow (act-)utilitarianism precisely, he must pay the mugger $10. (Act-)utilitarianism says that the only thing that matters is the utility of the outcome of your action. It does not matter that Bentham previously committed to not paying the mugger; the fact is, after the mugger "threatens" Bentham, if Bentham does not pay the mugger, total utility is less than if he does pay. So Bentham must break his promise, despite "committing" not to. (Assuming this is some one-off instance and not some kind of iterated game; iteration makes things more complicated.)
If everyone were a utilitarian, then there would be far fewer objections to utilitarianism. (E.g. instead of asking people in wealthy countries to donate 90% of their income to charity, we could probably get away with ~5-10%.) Bentham's mugging is a specific objection to utilitarianism that shows how utilitarians are vulnerable to manipulation by people who do not subscribe to utilitarianism.
Also, to be precise, Bentham's mugging does not show a contradiction. It's showing an unintuitive consequence of utilitarianism. That's not the same thing as a contradiction. (If you want to see a contradiction, Stocker has a different critique: https://www.jstor.org/stable/2025782.)
Except that eventually the mugger will run out of fingers (and/or reattaching them will eventually start not working out), so the mugger will be forced to stop mugging. Well, ok, they could start cutting off toes, or threaten some other form of bodily mutilation.
But regardless, giving in to the mugger enables the mugger to continue mugging indefinitely. Not giving in -- assuming the mugger goes through with whatever self-mutilation they've threatened -- will eventually cause the mugger to stop mugging. This would be a net positive, better than allowing the mugger to mug indefinitely.
On top of that, I was disappointed that, in the story, Bentham does actually bring up the idea that capitulating could encourage copycats to run similar schemes, but this rationale for not cooperating is hand-waved away. This is pretty standard "don't negotiate with terrorists" stuff. Giving in just tells the mugger -- and other potential muggers -- that this strategy works. Surely it's more utilitarian to stamp this out at the source, even if it costs the original mugger some fingers.
(But I guess this is in part the point of Bentham being an act utilitarian in the first encounter, as he wouldn't consider the larger implications of his actions, just the specific, immediate result of the action in front of him.)
> none of this has anything to do with utilitarianism
"Always go for C (or any strategy)" is not in general a utilitarian strategy, so the mugger would not expect Bentham to employ it.
Your argument assumes that the characters have perfect knowledge, but the point of the parody is that utilitarian choices can change as more information is revealed.
Yes, the mugger could have said something like "if I were to promise to cut off my finger unless you gave me £10, would you do it?", Bentham could have followed up with "if you knew I would reply no to that question, would you make that promise?", the mugger could have replied "no," Bentham could have responded "In that case, no", and the mugger would have walked away. But Bentham doesn't have all the information until he is faced with the loss of a finger which he can prevent by giving up £10. Bentham is obliged to do so, as it maximises the overall good at that (unfortunate) point.
The idea that Bentham can be "trapped" in a situation where he is obliged to cause some small harm to himself in order to prevent a greater harm is the parody of utilitarianism which is at the heart of the story.
The mugger in the story is essentially contriving a situation that turns him into a utility monster. He is arranging that he will derive more benefit from the money than any other plausible application -- by imposing a massive harm on himself if he doesn't get the money. It's relatively straightforward to vary the threat to adjust incentives as necessary -- e.g. the binding deal with the thug later in the story.
One related way to think about this is that, if (non-utilitarian) people are ever able to change their own utility functions, other people's utilitarianism gives them an incentive to do so, because it can make them more likely to get what they want.
For example, if you can learn to feel sadder about something, other people who want to minimize your sadness will acquire an incentive to help you avoid that thing, even at some cost to themselves.
In many moral intuitions, you find the world as it is and then act on it in some way, without other people strategizing about, or being incentivized by, your moral reasoning. But when other people can do those things, you can get very weird outcomes.
The mugger cuts off his own fingers when a different utilitarian doesn't pay him. Given that, and given that he's right back at it after surgery, I don't think it's so clear that he'll "respond to incentives" and stop mugging people if people stop giving in.
After all, one of the premises here is that the mugger is a deontologist. He doesn't care about outcomes.
> Fair enough. But, even so, I worry that giving you the money would set a bad precedent, encouraging copycats to run similar schemes.
I don’t understand how it was logically defeated with escalation as in the story. Would it be wrong for a Utilitarian to continue arguing against this precedent, saying that the decision to be mugged removes overall Utility because now anyone who can be sufficiently convincing can also effectively steal money from Utilitarians? (I guess money changing hands is presumed net neutral in the story?)
As an act utilitarian, the utilitarian was trying to evaluate the consequences of the act, not a rule that could be followed in multiple instances. Therefore, credibly claiming that the act will be a secret removes any consideration of motivating other people or being judged by other people, etc. (Missing from the story was a promise by the mugger not to repeat this with the utilitarian every day).
The mugger is a Deontologist in this scenario and therefore does not lie. If the utilitarian couldn't trust the mugger's promises, the whole scenario would fall apart as they couldn't trust the mugger's promise to cut off their finger.
If we're assuming unforgeable moral-method pins I don't think we should expect intuitions generated in this sort of thought experiment to be a good guide to what we should actually think or do.
No, the mugger getting the money counts as negative. "Now, as an Act Utilitarian, I would happily part with ten pounds if I were convinced that you would bring more utility to the world with that money than I would. The trouble is I know I would put the money to good use myself – whereas you, I surmise, would not."
No, it doesn't. Under utilitarianism, people having money counts as good no matter which person it is, because whoever holds it can put it to use.
Utilitarianism does not benefit from covert insertions of specific moral carve-outs. Surmisal does not impact outcomes, only predictions of outcomes. It is not appropriate to make judgments based on surmisal, because utilitarianism can only ever look backward at effects to justify actions post-hoc. This is the primary flaw with utilitarianism as a moral philosophy.
I'm also confused why they drop this point. I don't give in to this kind of threat because I expect overall a policy of giving in leads to worse outcomes.
Act utilitarians specifically don't believe in evaluating the overall consequences of a policy. Rule utilitarians do that. That is, in fact, the major difference between the two.
Good point, I phrased it poorly. Because of the effects of the specific action, I think an act utilitarian should still refuse to be mugged in this case.
The point here is largely that reality (at our level) is not something which can be simply solved by the application of a couple of rules, from which Right Action will thenceforth necessarily flow.
Reality is a big, complex ball of Stuff, and any attempt to impress morality upon it will be met with many corner cases which produce unwanted results unless we spend our time dealing with what initially look like tiny details.
I'm sure you can find a compromise in the middle of "Mostly follow some vague rules, but when they lead you to what seem to be negative outcomes think about whether it's because you don't enjoy doing the moral thing, or if it's because actually it's led somewhere unpleasant and you need a new rule for this situation."
It's even worse. People are exceptionally good at rationalising whatever it is they want to do and turning it into convenient rules that sound noble and high-minded.
It would be a much less frustrating world if the robber who takes your money would just tell you "hi, I'm taking your money because I can and because I like it" instead of earnestly believing that he deserves it for some complicated reason (pick your example).
I used to be a utilitarian, but it made me morally repulsive, which pushed my friends away from utilitarianism. I had to stop since this had negative utility.
More seriously, any moral theory that strives too much for abstract purity will be vulnerable to adversarial inputs. A blunt and basic theory (common sense) is sufficient to cover all practical situations and will prevent you from looking very dumb by endorsing a fancy theory that fails catastrophically in the real world [1]
SBF did have a very unusual discounting policy, namely "no discounting", in fairness. I'm not aware of anyone other than SBF who bites the "keep double-or-nothing a 51% probability gamble forever, for infinite expected utility and probability 1 of going bust" bullet in favour of keeping going forever. (SBF espoused this policy in March 2022, if I recall correctly, on Conversations with Tyler.)
One main idea of EA is that you should make a lot of money in order to give it away. The obvious problem is that this can serve as a convenient moral justification for greed. SBF explicitly endorsed EA, Will MacAskill vouched for him, and I understand he was widely admired in EA circles. And he turned out to be the perfect incarnation of this problem, admitting himself he just used EA as a thin veil.
What would you count as evidence that effective altruism fails?
Fun fact: I asked ChatGPT to help me translate that to French for private use, pasting there the first part of the conversation.
It started answering, then within seconds my question was replaced with "This content may violate our content policy. If you believe this to be in error, please submit your feedback — your input will aid our research in this area."
Then, seconds after, the still-appearing answer was replaced with the same message.
Doh! The content filter got tripped because "obviously" it's not a philosophical thought experiment about utilitarianism but an evil text about mugging someone, which is an illegal activity. What a time to be alive!
> Here's the thing: there is, clearly, more utility in me keeping my finger than in you keeping your measly ten pounds.
How is this clear? This is one of the things I find strange about academic philosophy. For all the claims about trying to get at a more rigorous understanding of knowledge, the foundation at the end of the day seems to just be human intuition. You read about something like the Chinese Room or Mary’s Room thought experiments, that seem to appeal to immediate human reactions. “We clearly wouldn’t say…” or “No one would think…”
It feels like an act of obfuscation. People realize the fragility of relying on human intuition, and react by trying to dress human intuition up with extreme complexities in order to trick themselves into thinking they’re not relying on human intuition just as much as everyone else.
Professional philosophers understand that many arguments rely on intuition. But they need intuition to create basic premises. Otherwise, if you have no "axioms" in your system of logic, you cannot derive any sentences.
Also, moral philosophy deals with what is right and what is wrong. These are inherently fuzzy notions and they likely require some level of intuitive reasoning. ("It is clearly wrong to kill an innocent person.") I would be extremely surprised if someone could formally define what is right and wrong in a way that captures human intuition.
It's also not worth debating philosophy with people who will argue that $10 is not clearly worth less than a finger. (And if you don't believe that, then we can consider the case with two fingers, or three, or a whole hand, etc.).
> It's also not worth debating philosophy with people who will argue that $10 is not clearly worth less than a finger.
Some of these arguments feel like the equivalent of spending billions to create a state of the art fighter plane and not realizing they forgot to put an engine inside of it.
It’s not $10 vs. “a finger,” it’s $10 vs. the finger of someone who goes about using their fingers to threaten people to give them money. If the difference isn’t immediately obvious, I think it’s time to step back from complex frameworks and take a look at failures with common intuition.
The point is, to a utilitarian, it’s a finger, because part of the setup is that the “mugger” won’t use their finger for bad things in the future.
Maybe not part of this specific dialogue, where the mugger repeatedly asks for rhetorical reasons. But in a case where there is only a single instance of a mugging, the assumption is that the mugger will only mug once.
I think the point here is that it's subverting and redirecting Bentham's own utilitarianism against itself. How does the utilitarian decide which one of those has more utility? That's a rhetorical question and it's sort of immaterial how that question gets answered, because regardless of how they decide, the dialogue is structurally describing how utilitarianism is vulnerable to exploitation of this type.
If I offered to pay £10 for each fresh human finger brought to me, what moral system would say the right thing to do is cut off as many fingers as possible‽ Would you sell your own fingers for £10 each? I think it's very fair to say that any person ever would value their own fingers more than £10.
I used to feel just like that. Then I learned that academic philosophy studies this phenomenon as "metaethics". There are arguments such as yours that would be considered "moral skepticism". Read up on those (or watch a course like https://youtu.be/g3f-Lfm8KNg); I think you'll find these arguments agreeable.
The problem here stems from trying to have some universal utility values for acts. You can't say cutting off a finger is fundamentally worse than losing 10 pounds, even if it frequently would be. I wouldn't give up one of my fingers for 10 pounds, and I think most sane people wouldn't either, but here the mugger is willing to do that. So in this particular instance, the mugger is valuing the utility of keeping his finger at 10 pounds, and thus the decision on whether or not to give it to him is a wash. The moment you start dictating what the utility values are of consequences for other people you get absurd outcomes (e.g. some of you may die, but it's a sacrifice I'm willing to make).
He can say it's worth whatever he wants, but the fact is he is willing to cut off his finger in an effort to get £10, thus £10 must provide equal or greater utility.
> The finger is clearly a case where the utility disparity is obvious.
This is the erroneous assumption that leads to the false conclusion. There's no such thing as an obvious utility disparity. It's a decent heuristic that works fine in the real world, but in this imaginary scenario where a person would actually be willing to cut off their own fingers simply because they have not gained £10, it no longer holds true.
He's not willing to do it. He explicitly does not want to do it and is only going to do it in order to lower overall utility by more than the £10 can provide to either party. That is, it only makes sense if he values his finger at more than £10.
We're dealing with two different definitions of "willing." You're using willing to mean a desire to do it. I'm using willing to mean he will choose to do it. It's like I desire to quit my job, but I don't actually quit, because I'm not willing. I desire to be fit, but I'm not willing to give up dessert.
It doesn't matter how much the mugger claims he doesn't want to lose his fingers, the fact that he will nevertheless choose a path that causes him to lose his fingers of his own volition means that he has set the utility as being lower.
Yes, the premise only makes sense if he values his finger more than £10, but he demonstrably does not, ergo the premise is contradicted.
The most pressing problem facing utilitarians has never been choosing between principled vs. consequentialist utilitarianism. It's how to take a vector of utilities, and turn it into a single utility.
What function do I use? Do I sum them, is it the mean, how about root-mean-squared? Why does your chosen function make more sense than the other options? Can I perform arithmetic on utilities from two different agents, isn't that like adding grams and meters?
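To show how much hangs on that choice, here's a quick sketch with invented utility numbers: the same pairs of outcomes come out ranked differently depending on whether you sum, average, or take the root-mean-square.

```python
# Invented utility vectors, purely to show that the choice of aggregation
# function changes which outcome an aggregating utilitarian should prefer.
from math import sqrt

def total(us): return sum(us)
def mean(us): return sum(us) / len(us)
def rms(us): return sqrt(sum(u * u for u in us) / len(us))

# Same number of people, different distributions: sum and RMS disagree.
concentrated = [10, 0]   # one person very happy, one at zero
spread = [6, 6]          # both moderately happy
print(total(concentrated), total(spread))  # 10 vs 12 -> prefer 'spread'
print(rms(concentrated), rms(spread))      # ~7.07 vs 6 -> prefer 'concentrated'

# Different population sizes: sum (total view) and mean (average view) disagree.
small_happy = [5, 5]         # two people at utility 5
large_meh = [3, 3, 3, 3]     # four people at utility 3
print(total(small_happy), total(large_meh))  # 10 vs 12 -> prefer 'large_meh'
print(mean(small_happy), mean(large_meh))    # 5 vs 3   -> prefer 'small_happy'
```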
> It's how to take a vector of utilities, and turn it into a single utility.
Not just "how", but whether doing such a thing is even possible at all. And even that doesn't push the problem back far enough: first the utilitarian has to assume that utilities, treated as real numbers, are even measurable or well-defined at all.
It requires a total ordering and an aggregation function (and to be useful in the real world rather than purely abstract, a reliable and predictive measuring mechanism, but that's a different issue.) I’m pretty sure (intuitively, haven’t considered a formal argument) if both exist, then there is a representation where utilities can be represented as (a subset of) the reals.
> It requires a total ordering and an aggregation function
Yes. And note that this is true even for just a single person's utilities, i.e., without even getting into the issues of interpersonal comparison. For example, a single person, just to compute their own overall utility (never mind taking into account other people's), has to be able to aggregate their utilities for different things.
> if both exist, then there is a representation where utilities can be represented as (a subset of) the reals.
Yes. In more technical language, total ordering plus an aggregation function means utilities have to form an ordered field, and for any reasonable treatment that field has to have the least upper bound property (i.e., any nonempty subset that is bounded above has a least upper bound that is also in the field), and the reals are, up to isomorphism, the only ordered field that satisfies those properties.
Can you explain the "for any reasonable treatment" part here? How is the least upper bound property (or something equivalent to it) getting used in quantitative ethical theories?
Because even if we assume that we can assign numbers to a single person's utility for many different things, we cannot assume that the ratios between these utilities will be integers or rational numbers. We have to consider the possibility of utility ratios that are irrational numbers. In treatments like that of Von Neumann, where numerical utilities are assigned meaning by asking hypothetical agents to accept or decline a potentially infinite series of bets, this is where the least upper bound requirement comes in: the final utility ratio from the infinite series of bets will be the least upper bound of the series of ratio estimates that arise from the bets.
That also raises a different question about people's introspection and ability to accurately answer questions about their utility and preferences, but that's a bit far afield from the mathematical question.
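If it helps, here is a rough sketch of the bet-based elicitation I mean. The agent's "true" indifference point is a stand-in value, deliberately chosen to be irrational, so you can watch the rational estimates produced by successive bets creep up toward it as a least upper bound.

```python
# Minimal sketch of Von Neumann–Morgenstern-style utility elicitation.
# Convention: outcomes A < B < C with u(A) = 0 and u(C) = 1; we want u(B).
# The "true" indifference point below is a stand-in; in reality you only
# ever observe accept/decline answers to individual bets.
TRUE_U_B = 2 ** 0.5 - 1   # ~0.4142..., irrational, purely for illustration

def prefers_lottery(p):
    """Does the agent prefer 'C with probability p, else A' over B for sure?
    Under expected utility, yes exactly when p > u(B)."""
    return p > TRUE_U_B

lo, hi = 0.0, 1.0
for _ in range(30):           # each bet halves the remaining interval
    p = (lo + hi) / 2
    if prefers_lottery(p):
        hi = p                # u(B) must lie below p
    else:
        lo = p                # u(B) must be at least p
# Every 'lo' along the way is a rational (dyadic) estimate; u(B) itself is
# the least upper bound of all such estimates as the series of bets goes on.
print(f"u(B) is pinned between {lo:.9f} and {hi:.9f}")
```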
> Can I perform arithmetic on utilities from two different agents?
This is called "interpersonal utility comparison", and there's a ton of literature on it. Traditionally utilitarians have accepted it, and without it ideas like "sum the utility across everyone" don't make sense.
I think the people you're responding to are taking the modus ponens for your modus tollens, saying that interpersonal utility comparisons are not obviously possible or meaningful, and so aggregating utilities across persons is also not obviously possible or meaningful.
> It's how to take a vector of utilities, and turn it into a single utility.
I mean, that's a problem that lots of people skip to in utilitarianism, but the bigger problem is that utility isn't really measurable in a way that produces a meaningful "vector of utilities" in the first place.
Utilitarianism is supposed to be a strawman theory that you teach in the first week of class in order to show the flaws and build a real theory of ethics the remaining 14 weeks of the semester. SMDH at all these people who didn't get that basic point.
Utilitarianism has always been more about government policy, so yes it's largely inappropriate as the foundation for a system of personal ethics, at least in its basic forms.
But I'm unaware of any university class that has ever attempted to "build a real system of ethics".
That's not what moral philosophy classes do. The whole point is every moral system encounters major showstopper flaws, and how to reconcile those is one of the great unsolved problems of humanity.
It’s amazing how contrived and detached from reality the counterexamples for utilitarianism have to be in order to attack even the most basic forms of it. It really makes you think that utilitarianism is a solid principle.
But aren’t the counterexamples largely detached from reality because in reality people adopt other ethical systems/principles to avoid extreme outcomes?
I’m by no means opposing a general morality of optimising for the greater good, and I think on the whole utilitarianism, like other ideological/ethical systems, gets critiqued in comparison to an impossible standard of perfection. My sense is there are some more basic principles that underpin the success and pragmatism of any ethical/ideological system, and that these principles, to your implied point I think, would safeguard utilitarianism as well as other systems.
I think this is implied in the critique some have against utilitarianism, namely that it needs to introduce weighting in order to adjust the morality towards palatable/sensible means and outcomes. But I don’t think any system could avoid those same coping mechanisms.
People do adopt other systems, feel that utilitarianism must be “wrong” for whatever reason, get research grants from people who agree, and produce incredibly unimpressive work.
What basic principles are you thinking of? Even more basic than hedonism, consequentialism, etc.?
Weighing is just one of the critiques against utilitarianism, and it’s a valid one. Maybe the extreme happiness of one person isn’t worth the mild suffering of 5 people. But pretending that this upends the entirety of this moral framework, and not just one of its building blocks (basically the aggregation function), is kinda silly.
Yeah I think we agree that utilitarianism is held to an unreasonable standard. I think contributing to that is some advocates suggesting it’s a solid utopian model to guide all decisions without further refinement and nuance (and I don’t think this is what you’re arguing).
And because it hasn’t been in practice widely adopted in history (unlike e.g. liberalism or Catholicism) the rubber hasn’t hit the road to allow us to understand how it would work practically. I think some other good ideas suffer the same problem/preemptive attack. Indeed any social progress seems to be attacked by a sort of whataboutism or false slippery slope attack.
To your question on basic principles, I think they’re caught in exercises like the trolley problem or the psychological experiments of the 60s: people on the whole don’t want to be responsible for causing harm, they don’t want to see people in their influence of control harmed, they don’t want to feel bad about themselves, they don’t want to be judged/punished by others - even if convinced it’s for the greater good. I’m not saying some people won’t take a fiercely rational or ideological lens, but on the whole people are influenced by some common psychology. And I think actually this is probably good: as much as it hinders “utopian” ideas being realised I think it ensures humanity moderates ideology.
I think without this a strict utilitarianism, eg a robotic approach, would lead to kinds of harm that I wouldn’t support, even if justified to some sort of ends that itself is subjective. But I think with it, an elevation of the greater good would probably be better than many approaches today. For a practical example I think we should permit more people to consensually enrol in promising but risky cancer research and treatments.
To reiterate that same point I think that in practice those factors would probably allow most systems to be successful, and some/many might be better than what we have now.
This is mostly an amusing logic puzzle of the sort Lewis Carroll liked to write, but there is an unstated moral here: utilitarianism requires a metric of utility, and it can be gamed by people who are merely paying lip service (at best) to utilitarianism, opening the door, in the worst cases, to Mill's tyranny of the majority. The global news, on any given day, contains several such cases.
This is known as a "reductio ad absurdum" argument, and isn't contrived at all. It's easy to make a general rule that applies in the majority of cases. To test whether a general rule has flaws, and to improve upon a general rule, it must be tested by applying it to edge cases. The same way that you test a datetime library by picking potential edge cases (e.g. Leap Day, dates before 1970, dates between Feb. 1-13 in 1918 in Russia, etc), you test a philosophical theory by seeing what it predicts in potential edge cases.
This also deliberately avoids introducing irrelevant arguments. By framing it as a mugger who wants to gain money for purely selfish reasons, we deliberately exclude complicating factors from the statement.
* The argument could be framed around donating to the Susan G. Komen Foundation, rather than a mugger. With the controversies it has had [0], it could be argued that these donations may or may not increase total utility, but donations to charities are part of the best possible path. However, using the Susan G. Komen Foundation as an example relies on accepting a premise that it isn't using donations appropriately, and makes the argument dependent on whether that is or isn't the case.
* The argument could be framed around allowing tax exemptions for all self-described charitable foundations, with Stichting INGKA Foundation [1], part of the corporate structure that owns IKEA, playing the narrative role of the mugger. The argument would be that the tax exemptions provided to charitable foundations are necessary for bringing about the best outcomes, but that they can be taken advantage of. Here, the argument would depend on whether you view the corporate structure of INGKA as a legitimate charity.
* Even staying with purely hypothetical answers, we could ask if the mugger is going to starve should the mugging be unsuccessful. These could veer into questions of the local economy, food production, and so on, none of which help to test the validity of utilitarianism.
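To stretch the datetime analogy a bit further, the edge-case testing idea looks roughly like this sketch (the cases are just the usual calendar oddities, not anything from the article):

```python
# Rough sketch of edge-case tests for date handling, per the analogy above.
from datetime import date, timedelta

# Leap day: 2024 is a leap year; 1900 is not (divisible by 100 but not 400).
assert date(2024, 2, 28) + timedelta(days=1) == date(2024, 2, 29)
assert date(1900, 2, 28) + timedelta(days=1) == date(1900, 3, 1)

# Dates before 1970: plain date arithmetic doesn't care about the Unix epoch.
assert date(1969, 12, 31) + timedelta(days=1) == date(1970, 1, 1)

# Russia skipped Feb 1-13, 1918 when switching calendars; a proleptic
# Gregorian library (like Python's datetime) ignores that entirely, which is
# exactly the kind of surprise that edge-case testing is meant to surface.
assert (date(1918, 2, 14) - date(1918, 1, 31)).days == 14

print("all edge-case checks passed")
```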
I've heard this described as crafting the least convenient world. That is, whenever there's a question about the hypothetical scenario that would let you avoid an edge case in a theory, update the hypothetical scenario to be the least convenient option. What if the mugger just needs a hug? Nope, too convenient. What if the mugger isn't going to go through with the finger-chopping? Nope, too convenient.
The problem here is that the counterargument is contrived to the point where it is stupid. This article isn't identifying a problem in theory or practice.
In theory a utilitarian is likely comfortable with the in-principle idea that they might need to sacrifice themselves for a greater good. Pointing that out isn't a counterargument against utilitarianism. In practice, no utilitarian would fall for something this dumb. They'd just keep the money and assume (correctly in my view) they missed something in the argument that invalidates the mugger's position. Or, likely, assume the mugger is lying about being an insane deontologist.
> In practice, no utilitarian would fall for something this dumb.
This is the penultimate conclusion of the dialogue as well, that even Bentham would need to admit so many post-hoc deviations from the general rules of Utilitarianism that it ends up being a form of deontology instead. The primary takeaway is then that Utilitarianism works as a rule-of-thumb, but not as an underlying fundamental truth.
No it isn't; the dialog is strawmanning and claims that Bentham would have to abandon utilitarianism.
I'm claiming that the initial scenario where Bentham caves is reasonable, but in practice will never occur. A utilitarian could reasonably believe Bentham's response was correct (I mean, seriously, would you look at someone and refuse to spend $10 to save their finger? You'd be a monster. As the article points out, we're talking literally 1 person). There is no theoretical problem in that scenario. Bentham has maximised utility based on the scenario presented. It was a scenario designed where the clear-cut utility maximisation choice was to sacrifice $10.
The issue is this scenario is an insane hypothetical that cannot occur in practice. There are no deontologists that strict and there are no choices that binary. So we can conclude that in alternate universes that we do not inhabit, utilitarianism would not work because these muggers would end up with all the money. Alright. Case closed. Not a practical problem. The first act plays out, and then the article should end with the conclusion: "if that could happen then utilitarianism would have a problem. But it can't, so oh well. Turns out utilitarianism is a philosophy that works out really equitably in this universe!"
> In practice, no utilitarian would fall for something this dumb.
What you are saying is exactly what the article says, and you are conceding the article's point, which is that nobody actually practices pure utilitarianism.
Do we want to talk about a hypothetical world where deontology was the underlying moral principle? Where, for example, a large agency in charge of approving vaccines decided to delay approval of a life-saving vaccine because, even though they received the information on November 20th, they scheduled the meeting for December 10-12th dammit, and that's when it'll be done? By potentially delaying several months because, instead of using challenge trials to directly assess the safety of a vaccine by exposing willing volunteers to both the supposed cure and the disease, they gave the cure to a couple of tens of thousands of people and just waited until enough of them got sick and died of a disease "that would have got them anyway" to gather enough statistics for safety? Which is definitely good, you see, because no one got directly harmed by said agency, even if many more people in the country were dying of this theoretical disease. [0]
Or, even better, what if distribution of this life saving cure was done based on the deontological concept of fairness? Surely, this wouldn't result in limited and highly demanded vaccines being literally thrown away[1] in the name of equity and where vaccination companies wouldn't need to seek approval for something as simple as increasing doses of vaccines in vials. [2]
You know, just all theoretically, since it would be a terrible shame if any of these things happened in the real world, since this is just one specific scenario and I'm sure I can make up various [3] other [4] ways [5] in which not carefully evaluating the consequences of moral actions would turn out poorly, but hey!
I'm sure glad that utilitarianism isn't being entertained more on the margin, since we already live in the best of all possible moral universes.
(Footnote: I'm not going to justify these citations within this post, because it's pithier this way. I recognize this is not being fully honest and transparent, but I'd be happy to fully defend the inclusion of any of these, if necessary)
It's not at all detached. People are manipulative all the time, including using themselves to reduce overall utility. Emotional blackmail is a name given to one variety of this.
It's not really a reflection on utilitarianism. That's just philosophical ethics, at least in the form that predominates in Anglo-American philosophy departments.
The game of coming up with "counterexamples" to moral theories is fun, but basically stupid. By definition it involves "contriving" cases, however realistic they may seem, which can make whatever preposterous "stipulations" they please. The underlying assumption is that moral theories are somehow like scientific theories in that they are validated by "predicting" the available observational "data", i.e. our moral intuitions, i.e. the social values of the cultural/economic groups we're a part of. Mysteriously, christian conservative scolds engage with philosophy and end up developing something a lot like christian social conservatism, and cosmopolitan liberal scolds come up with something a lot like cosmopolitan social liberalism, despite the fact that both are engaged in this highly scientific form of inquiry. Very odd.
The whole game is also probably largely irrelevant to the kind of stuff Bentham actually cared about, since he mainly wanted to use utilitarianism to guide state policy, and (famously) hard cases make bad law.
I feel like this statement hides something critical, "Here's the thing: there is, clearly, more utility in me keeping my finger than in you keeping your measly ten pounds."
My point is: is that so clear? Or is the utility function being presumed here lacking?
Well the ten pounds still exists either way. You'd have to argue that there's more utility in Bentham owning the £10 than the mugger owning the £10, and that the difference in utility between them is greater than the utility of a finger.
I imagine you could define utility that way, but presumably the mugger could increase the cost (two fingers? an arm?) until the argument works. Also, if you do define a utility function like that (say, "there is more utility in this £10 being mine rather than yours than the utility of your arm") then that's a pretty questionable morality.
The mugger, through no coercion of Bentham, chooses to go down a finger. It is obvious that the mugger has an insane utility function, but it isn't obvious that Bentham letting him act it out is causing a drop in overall utility.
If the mugger doesn't want his own finger, Bentham can choose to trust him that 9 fingers are better than 10. Maybe the mugger is even behaving rationally; maybe the 10th finger has cancer, who knows. As the story illustrates, giving him $10 didn't stop him from losing his finger. There are many factors here that make the situation unclear.
Now imagine that instead of a mugger you have an AI researcher who both believes that AGI will destroy the world and is intent on being in the room when it is created. The self-mugging of an AI safety-ist.