All 10 questions were variants of the trolley problem. The problems were also poorly formed, which is what made answering them difficult, not the actual moral dilemma. For example, in the real trolley problem it is clear that you really only have two options: flip the switch or not; it is hard to imagine there are other alternatives. But in many of these problems, it is hard to believe there is not a better option available, even though the problem states "but the only way..."
Yes, they were quite poorly formed, which renders the study entirely pointless. The idea behind these problems is to collect people's intuitive reactions to simple, clear situations. If the problems are so poorly formed that intuition simply rejects them, the exercise is useless, except as a study in human frustration. (Perhaps that's the point? It would explain the poor UI design.)
Several situations present you with the option of causing one immediate and fairly certain death to prevent several more distant, much more speculative deaths. You can't predict the path of a boulder down a mountain. A boulder that would be stopped by one person would be unlikely to kill five. The man driving the injured people to the hospital actually has little idea how quickly he needs to get them there to save their lives. The certainty presented in the problems is so unreal that they might as well have been posed in an entirely abstract way in the first place.
Yeah, I stopped when it asked me to choose the number of people who would have to be intentionally infected with HIV before the doctor would be obligated to poison the patient/murderer. There were just so many holes (it would break doctor/patient confidentiality, how would the doctor know how many people would get infected, etc.).
I would actually have to disagree. They are different situations cast around the trolley problem. For me, the setting of each situation actually played a large part in the weighting. Trying not to reveal much: in my opinion, the lunch line guy wasn't wrong at all while the box lady was, even though the situations are virtually identical (sacrifice one to save many).
This test uses a variant of the good old "trolley dilemma" (http://en.wikipedia.org/wiki/Trolley_problem) where a trolley/train is headed for five workmen on a track and it asks if you'd pull a lever to divert the trolley onto a track where only one man is working. (This test then goes on to vary that situation somewhat.)
Studies have been run on this in the past and, to my dismay, most people would, in the initial scenario, flick the switch to save five but kill one - immediately becoming murderers rather than bystanders. I suspect this test is trying to tease out what could make people flip-flop from one point of view to the other, since when the switch is replaced with "pushing a fat man off a bridge to block the trolley" the stats have tended to swing the other way.
Involvement in the world isn't voluntary; actions have consequences, some people prefer some consequences to other consequences.
In your case you value maintaining an unsullied self-image over assisting 5 people in mortal peril, and would prefer if other people saw your choice the same way.
But you're right (I've met people who were familiar with the people involved here): the intuition is that the stronger the perceived interpersonal relationship between you and the unlucky bloke, the less likely you are to flick the switch, but teasing out exactly how much relationship is needed to make the moral intuition flip is tricky business.
In your case you value maintaining an unsullied self-image over assisting 5 people in mortal peril, and would prefer if other people saw your choice the same way.
"Maintaining an unsullied self-image" is largely what personal moral codes are about. If I actively killed someone, I would find being a murderer harder to cope with than being a witness to the death of a larger number of people.
I think this is a purely moral standpoint, whereas "killing one to save the five" is the result of people playing a "numbers game." I think this is demonstrated by the opposing results of the "push the fat man off the bridge" dilemma - once you change the mechanism from a lever/switch to actually pushing a dude off a bridge, people's sense of morality comes rushing back.
I would say moral codes are about trying to use your agency to do good, not bad; there's no obvious reason why "doing good" would always and everywhere equate with "easy to cope with".
You claim you're not doing a #s game but I think I can prove that you are, even if you're not yet aware of it.
Consider this alternative scenario: the trolley is hurtling towards the 5 men as before and you've got the option of throwing a switch; what's new is that this time the switch moves the train into an empty railyard, saving the five lives at no one's expense.
The #s game is removed and it's just a case of: do you think yourself morally obligated to throw the switch and save the lives, or not?
This reminds me of a philosophy class I had in college, where we discussed action vs inaction, and the responsibilities of each choice.
I'll skip the lengthy description of all the views, but the short version is that there do exist very compelling arguments that if you choose inaction, you are just as responsible for that outcome as you would be if you chose action.
As applied to the trolley problem, the point is that "take action" vs. "do not take action" is a false dilemma. Choosing not to act does not absolve you of responsibility or remove you from the problem. You may feel otherwise, but it is a rational argument.
to my dismay: You are (in some woolly sense) five times more likely to end up as one of the five in a situation like this than as the one. It is therefore better for you on the whole -- and, symmetrically, better for everyone -- for people in general to choose to flip the switch and kill one person rather than five. It's nice that you value their moral purity ("murderers rather than bystanders"), but I would prefer to live in a world where people tend to do what produces most net benefit rather than one where people tend to safeguard their moral purity, so I remain undismayed. (I'd be dismayed if I thought that "most people" aren't at all troubled by the prospect of being put in such a situation, but that's a separate issue from what they'd do once in it.)
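To make the expected-value point concrete, here's a rough sketch (my numbers and framing, not the parent's) under the illustrative assumption that you're equally likely to be any of the six people on the tracks and that the stated outcomes are certain:

    # Rough expected-value sketch. Assumption (mine, for illustration): you are
    # equally likely to be any of the six people on the tracks, and outcomes are certain.
    p_lone_worker = 1 / 6   # chance you're the one on the side track
    p_in_group = 5 / 6      # chance you're one of the five on the main track

    # Probability that *you* die under each general policy:
    p_die_if_everyone_switches = p_lone_worker * 1 + p_in_group * 0   # ~0.17
    p_die_if_nobody_switches   = p_lone_worker * 0 + p_in_group * 1   # ~0.83

    print(p_die_if_everyone_switches, p_die_if_nobody_switches)

So, not knowing in advance which person you'd turn out to be, the switching policy is the one you'd want everyone else to follow.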
Then again, I think much more harm results from people not living up to either sort of moral standard than from people having suboptimal moral principles (whatever that actually means).
It is therefore better for you on the whole -- and, symmetrically, better for everyone -- for people in general to choose to flip the switch and kill one person rather than five.
As a math problem, sure - but lives do not all have equivalent value. I view this as a moral problem, not one of statistics or utilitarianism.
I would prefer to live in a world where people tend to do what produces most net benefit rather than one where people tend to safeguard their moral purity, so I remain undismayed.
In most situations it is hard (or even impossible) to calculate the benefits or costs of different actions. This is why people argue so strongly on both sides of things like the abortion debate. Unless you treat people like countable objects, it is hard to establish whether killing one person is better than letting five others perish.
With this sort of thinking, of course, there are no definite answers - so I take an objectivist standpoint that people should use their personal moral codes and self-interest to make decisions - rather than "running the numbers."
people should use their personal moral codes and self-interest to make decisions - rather than "running the numbers."
You don't consider the number of people involved to be significant? What if instead of saving five people it was some other number, 5 < n < 1,000,000,000? Do the numbers still not matter?
The numbers matter. The issue here is this: everybody isn't equal; we're all unique. Some people work harder, are more intelligent, and contribute more to society. Some people are of higher value, depending on your metrics. So a murderer (who takes from society) has less worth than a scientist whose research creates new technologies that aid society. Everyone should have equal 'rights', but everyone is not equal.
The problem here is that you cannot know the value of this one individual vs. the five. People aren't a set of bricks, all manufactured to the same size and from the same material. Saving five people vs. one is a guess -- you can be wrong and save a group of people who contribute nothing while killing the one man who does. But the numbers still apply, because one man cannot be worth more than thousands, for example -- and I think part of this experiment was to find out what your threshold for this was.
Relying on 'personal moral codes' without the help of math is a good way to form completely incoherent judgements. I wonder what sort of personal moral code you have that favors the death of five over the death of one. Saying it's a 'moral' problem rather than a 'math' problem doesn't explain anything.
"Child in the bunker" question looked quite different (from all others), and not as a variant of trolley problem. Actually, it was the only problem where I thought it's not just permittable but even required to kill it in order to save the others.
I learned about the trolley dilemma a few days ago from the "Justice with Michael Sandel" lectures, Harvard's OCW-like program for publishing their course materials online.
I took Sandel's course a few years ago at Harvard and most of these scenarios were very familiar. Philosophy/psych undergrads conduct surveys on these scenarios constantly.
The thing that stuck out most to me about this project was that some of the questions made a point of including superfluous numerical details (e.g. "thirty-three infantry divisions" or "Forty-second Street"). Maybe they're seeing whether priming a subject with these numerical details modifies how we weigh moral alternatives.
These tests just annoy the crap out of me. I put up with the first few questions because I wanted to see if they would do something novel. But no, they had all the standard problems of morality tests:
1) False dichotomy. (There are exactly two actions you can take, no more.)
2) Unrealistic foreknowledge. (These are exactly what the results of these actions will be.)
3) Unrealistic scenarios. (How many of us are ever going to be standing with our foot stuck in the tracks of the sideline right near the switch when five other people are...blah blah blah.)
This kind of test is exactly what gives philosophers a bad reputation. They are studying important issues; could they /please/ take the time to build a test that respects the intelligence of the testee?
These are simplifying assumptions; it's rather like a scientific experiment that aims to control for other factors, in order to distill the experiment down to its essential core.
Andrew has been kayaking and is six miles from the nearest town. He hears on the WBZ4 radio station that the damn has broken upstream and that the river is about to flood.
Spelling and grammar mistakes reduce your credibility.
Interesting, my radio station was not WBZ4, it was WBZ60. This corresponds well with the observation in other comments that there was some numerical priming going on that might have been the real point of the study and not the moral part of it.
I have no patience for these contrived dilemmas that few people will ever encounter. When humanity is done making war with one another then maybe we could ask these questions. But until then it is like treating someone's skin rash and ignoring that they are in cardiac arrest.
Why do you think there are wars? It has to do with the moral standing and beliefs of the leaders who wage those wars. These types of tests are actually very appropriate for "treating" the problem of war. One life for one more important one, or one life for many, or a few lives for many: these are the foundations of war. Are the study and its results a fix for these things? No. But they can give a much greater understanding of the decisions people are willing to make, allowing for better judgement of reactions to situations.