I learned about these experiments at university some 30 years ago, and found this to be a very powerful lesson. Of course these experiments were unethical, but I have seen nothing since that disproves the outcome, and I firmly believe that a large percentage of people will happily kill other people or commit heinous crimes when some authority tells them to and they can convince themselves it is not their responsibility.
The thing I took from it was that I am always responsible for anything that results from my actions. So in my career I have always tried to consider what work I do, what customers I work for, and what might come from the things I do. In a business environment that is often not easy; most people work from a mindset that anything profitable is good as long as it's not illegal.
Beautifully put, and exactly what I came away with as well. And yes, it often feels like swimming against the current to actually care about the content of your work. So many parts of our work culture expect us to be completely indifferent to what we are actually doing. I've always admired this experiment for being so elegant and powerful.
My major problem with Milgram is about the "learner". As far as the participants of the experiment are told, the learner is, like them, a volunteer. That means the learner has just as much autonomy to end the experiment as the teacher.
So why isn't the learner calling an end to the experiment? Why should this be on the teacher to call it quits? Either the learner is making a fuss but is happy to continue, and that's just how they react to a bit of pain (think about how people can squeal getting into a cold bath voluntarily). Alternatively, they are being actively coerced into continuing, in which case what exactly is the correct response for the teacher?
Maybe try and jump the supervisor and make a bolt for the door? You are clearly trapped by some kind of sadistic serial killer and you are unlikely to make it out alive.
> So why isn't the learner calling an end to the experiment
The learner does repeatedly call for an end to the experiment. For example, here is the script for the learner when the shock supposedly reaches 150 volts: "Ugh! Experimenter! That’s all. Get me out of here. I told you I had heart trouble. My heart’s starting to bother me now. Get me out of here, please. My heart’s starting to bother me. I refuse to go on. Let me out."
See the start of Chapter 7 of Obedience to Authority for more details.
> Maybe try and jump the supervisor and make a bolt for the door?
In the baseline "voice feedback" condition, the learner is strapped into a chair. He can't bolt. I think that the same is true for all other conditions, too.
I'm saying that the correct response for the teacher is to try and overpower the researchers and escape.
If the learner is strapped to a chair and being totally ignored then why should the "teacher" assume they themselves have the power to end this experiment?
I see. Milgram's answer is: for the "teacher," there is no need to overpower the researcher (Milgram called him the "experimenter") in order to end the experiment. The teacher can just walk away. (And then, if he sees fit, he can call the police or otherwise report the experimenter.)
Milgram was at pains to construct a situation in which there wasn't even an implied threat from the experimenter. When teachers protest that they don't want to continue, the harshest thing that the experimenter says is "You have no choice; you must go on." Milgram wanted to see how obedient people might be when there wasn't even an implied threat of force. (And he found that 65% of subjects were willing to deliver the maximum shock under these conditions.)
There is a clear implied threat: the threat is that you will be removed and someone else will be brought in to finish the job. The real calculus is at what point you will use force to stop the experimenter from inflicting further harm, because the assumption is that your actions cannot stop the experimenter from inflicting harm without harming the experimenter yourself.
> There is a clear implied threat: the threat is that you [the "teacher" in the experiment] will be removed and someone else will be brought in to finish the job.
The claim made above, by others, seems to be that the experiment involved an implicit threat against the "teacher" that might explain why he followed instructions to deliver electric shocks to the "learner." How is being removed from the experiment an "implied threat" against the "teacher" that would get him to follow instructions? And if it isn't a threat of that sort, how is it relevant to understanding why "teachers" followed instructions?
> the assumption [that is, an implicit premise of the experiment] is that your actions cannot stop the experimenter from inflicting harm without yourself harming the experimenter
That doesn't sound right. The teacher can put a temporary stop to the experiment by refusing to obey -- that is, by refusing to shock the learner. He can put a permanent stop to it by, say, calling the police.
Interesting take. I have a working thesis that if a person believes something will happen "inevitably" then that person will have a greater disposition to do horrific things in the name of "inevitability". Zizek has pointed out that the belief in the inevitability of global communism allowed hard core communists to commit atrocities. Communism worked through them, they believed. Capitalists do something similar when they help destroy the environment or help exploit workers. They believe that if they don't do it then someone else will.
Here's my problem with it: when the 'Experimenter' appealed to the necessity of science, compliance rates were highest. When the Experimenter appealed to pure authority and gave direct commands, compliance rates were low.
The experiments were performed in a proud university town. What Milgram actually demonstrated is that those people trust scientists and believe in the value of science. In other words, they obeyed commands when those commands seemed to align with their preexisting worldview and ideology.
The inspiration for these experiments was Adolf Eichmann's plea that he was merely following orders. Milgram ostensibly set out to disprove that, but is said to have inadvertently proved Eichmann's plea plausible: if Milgram showed that people do follow orders, then maybe Eichmann's innocence plea was plausible. But we know Eichmann was not simply following orders. There is ample evidence that he was enthusiastic about the Holocaust and went above and beyond what was required of him. Eichmann may have been following orders, but they were orders he was ideologically aligned with. Just like the Milgram subjects were ideologically aligned with the value of science.
Many variations on the Milgram experiment were conducted. When the Experimenter didn't dress like a scientist, compliance was lower. I didn't hear this until years after I was told about the experiments in my undergrad psychology class. When I learned this, I felt deceived. It turns the mainstream conclusion on its head.
If we were to run a modern experiment more applicable to Eichmann's 'following orders' plea, the Experimenter would be dressed like a cop and the Learner would be a black man. How many people (except the overtly racist) would follow orders in this scenario, when the orders were so flagrantly in violation of their personal values? Virtually none, I assert. That's the obvious, water-is-wet answer, while Milgram's answer is the shockingly unintuitive, headline-grabbing conclusion. Milgram's conclusion asks us to believe there is an Adolf Eichmann lurking inside all of us, waiting for heinous orders to execute. That just doesn't jibe with what we actually know about Nazis.
"when the 'Experimenter' appealed to the necessity of science, compliance rates where highest. When the Experimenter appealed to pure authority and gave direct commands, compliance rates were low."
This is maybe the most important detail to me which seems to be overlooked.
Often people fail to recognize that there is such a thing as legitimacy of authority; it's a real thing. None of us is an expert in every field, and it's hard to make heads or tails of the nuanced issues in all of these arenas. We often have to trust the legitimate authority. Which makes it all that much harder when a) doing some action appeals to a darker aspect of human nature, b) the authorities are corrupt, or c) the authority is misled by some other failure, such as bad information.
The answer should be consent and the recognition that everyone is the highest authority over their own body. I may choose to defer to an expert on decisions about myself, but no amount of expertise or potential benefit to others will convince me to do something against another person without their consent. There needs to be a deontological veto over any consequentialist analysis when the proposed action is coercive.
> "when the 'Experimenter' appealed to the necessity of science, compliance rates where highest. When the Experimenter appealed to pure authority and gave direct commands, compliance rates were low."
I don't recall any experimental variations like this in Milgram's Obedience to Authority. That is, I don't recall an "appeal to the necessity of science" variation or a "direct command" variation. Do you have a page number or some other citation?
If you're willing to buy the narrative that the experiment is "necessary for science" without questioning it, in the moment, then it's not an independent issue. They are obediently believing the authority figure without questioning the morality.
No, if we were to do the experiment in modern terms, you would place an actor in the chair and have a professor instruct their student that they are experimenting with correcting racist behavior out of the subject in a humane way. Would a liberal student reject this sort of experiment in 2020?
The Nazis were simply Germans. They were just like you and me. Many of their grandkids and great grandkids are still around. They're not any different.
Humans are dangerous when we start thinking in groups instead of as individuals. When we see ourselves as part of one group but see another group as particularly heinous, we become quite tribal and primitive.
Nazis like Adolf Eichmann were simply Germans... who actually believed in what they were doing. Those who didn't but followed orders anyway were probably doing so out of fear, not simply because humans are order-following rubes.
> No, if we were to do the experiment in modern terms, you would place an actor in the chair and have a professor instruct their student that they are experimenting with correcting racist behavior out of the subject in a humane way. Would a liberal student reject this sort of experiment in 2020?
If you think a liberal student of 2020 would follow orders instructing them to torture a racist, but would reject an order that wasn't accompanied by a justification tailored to their ideology, then it sounds like you agree with my take. Given a free choice (e.g. no threat to kill their family if they refuse) people will follow orders if you convince them of the necessity of those orders by appealing to their ideologies and biases, but will refuse orders if the explanations for those orders are incompatible with their ideologies. But this is the 'water is wet' conclusion, not the attention grabbing conclusion Milgram sold to the public.
It wasn't simply out of fear. Sure, that may have motivated some people as the Nazis grew in power, but the phenomenon of humans viewing their outgroup as less than human is not unique to 1940s Germany. It has played out over and over in human history. Take the Aztecs, for example, who built pyramids for the human sacrifice of lower tribal members. Or the Romans and their colosseums, where they'd toss captured slaves in to fight to the death. It's really seen throughout human history, and people do it with delight, even, if they are convinced that the others deserve it or that the outgroup is bad enough to warrant it.
Yes, humans do follow their biases and those tendencies are exploitable by sociopaths in positions of authority. That's the scary bit, to me, and that's what the experiment shows.
> Sure, that may have motivated some people as the Nazis grew in power, but the phenomenon of humans viewing their outgroup as less than human is not unique to 1940s Germany.
It is not the case that only a few people in Germany were motivated by fear. The Nazi oppression apparatus was large and violent long before the war. Many people were afraid, including those who also somewhat agreed with the ideology.
Once Hitler got power, the first concentration camps were opened for the political opposition. And they were horrible places; the whole point was to destroy people and intimidate them. There was also a large internal security apparatus in place to catch people who said something anti-Nazi in a random social setting: people reporting what others said, and structures to investigate those reports. If someone was bullying a Jew in the street and you tried to help, you would be targeted with literal violence too.
The level of control and pressure on non-Jewish Germans was high. You could not have social or sports clubs unaffiliated with the party, and there was little safe space where you could say what you thought. The Nazi project took a lot of internal violence and intimidation to pull off.
They also did their best to indoctrinate kids in schools with their ideology. Raising a generation of soldiers was the focus of the school system. Nazi ideology also taught that empathy was a feminine weakness and ruthlessness a virtue. All of that combines into a value system where, even if you are not being watched, you still don't want to show empathy.
---------------------------
Afaik, keeping up slavery takes a considerable level of violence too. It did not happen outside of a culture where kids were taught that what was happening to slaves was actually okay. Roman thinkers spent effort explaining why slavery was a necessary and good thing, why slaves were naturally slaves.
Massive violent events don't just happen out of nowhere, with originally pure, unideological people suddenly confronted with an innocent choice.
Seems to me that the conclusions drawn from the Milgram experiment are usually overblown. I would guess that 'teachers' just assume the safety of the procedure and the consent of the 'learners' based on context. So it is not really about obedience vs. moral imperatives, but more about trust in authority vs. doubts based on personal observation.
But that is precisely the point. Trust in authority vs direct observation, and the shifting of the Overton window to create reduced trust in direct observations vs authoritative direction.
It has been repeated under a bunch of different circumstances, including with real dogs being shocked. Sidestepping conclusions because of hypothetical escape hatches in the original methodology isn't a great idea.
Regardless of the scientific value of the experiment, the original documentary might be interesting, because of the experiment's influence apparent in textbooks and pop culture: https://www.youtube.com/watch?v=rdrKCilEhC0
If Milgram's experimental method were discredited or debunked, should we conclude that, in fact, the findings were that the average person will revolt against an unjust authority, and therefore we should interpret a lack of pervasive popular revolt as a sign of justness and legitimacy of the authorities?
"Nobody else seems to have a problem with it, it must be just you. Help is available for people struggling with mental health issues and change is sometimes uncomfortable, especially for people accustomed to privilege, maybe it's time for some self-care, citizen..."
> If Milgram's experimental method were discredited or debunked, should we conclude that
If his methods are discredited, you can't really conclude anything from it. The fact that he didn't prove one thing doesn't by default prove the opposite, it just means he didn't prove the original thing.
> If Milgram's experimental method were discredited or debunked, should we conclude that, in fact, the findings were that the average person will revolt against an unjust authority, and therefore we should interpret a lack of pervasive popular revolt as a sign of justness and legitimacy of the authorities?
No. Milgram's experiment being discredited means that we should revert to "we don't know what the average person does".
If you want to conclude that "the average person will revolt against an unjust authority," you will need multiple valid experiments to confirm that first. And you should not forget that people's behavior is strongly influenced by the cultures they are in, so you will also need to account for that.
> therefore we should interpret a lack of pervasive popular revolt as a sign of justness and legitimacy of the authorities
Slavery used to be considered just and legitimate. Slavery is no longer considered just and legitimate. "Justness" and "legitimacy" are not scientific terms.
“should we conclude that, in fact, the findings were that the average person will revolt against an unjust authority,”
History has repeatedly shown that a significant number of people will actively play along and almost nobody will revolt. Otherwise the Nazi war crimes and Holocaust, My Lai, the purges and gulags under Stalin or the Khmer Rouge wouldn’t have happened. All these needed significant participation from a lot of average people. History also shows that no culture (as in religion or a certain part of the world) is immune from that.
> All these needed significant participation from a lot of average people.
Yes, but in all those cases, the average people were people who had already gone through a lot of indoctrination and propaganda. And the average person was also in considerable danger if he or she defected. The average person both believed a lot of the ideology and also feared for himself/herself and family. The leaders of those movements were powerful players who actively believed their ideologies.
The purges and gulags under Stalin were organized very much from the top. People did not just impulsively go along with them without planning. Instead, the organizers made sure you had no realistic way to stop anything and that you believed the victims were the bad ones in the first place.
That being said, the Holocaust itself happened mostly in occupied territories where the German army had pretty much complete control over the place. The average person was not much of a player; the military basically organized and backed it.
0.) Just like the people who upheld slavery in America or colonized other peoples, they were people of their time. Many people did believe in what they were doing when they sent people to their deaths. They believed they had done the right thing.
1.) The purges specifically had quotas for areas. If you refused to cooperate, you and your family were purged. The purges gave participants little good choice; you literally had to participate or die.
2.) The wish to stay alive was a frequent motivating factor. I will quote myself: the average people were people who had already gone through a lot of indoctrination and propaganda. And the average person was also in considerable danger if he or she defected. The average person both believed a lot of the ideology and also feared for himself/herself and family. The leaders of those movements were powerful players who actively believed their ideologies.
3.) Your mythical average person outside of system and culture does not exist. "The average person" was living inside a totalitarian system where opposing the system had meant severe retaliation for years. It also meant that if you opposed the system, you were isolated and couldn't organize opposition.
The Nazis spent years trying to teach people, average Germans, that empathy toward the enemy is weakness; that manly men fight, are ruthless, and kill enemies without empathy. The cruelty you see in WWII is also the result of a preexisting belief among those Germans that what they were doing was right.
The people of that time did not share your values.
> She found that whilst Milgram's originally published article mentioned some forty participants, of which some twenty-six proved to be obedient, some seven hundred naive participants were actually "tested" in various experimental scenarios with varying results as to their "obedience".
Without any mention of the issues regarding the inability of anyone to reproduce the Milgram experiments and get the same results with any consistency, the whole example of "Obedience to Authority" as it relates to this sanctioned abuse of power should be thrown out.
There isn't much left to learn from the Milgram saga, other than that the consumer public absolutely loves pop-culture science.
You are wrong about the inability to reproduce. At least one recent replication was done in Poland (2015) by a respected academic team, including Professor Dolinski. The main result was consistent with what Milgram observed: most people did what they were told.
My understanding is that there have also been semi-recent replications done in France and Iran, and a partial replication conducted at Santa Clara University. All found the same basic pattern as the original study.
Yes. There have been dozens of replication efforts -- none of them exact replications. The closest and most careful one that I know about is the one done by Jerry Burger at Santa Clara University: "Replicating Milgram: Would People Still Obey Today?" It's at https://www.apa.org/pubs/journals/releases/amp-64-1-1.pdf.
I wonder about the impact of how widespread knowledge of the original Milgram experiments is. If subjects figure that the pain is fake because they've heard about this experiment before, they might be more willing to administer the shocks.
> I've heard that these kind of experiments are impossible to do today, due to "ethics" rules in science.
If you're referring to the fact that the experimenters had to lie to the subjects and deceive them, not just in one individual statement but over the whole experimental scenario, the reason that's not a good idea is actually practical, not ethical: constructing a false scenario and properly controlling what the subjects believe in a false scenario is very hard.
For example, consider this post by The Last Psychiatrist:
"He didn't stop because of moral courage; he stopped because he thought he was being played."
In other words, he figured out that it was a false scenario and his refusal was based on that, not on any moral objection to shocking people. But he got counted as a "refusal". How many other people who didn't refuse had some intuitive sense that the whole thing was a setup, thereby dampening whatever moral impulses they had? There's no way of knowing. The fact that it was a setup changes the whole experiment, as compared to one where all of the information being given to the subjects is truthful.
That's right. Universities' "institutional review boards" (IRBs) don't permit scholars to tell subjects that they are delivering very heavy shocks. Jerry Burger of Santa Clara University took this point up in his replication article (which was necessarily only a partial replication).