> Sure, the people in your tribe may do bad things, but only in response to extraordinary circumstances.
Attribution Error gets doubly dangerous when combined with another cognitive bias, the Out-group Homogeneity Effect[0], which leads to the Group Attribution Error.[1]
This latter error leads people into thinking the characteristics of an individual group member are reflective of the group as a whole, so "When someone in my tribe does something bad, they're not really in my tribe (No True Scotsman / it's a false flag)" whereas "When someone in the other tribe does something bad, that proves that everyone in that tribe deserves to be punished".
I think the author is not exaggerating by suggesting that solving Attribution Error could dramatically change the outcome of human civilisation, but just identifying it as a problem isn't even half the battle. Perhaps the best we can do for now is develop some sort of polite shorthand for saying "You might be Attribution Erroring", which can be used for calling out bad news reporting, bad social media posts, and unhelpful framing of issues when people are having conversations. Then we need to train our brains to always be looking out for this problem, like a bad "code smell" when doing a code review.
I'm pretty sure the combination of those two is 90% of the "cyclists don't obey the law" meme. When a cyclist breaks the law they make the whole out-group look bad, but a driver who breaks the law is just "one bad driver."
From my time as a cyclist I can tell you that all car drivers constantly disobey the law.
In truth, all groups of road users consistently disobey traffic laws; the laws worth breaking just differ from vehicle to vehicle. It's easiest to condemn those who break laws that you aren't breaking, even if you yourself are no more innocent.
Possibly, but I think someone who didn't pass a driving test is going to know driving law less well than someone who did, so there'll probably be more cyclists violating the law.
Also, a cyclist violating the law is much less dangerous than a driver doing so, the latter has over a hundred times the kinetic energy. So a cyclist might feel that those laws shouldn't apply to them.
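For what it's worth, the "over a hundred times" figure holds up on a back-of-the-envelope check. Here's a minimal sketch, assuming a ~1500 kg car at 50 km/h versus a ~90 kg rider-plus-bike at 20 km/h (illustrative numbers, not measurements):

    # KE = 1/2 * m * v^2, with speed converted from km/h to m/s.
    def kinetic_energy_joules(mass_kg, speed_kmh):
        v = speed_kmh / 3.6  # km/h -> m/s
        return 0.5 * mass_kg * v ** 2

    car = kinetic_energy_joules(1500, 50)  # assumed: typical car at urban speed
    bike = kinetic_energy_joules(90, 20)   # assumed: rider plus bike, typical speed

    print(f"car:   {car / 1000:.1f} kJ")   # ~144.7 kJ
    print(f"bike:  {bike / 1000:.1f} kJ")  # ~1.4 kJ
    print(f"ratio: {car / bike:.0f}x")     # ~104x

With these assumptions the car carries roughly a hundred times the kinetic energy, and the gap only widens at higher speeds, since energy grows with the square of velocity.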
As a cyclist, I constantly break the law, and I’m aware that I do so.
The thing is, most traffic laws are designed to keep drivers safe. They’re also designed for cars. Bicycles aren’t cars, and car drivers tend to be impatient and don’t pay attention, so we have to adjust, which sometimes means, for example, running the red light a couple of seconds before it turns green. Or turning left on a red light, since I’m not crossing any lanes. Or treating a stop sign as a “slow” sign.
(This is a left-side driving country.)
All of that is necessary to keep my life expectancy up. As for the pedestrians, well, unlike a car I’m perfectly capable of spotting their existence. Nor is a slow-moving bike particularly dangerous.
Don't make the mistake of believing that others are thinking wrongly, that their conclusions rely on errors. Such insights are valuable for correcting your own thinking, and you can share the insights you have gained.
Aside from attribution errors, it could just be a political play to push responsibility of the behavior of certain individuals on groups. That is a common and deliberate political strategy that happens without anyone making an error.
Eh, someone is making an error if such political play is successful, it just may not be the speaker. Populist rhetoric is all about telling people what they already want to hear, that is, exploiting and amplifying existing cognitive biases in the target audience.
Here’s the definition of populism: “a political approach that strives to appeal to ordinary people who feel that their concerns are disregarded by established elite groups.”
Does that not describe both of our political parties?
> Perhaps the best we can do for now is develop some sort of *polite* shorthand for saying "You might be Attribution Erroring", which can be used for calling out bad news reporting, bad social media posts, and unhelpful framing of issues when people are having conversations.
FWIW, we tried this against (what is now known as) political correctness, and now I'm starting to see people actively defend political-correctness-as-such. Not saying it can't work, but it might be important to make sure it's an impolite shorthand. Other than that, though, this sounds like a good idea.
FWIW, I think there's a good chance banning the fundamental attribution error would have no good effects.
I think people mostly do it without realizing it's what they're doing. I suspect that when we notice we're doing it, we usually stop it.
And the human tendency to defensiveness probably means that if someone else tells us "you're doing this," we'll come up with all sorts of plausible reasons why we're not.
Keep in mind, too, that while fundamental attribution error is not a valid reasoning form and thus not legitimate for use in proofs, it is not guaranteed to yield wrong results. Even if you could ban it effectively, it might not actually change much, as there's always more than one way to skin a cat. People are masters of finding ways to believe what we want to believe.
I misspoke above: the conversation was much broader, encompassing not just fundamental attribution errors but ~all errors and imperfections.
> FWIW, I think there's a good chance banning the fundamental attribution error would have no good effects.
It's possible, but it's also possible that it could have good effects - or sometimes good effects, sometimes none, like most any of the other guidelines.
> And the human tendency to defensiveness probably means that if someone else tells us "you're doing this," we'll come up with all sorts of plausible reasons why we're not.
No doubt about it - once again, you could say the same thing about the other guidelines. My thinking is: the existing guidelines can be used as justification for punishment when broken, and they set ~aspirations for behavior, so is it possible that the list isn't perfect? Might certain other guidelines (or functionality) make HN even better? Dang's claim is that it is not possible, but this is a prediction of the future, and the future is unknown (or so people say, but only sometimes).
> but then @dang informed me that improvements to human behavior in the future are impossible because people will continue to behave the way they do indefinitely, regardless of rules.
This is surely an argument for dropping the guidelines altogether then?
Anyway, I agree that attribution error should be unacceptable; the real issue is that it's very hard to point out when someone is making this particular mistake vs. other kinds of mistakes.
Another funny/ironic part... one of my suggestions was:
> Some sort of "cultural mode" for threads of certain kinds, which might include things like "epistemic strictness". So for example, for culture war (politics) threads, it might be interesting to set Epistemic Strictness = High, meaning certain common behaviors would be forbidden, such as asserting imaginary beliefs as facts (mind reading, future reading, etc).
So when dang wholesale dismisses the idea that improvements can be made (essentially, a claim that HN as it is today is 100% optimal), if this claim was made in a thread with "Epistemic Strictness = High", his comment would have violated: Mind Reading, Future Reading, and Omniscience (which I forgot to include in my comment). I simply cannot see how (well, actually, I know exactly how it is done) one could believe that literally disallowing certain types of comments would have zero effect, especially considering (as you say) that we already have guidelines.
Generally speaking, I find it utterly bizarre how resistant people are to the idea that we distinguish between reality and (certain types of) fantasy in our discussions.
> So when dang wholesale dismisses the idea that improvements can be made (essentially, a claim that HN as it is today is 100% optimal)
What an incredibly weird exercise in mind-reading you're doing there. Really, an extraordinary mis-extrapolation from his actual statements in that thread. The irony, it burns.
> You don't think HN users are capable of realizing when they are engaged in mind/future reading, even if it was an explicit rule? You surely know the userbase better than me, but this seems unlikely.
> If you want nothing to change, so be it, but the idea that functionality changes cannot have an effect is very frustrating.
dang:
> Correct. Not only HN users but humans in general—I just don't think we work that way. The way we work is that we have our preference on X and then we propagate that preference through every edge in our mental graph of associations from X.
> One can imagine a user saying something like "Although I personally am against nuclear power, since this is the 'optimism' thread it isn't the best place for my counterarguments so I'll post them in the 'pessimism' thread instead". However, such people are somewhere between vanishingly rare and unicorns. Statistically they may as well not exist.
I can see how you might (correctly) take exception with a strict interpretation of "wholesale" in my comment, but is it not true that he is asserting at the very least that people cannot stop engaging in mind/future reading, and that functionality changes cannot have an effect (as well as other things if you read the discussion)?
And as for mind reading: how is it that dang has obtained accurate knowledge about the maximum potential of many thousands of people, in the future, under conditions that have not been tried?
Here's a question that I've been wondering about for a while:
What happens if your fundamental attributions grow up from proximate personal narratives into a social philosophy?
Perhaps one where you understand yourself as some worthy ingroup whose shortcomings are primarily functions of imposed circumstance, with the shortcomings of outsiders primarily being functions of their character failings?
(Yes, you can see that in your political opposition! Congratulations. I won't say it's equally distributed across political spectra, since I think it's plausible that there are camps which are more widely concerned about circumstance over individual character and addressing systematic issues that contribute to circumstance. But if you can't see some form of this in the political affiliations you identify with, then you may not have truly grasped the fundamental attribution error dynamic yet.)
> What happens if your fundamental attributions grow up from proximate personal narratives into a social philosophy?
> Perhaps one where you understand yourself as some worthy ingroup whose shortcomings are primarily functions of imposed circumstance, with the shortcomings of outsiders primarily being functions of their character failings?
If other people in the ingroup you identify with share this identity, then, congratulations, you are now part of a human tribe.
Reading about this concept for the first time truly expanded my ability to understand the world. Another worthy addition to "Things That Should Actually Be Taught in Schools" type lists.
However I see a missing element in the usual commentary. The error (or bias) goes both ways, yet only one direction is emphasised. To quote the article: "most people don’t take the power of circumstance seriously enough" (with regard to understanding other people). This fits well with the popular notions that "We are all the same" and "Everyone can do anything if just given the chance".
Yet the flip side is also valid: not taking into account the power of disposition when looking at oneself (or one's tribe). For example, the reason I just cut in line is because I am, at least somewhat, generally a jerk. This obviously is a harder pill to swallow and perhaps more fatalistic, but perhaps just as important to increasing world (and inner) peace: I should give Others a break, not only because they might be faced with circumstances that I perhaps am not, but equally because I might actually be just as much of an a*hole as I perceive them to be.
“reason I just cut in line is because I am, at least somewhat, generally a jerk.”
The research doesn’t support this though. Behavior seems to be mostly a function of the social context, as Milgram learned. We would love to be snowflakes but we are all pretty much the same.
I think it's a credit to Lee Ross and the idea that I remember the concept but not the name, or the originator of the idea.
I think programming as a discipline is chock full of examples of attribution error. When I think back to the less experienced version of myself, or even the current version of myself on a bad day/intellectually lazy moment, or just reading programming forums for more than a few minutes, I can easily come up with examples.
And I can find plenty of examples where I commit this exact error: many, many seemingly stupid technical decisions that can be explained by the context in which they were made, and more importantly, if I were in the exact situation that spawned the decision, it's not clear that I would have chosen differently either. It's only with the benefit of hindsight that I can disagree with the decision, and the really hard, important, career-spanning question is "how would I have arrived at the right decision in the same context?"
The only remotely good answer I've found to that question is reading extensively, especially outside of my comfort zone, which of course I don't do enough of.
The fundamental attribution error is the number 1 most important lesson I learned from Psych classes growing up. Being aware of it is the easiest way to be less of an asshole to people around you. I really can't overstate the importance.
Some of the locus of control is outside of ourselves in origin. Those voices in our head keep chattering for the rest of our lives.
We can choose how we respond to them to a degree, but we can't turn them off.
To what degree can we choose? That's an interesting question, since the machine that's doing the choosing is composed of 50% of one parent's DNA and 50% of the other's.
But we feel like we make choices all the time, don't we?
Certainly, we consult ourselves, and ask ourselves what we want to do. Sometimes the answer comes to us, and that's what we choose.
But where does that answer that comes to us come from? And how does it get into our consciousness?
Do you want chocolate, or vanilla?
Did an answer come to you then?
Where did it come from? And why was it what it was, and not something else?
I feel like referring to this as 'error' is a mistake. It's not an error, it's a bias. As stated in the article, it can cut both ways, depending on your valence towards the person you're evaluating.
Sometimes an action is a consequence of a person's character, sometimes it's situational, and sometimes it's a mixture of both. What we should do is recognize the bias and try to weaken its grip a bit. We live in a comparatively low-threat environment relative to what that prior is tuned for, and so it's reasonable to tone it down.
But that isn't the same as treating character attribution as uniformly wrong, or uniformly right.
Well written. I didn't know this by name, but it's something I've noticed in my life - often the circumstances of an action, or even the circumstances of an individual, explain an event far better than the individual themselves.
There was a recent post that went into a modification of Hanlon's Razor: "Never attribute to stupidity what can be explained by opportunity cost". Another example, I think, of the same.
I can't help but draw a connection with another pop-science idea: growth mindset.
(I don't mean to denigrate either idea by the use of that term, it's just that you need a term to describe scientific ideas that have a following outside of their expert group.)
If you think someone has a fundamental issue, you'll never try to change it. If someone is just an asshole, you can avoid them, but not change them. Similarly if you think you're "bad at math" and that this is some kind of unchangeable quality, there's no point in trying to learn.
With both ideas, the key is there's a belief that you can alter outcomes by changing the situation.
The "fundamental attribution error" is an example of projection, a psychological phenomenon generated by a neural substrate, that humans do a lot.
Indeed, what is remarkable about human "consciousness" is firstly that so many humans spend so much of their time projecting reality instead of sensing and decoding it, and secondly, that they are unaware they do this.
We see things not as they are, but as our model predicts.
> Indeed, what is remarkable about human "consciousness" is firstly that so many humans spend so much of their time projecting reality instead of sensing and decoding it, and secondly, that they are unaware they do this. We see things not as they are, but as our model predicts.
Even more remarkable is that so many people find specific instances of this behavior to be a very big deal, often dangerous and believe "something should be done!!!"...but hardly anyone considers the ~abstract phenomenon (all instances and variations of it that occur within society) to be a big deal - it's rare to find anyone in the mainstream who even finds it interesting.
Attribution error comes up all the time on Hacker News, in lots of different ways; it's probably the root cause of the modal Dan Gackle moderation comment about "notice-dislike bias".
"It is folly to increase your wisdom at the expense of your authority"
- Sir Humphrey Appleby
The problem with so many of these cognitive improvements we can make for ourselves is that they are often, cognitively speaking, weaknesses. In other words, many kinds of wisdom actually reduce our effectiveness. I would always choose knowledge over power, but for many people the important thing is to think in the way that rewards them, not in a way that increases their insight.
Is it the circumstances or the person's character? Does it even matter?
It only matters because if it's the circumstances, the person may improve. If it's because they are flawed, they won't improve. So everyone thinks they'll improve and everyone else will continue in accordance with their character.
What's interesting about evidence law is that you can't mention prior bad acts to a jury to say someone did something criminal in the past and acted in conformance with their character in the current instance. For example, it's not permitted to say the defendant in a criminal trial robbed banks in the past so he was the bank robber in the current criminal trial.
We ignore prior convictions when trying to determine whether the person actually committed the crime they're currently accused of. But if they're convicted, prior convictions weigh heavily in sentencing decisions - which is the part where "whether they will improve" is relevant.
It's good to remember that when you are driving, for instance, you might pass thousands of cars.
If someone does something bad, that's a very small part of humanity.
And if you yourself do something bad one time in a thousand, then to someone else, that person is you.
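To put rough numbers on that (assumed figures, just for illustration): if each driver does something bad one time in a thousand, and you pass a couple of thousand cars, you should expect to see a few incidents on every long drive, even though every individual driver is 99.9% well behaved:

    # Expected incidents are n * p; the chance of seeing at least one
    # is 1 - (1 - p)^n, treating each pass as independent.
    p = 1 / 1000  # assumed: one bad act per thousand interactions
    n = 2000      # assumed: cars passed on a long drive

    print(f"expected incidents: {n * p:.1f}")             # 2.0
    print(f"P(at least one):    {1 - (1 - p) ** n:.0%}")  # ~86%

So seeing "bad drivers" on nearly every trip is exactly what you'd expect from a population of overwhelmingly good ones.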
It leads to design. Who hasn't vandalised something? If hundreds of people walk past something drunk every night, build in redundancies. It doesn't mean people are bad: if a small part of humanity, in a small part of their lives, can screw it up, then it's a design issue.
Personally I hate nothing more than ambiguous places to line up; it creates issues that don't exist.
> These exceptions are the reason I say there is no single “fundamental” attribution error
I think the "fundamental" reports to the "attribution", rather than to the "attribution error": behavior is erroneously explained by fundamental attributions to the person, i.e. "he's a jerk". In this sense there is no debate of having a "fundamental" attribution error.
Literally next to my laptop right now is the book "Super Thinking" at a few pages past where this concept is introduced, which I just finished reading. (I had heard it before but never really had the definition burned in.)
I don't understand how this can be a "world-saving idea". I think it's another case of pop science taking things out of context and presenting them as much more than they really are.
The original paper tells us we have a general tendency to think it's disposition. It does NOT prove (and does not pretend to) that it's always the situation.
Getting to the bottom of the "true" reason somebody wronged you is not always possible. Causes are themselves effects of their own causes, and so on to infinity. The big bang might be the true cause of everything, but I would not conclude from that that you can't judge anybody as being a jerk. Going back to one of the examples in the article, "Clerics and criminals rarely face an identical or equivalent set of situational challenges". OK, but one could argue that disposition is a valid explanation/cause for why one is a criminal and the other a cleric in the first place.
Taking the huge amount of time and energy to understand the real cause could leave you vulnerable to being wronged again while you're stuck in decision paralysis. Biases can be a very useful tool for navigating life.
A very common example of self-excusing based in situation is the modern phrasing "I was just in an angry place." Rather than "I was angry" - it wasn't me or about my level of self control. It was just the place I was in.
Interestingly, the Fundamental Attribution Error isn't a human fundamental. It's mostly only practiced by WEIRD people, that is, those from western, educated, industrialized, rich, and democratic backgrounds; in other words, the sort of people at the universities psychologists work in who are the most convenient to recruit for psychology experiments.
[0] https://en.wikipedia.org/wiki/Out-group_homogeneity
[1] https://en.wikipedia.org/wiki/Group_attribution_error