Loss aversion is not supported by the evidence (scientificamerican.com)
248 points by onuralp on Aug 5, 2018 | 111 comments



This article is essentially a press release for the author's own paper: https://onlinelibrary.wiley.com/doi/abs/10.1002/jcpy.1047

Which itself is a part of a series of articles in JCP debating the issue: https://onlinelibrary.wiley.com/doi/abs/10.1002/jcpy.1054

The definitive statement made by this article's headline isn't really supported by the evidence presented in the papers. Rather, the state of affairs seems to be that "loss aversion" has been the victim of incessant overgeneralisation. It's a very simple hypothesis about human behaviour that plays nicely into a lot of interesting (and therefore publishable) narratives. This has led people to blindly accept the general hypothesis of loss aversion without enough critical investigation of its manifestation. The authors don't really refute "loss aversion" (i.e. they don't present an alternative theory to explain the papers that purport to demonstrate "loss aversion"), but rather they refute the pop-psychology belief that it's a general principle of human behaviour.


That's a great summation. It seems as though there's confusion as to what constitutes loss aversion. IIRC, the original paper by Kahneman, Knetsch, and Thaler [0] talked about losing something you had. Meanwhile, the posted argument talks about whether someone is more or less likely to buy something if the price goes up or down. These are such different situations! The first is losing something you have, the second is deciding whether you want to trade some money for a thing.

[0] https://www.aeaweb.org/articles?id=10.1257/jep.5.1.193


“The price will rise, so buy now” is the most tenuous loss aversion I’ve ever heard of.

“You have a $10 credit, it expires in 2 days” would test loss aversion.

That consumers behave rationally in the face of price rises is an interesting finding. But it’s a far cry from testing loss aversion.


"Loss aversion" as a cognitive bias is not just the desire to avert any loss.

If it really is a general cognitive bias, it will show up as a difference from expected statistics.

Imagine a held asset that has an even chance of going up or down. You'd expect to see about half of people sell it and half hold it. A cognitive bias would alter that ratio. If 75% of people sold it, and only 25% held it (despite even odds), you could say that there appears to be a bias at work.

A great example of cognitive bias at work in the real world is the Monty Hall 3 door riddle. Most people get this wrong even though the math is not hard.
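It's also easy to check by brute force. A quick simulation sketch in Python (the trial count is arbitrary):

  import random

  def monty_hall(trials=100_000):
      stay_wins = switch_wins = 0
      for _ in range(trials):
          car = random.randrange(3)   # door hiding the car
          pick = random.randrange(3)  # contestant's first choice
          # Host opens a door that is neither the pick nor the car.
          opened = next(d for d in range(3) if d not in (pick, car))
          # Switching means taking the one remaining unopened door.
          switched = next(d for d in range(3) if d not in (pick, opened))
          stay_wins += (pick == car)
          switch_wins += (switched == car)
      print(f"stay:   {stay_wins / trials:.3f}")    # ~0.333
      print(f"switch: {switch_wins / trials:.3f}")  # ~0.667

  monty_hall()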

But simply avoiding a predicted loss is not "loss aversion" as a cognitive bias. It's not even a bias at all; it's rational to avoid loss.



I was under the impression the endowment effect and loss aversion were synonyms.

Is there a meaningful difference when these are used as psychological terms of art (as opposed to pop psych just-so explanations)?


How well does that [the original] replicate? Is there non-self-reported data for it?


Are you reading the same article as me? As they mentioned:

> Loss aversion has been represented as a fundamental principle. Loss aversion is not understood as the idea that losses can or sometimes do loom larger than gains, but that losses inherently, perhaps inescapably, outweigh gains. For example, Kahneman, Knetsch, and Thaler (1990, p. 1326) describe loss aversion as "the generalization that losses are weighted substantially more than objectively commensurate gains." In a similar fashion, other researchers do not qualify the idea of loss aversion; Tversky and Kahneman (1986, p. S255) state that "the response to losses is more extreme than the response to gains;" and Kahneman and Tversky (1984, p. 342) state "the value function is … considerably steeper for losses than for gains."

The authors are refuting "loss aversion" as Kahneman et al. describe it.

> (i.e. they don't present an alternative theory to explain the papers that purport to demonstrate "loss aversion")

Why do they need to? The paper isn't about trying to explain when losses or gains are most impactful; the paper is about whether or not there's a clear tendency.


Big words don't count as evidence. The article brings absolutely no new information to the table. Heck, I think this article is clickbait.

The basic principle behind loss aversion is simple.

What's the primary motive behind an action: running away, or running towards? Prevention, or gain?

For instance: yesterday an article about American child care was on HN.

American parents act primarily to PREVENT injury, discomfort or death of their children. That's action motivated by loss aversion.

Japanese and Mayan parents still want safety for their children, but their kids' independence is a primary motivator for their actions. In other words, gain.

I think the author is confused about something. I want more money, and I don't want to lose what I have. The two feelings aren't mutually exclusive. However, at the point of decision I could be swayed more by greed or by fear.

If a site, seller or investment is shady, fear wins. I'll protect myself. If not, greed or gain could win in that instance.

I could speed down towards a party one moment, and a near miss could make me reconsider and slow down. Both modes occurred on the same journey. No amount of wordplay by some clickbaity author would change that.


To me, loss aversion was amply illustrated by an episode of "King of Cars", a reality show at a car dealership. The manager would hand out $100 bills to the salesmen in the morning, with the proviso that if they sold a car that day, they got to keep the C-note on top of their commission.

He'd found they worked much harder to retain the note once it was in their hands than if he offered a $100 bonus at the end of the day.


That's a good example, but there are many factors at play here. For instance:

- Loss aversion

- Trust (i.e. the manager believes in you): When we hear the word "bonus" we often think "that's something that happens 10% of the days". However, when the manager gives you the money at the beginning of the day they're saying "I think you can do this today. I might as well give it to you already." The manager very clearly shows that they believe in you, and they probably know what they're doing.

- The prize is visible: We know from many examples that humans become more motivated when they can physically see their prize. One part of this trick is that you have the note in your pocket. Maybe you even take it out a few times during the day.

There are a few ways to test which factor is most important. For instance, you would expect the trust factor to fade over time, because you'll realize that the manager gives you the note regardless of their faith in whether you can make it (there's nothing special about "this day" or "this employee"). You could also replace the $100 note with a more neutral coupon that says "$100 bonus". This makes the prize less visible, but we should still value it at $100. Or maybe there's a checkbox on a sheet inside the office which says "Tick off if bonus not reached". If the effect goes away, then the visibility factor is stronger than the loss-aversion factor.

This is my main beef with the pop culture around "loss aversion" (and other psychological terms): there are so many interesting things to discuss around it, but we so badly want to combine everything into one simple buzzword.


> The article brings absolutely no new information to the table.

Did you read the paper? It's not a paper that "brings new information to the table", it's a paper which presents recent experiments and tries to show that there is little scientific evidence of loss aversion.

> The basic principle behind loss aversion is simple.

Huh? I don't understand what you're saying. You say that "loss aversion" is simple, but then you give examples of losses not being universally more impactful than gains. What you're describing here is exactly the authors' point: both modes are important.

> … swayed more by greed or by fear.

And you should be aware that "loss aversion" is very careful not to talk about the psychological process behind it. Loss aversion is not about greed, fear or any feeling and/or instinct. Loss aversion is a measurable effect. None of the papers that claim loss aversion is a general principle claims that "greed is stronger than love" or anything similar. In fact, they are very "chicken" and just shrug it away.


> Did you read the paper? … Huh? … you should be aware that …

I have no skin in this one but I would like to call this out: these comments make an argument combative. They push people up a tree and make it hard to focus on the facts. Imagine user vezycash actually was swayed by your argument; how easy would it be for them to say, hey, you’re right? Pretty hard after all those comments, because it ties their pride to their viewpoints and makes changing their point of view humiliating rather than enlightening. The conversation is now a battle, and admitting fault is losing face.

I’m calling this out now but by no means is it specific to you; it happens all the time. My request to anyone here is: please leave all those phrases out. “You should...”, “did you even...” etc. The argument works just as well without them. It makes it much easier for someone to say, hey, I guess you’re right! And isn’t that what we all want, in the end? ;)

Thanks.


Yeah, I see how this turns an argument combative and that wasn't my intention. I don't think there's anything wrong with vezycash's point of view, in fact I completely agree with most/all of his points :-)

There is something to be said about "Did you read the paper?" though. We have here a story where an author has published a rather large paper (59 pages) and done a substantial amount of research (citing over 80 other published papers). I don't expect everyone to read all of that, but I wish people were more upfront about whether they're talking generally about the topic or discussing the actual story.

Like, I honestly wonder "Did you read the paper?" not because I expect everyone to read the paper, but because it means we can have a more constructive discussion. If you haven't read the paper and are confused about what the author means, then I can try to find quotations that better explain the author's position. Or maybe we can discuss the general topic (ignoring the story).


Agreed, especially when their point is that something is self-evident, in the comments section of an article that specifically goes to great lengths to argue otherwise.


Could you post a link to the 59-page article you are referring to? Thanks


Just to back this up, 'don't imply that someone didn't read the article' is actually in the HN guidelines.


In this case, however, the person asked if the poster had read the underlying scientific paper - which is not the same as the linked popular press article. It's a legit question to help frame the discussion, though with tone issues that suggest a gentler way of asking would be helpful.


This comment should be printed out and hung in Times Square.


As complex as gravity and electricity are, the underlying principles are simple. Same with loss aversion. Money isn't the only or biggest motivator.

Take a common example of loss aversion: admitting you're wrong. Why do people find it difficult to admit that they are wrong?

What's at stake here? Reputation, respect, pride, even money.

100 scientists vs Einstein is a classic example of this. Pointless wars have been fought because someone wouldn't admit being wrong. The Iraq and Vietnam wars are good examples.

Your reaction to this issue is another.

Limiting loss aversion to just economic behavior betrays a lack of understanding of the topic.


The arguments you made don't challenge what the article says one bit.

The loss aversion hypothesis is the hypothesis that, given a choice between the following two scenarios:

- Having an object x and then risking losing it.
- Being offered an object x but risking not getting it.

people are more "motivated" by the first than by the second. The claim moreover is that:

- It is a universal motivator, which means it must explain "economic behavior" (which you mention) as well as anything else. The fact that, as the article says, people prefer *keeping* a stock which is just as likely to lose value as to gain it is a *perfect* example to illustrate that it is *not* a universal motivator.

- It is not rational. There are cases where losing something, like money, is *truly* more damaging than gaining the equivalent amount. For example, if I lost $100,000 it would be much more devastating than if I gained $100,000; in that scenario it's not a psychological *bias* but a completely rational belief. This example is in the article. You cannot use examples like this one to argue in favour of "loss aversion", because there would be no evidence of an irrational bias.

Also, the fact that you keep applying the "loss aversion" hypothesis as broadly as possible, to things like wars and arguments, suggests you're assuming it applies everywhere. What about the stock example, which is mentioned in the article? That ought to prove you wrong, no?


> What's at stake here? Reputation, respect, pride, even money.

Ego. People primarily lie to themselves, in order to retain the coherent (constructed) reality/continuity of their life. (cf. cognitive dissonance)


Loss aversion specifically refers to "people's tendency to prefer avoiding losses to acquiring equivalent gains". It's not just the general idea that people are averse to losses, it's about how people value gains and losses differently.


Thanks for the phrase "incessant overgeneralization" -- I didn't even realize I was looking for that. It seems this is something the social sciences are inherently at risk of, given how close the topics are to our everyday lives.


I’d say it’s true of most knowledge, regardless of domain. AI seems like a great candidate in computer science, for example.


In the hard sciences, by contrast, the problems people work on are often less complex and more removed from everyday human life. I think this leads to much less over-generalization.


Oh god yes! I see it often in our perceptions of "the other" as well. Other cities, other societies, other places....


Over-generalization is common beyond the social sciences as well. It is often found in the basis given for false dichotomies, which are not uncommon in discussions of how to write software. It is also apparent in many of the claims made by clickbait titles.


Their argument reminds me of climate change “skeptics” arguments that there is some sort of institutional bias towards papers that support climate change as a theory, which therefore is why all the papers and all the evidence support the climate change theory.

Is there a term for this kind of logical fallacy? It’s almost an ad hominem argument against an entire group.


It's not a logical fallacy, though. There is just no evidence for it, and we understand intuitively how far-fetched it is. But it's certainly _possible_ that institutional bias explains those results. It just happens not to be the case.

If we're looking for a general logical fallacy, it might be something like, "Using the mere fact of theoretical possibility as a way to justify unlikely beliefs, or as a counter-argument to strong evidence." I'd love to know if there's a term for that. It comes up everywhere.


Didn't even have to go check the sources:

> And people are not particularly likely to sell a stock they believe has even odds of going up or down in price (in fact, in one study I performed, over 80 percent of participants said they would hold on to it).

He refuted himself right there, in the article.


That's the problem with any of these social theories. Losing what? Gaining what? We're lacking clear definitions of terms. I know it's a trope, but it really is just so unscientific.


Note: What people say they'll do and what they end up doing are usually different.

Actual investor behavior is to sell winning investments and hold on to losing investments until losses triple, then sell at a significant loss.

Recent Tesla short sellers come to mind.


> Did not even have to go to check the sources

Yeah, it would be terrible if you read through 59 pages of well-cited, well-explained text and tried to understand what the author is saying. Might as well judge everything from one sentence in an online article written for popular audiences.

> He refuted himself right there, in the article.

That example shows status quo bias, which is completely orthogonal to losses/gains: people prefer inactivity to activity. If you construct an experiment where doing nothing constitutes avoiding the "loss" (e.g. keeping an item) and doing something constitutes the "gain" (e.g. obtaining a new item), then you would expect people to prefer the first choice. For loss aversion to be a general principle you need to decouple it from the status quo bias.


Also, the title doesn't claim the evidence refutes it, just that it doesn't support it. Recalls the Carl Sagan quote, "Absence of evidence isn't evidence of absence."


This article is getting flamed pretty good. Here's another perspective I've learned as a software developer. Building on incorrect abstractions is much more damaging than redundancy in your code. It obscures the underlying design that you're trying to unearth and model, leading others down a path with a weak foundation. I think we're well served to narrow the scope of any scientific finding beyond what might seem reasonable. The human mind is a pattern identification machine, and it finds false positives in science as often as it does in religion.


I'm picking through the author's paper and I don't buy this conclusion. I have a bone to pick with his evidence: for example, he makes a comparison between "willingness to expend time to drive to obtain an accidentally left behind unused, new-condition notebook (vs. willingness to expend driving time to obtain a new notebook at no financial cost)". The extent of the former he denotes as WTP-Retain and the latter WTP-Obtain. Even if these two values are equal in a study, that doesn't contradict the principle of loss aversion. In the case where a subject has already left behind his/her notebook, the loss has already happened. How can this measure loss aversion when the loss has already occurred? I think it's good the authors are trying to tease apart action vs inaction from loss/gain but these conclusions don't seem valid to me.


He did that all throughout the linked article too: "People do not rate the pain of losing $10 to be more intense than the pleasure of gaining $10." Okay. That's not loss aversion though.

"People do not report their favorite sports team losing a game will be more impactful than their favorite sports team winning a game." Same.


How do you rate pain like this? There is no metric for pain/pleasure per dollar.

And there are other factors at play. Often the thrill is the dopamine release when you make these decisions — that’s the pain. If you’re intensely interested, losing $500 in Monopoly or scoring a run in baseball may be felt profoundly.

It all depends on context, not the means by which we measure an outcome.


Well the test subject has the option to 'undo' the loss. So the loss can be seen as optional for the purposes of the experiment.

Could we settle the argument by reading the definition of loss aversion to 10 people and asking each person whether the author's experiment measures loss aversion?


I do struggle sometimes to see how one could do proper science in the field of psychology. It always seems like every experiment contains multiple hidden assumptions, a myriad of uncontrolled-for variables and has more alternative hypotheses that fit the data than you could shake a stick at.

Is this me being all Dunning-Kruger? Is it my positivist bias obscuring my vision? Have I misunderstood the nature of the scientific method or missed some major aspect of practical epistemology? I sincerely hope so, as the alternative explanation regarding the nature and visibility of the emperor's couture is rather upsetting.

I do genuinely think I am probably - at least partially - incorrect on this. But I would like some help in shaking my sense of unease.


Loss aversion is almost trivially explained by diminishing marginal utility. The subjective value of money is roughly log(money): a meaningful change takes an extra 0 on the paycheck or on the price tag.

So, if you have $200, gaining $100 is one thing, but losing $100 is way worse: log(200) - log(100) = 0.30, while log(300) - log(200) = 0.18. A potential loss of $100 must be compensated by a potential gain of $200 to "feel" worthwhile if you already have $200, since log(400) - log(200) = log(200) - log(100).
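Spelled out in code (base-10 logs, same numbers as above):

  from math import log10

  wealth = 200
  pain_of_loss = log10(wealth) - log10(wealth - 100)  # log(200)-log(100) ~ 0.301
  joy_of_gain = log10(wealth + 100) - log10(wealth)   # log(300)-log(200) ~ 0.176
  breakeven = log10(wealth + 200) - log10(wealth)     # log(400)-log(200) ~ 0.301

  # Under log utility a $100 loss "hurts" about as much as a $200 gain "pleases".
  print(pain_of_loss, joy_of_gain, breakeven)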


Something along those lines is what I recall reading in Kahneman's book. I just double-checked the 'Loss Aversion' chapter, and indeed he writes, and elaborates further:

"What is the smallest gain that I need to balance an equal chance to lose $100? For many people the answer is about $200, twice as much as the loss. The "loss aversion ratio" has been estimated in several experiments and is usually in the range of 1.5 to 2.5. This is an average, of course; some people are much more loss averse than others. Professional risk takers in the financial markets are more tolerant of losses, probably because they do not respond emotionally to every fluctuation. When participants in an experiment were instructed to "think like a trader," they became less loss averse and their emotional reaction to losses (measured by a physiological index of emotional arousal) was sharply reduced. [...]"


>Professional risk takers in the financial markets are more tolerant of losses, probably because they do not respond emotionally to every fluctuation.

Or more plausibly, because they're richer, and the utility they stand to lose is smaller.


Good reminder. Although subconsciously I realize that, I hadn't framed it that way for myself, and after re-reading that paragraph I thought: "hmm, maybe I should turn the 'risk-seeking' dial up a bit more", which can be evaluated independently, though, depending on one's "risk profile".


Another experiment with similar results, that humans naturally think logarithmically: http://news.mit.edu/2012/thinking-logarithmically-1005


I don’t think that’s true. Marginal utility can explain the directional component but not the reference point component.

Let’s take the example from the comment above. A car salesman gets $100 in the morning and has to give it back in the evening if he doesn’t sell enough cars. Compare that to just handing out $100 if he’s successful in the evening. From an expected utility view both of these arrangements are equivalent.

Loss aversion argues that they’re not, because the salesman’s reference point for loss/gain shifts after obtaining ownership of the $100.


Hmm...

So here is an interesting thought experiment. Suppose you take a person with some appreciable intelligence (at least average) but no particular knowledge about a certain topic. In this instance, we'll let that topic be psychology.

Now we present this person with an unfortunate dilemma. For a particular hypothesis, they observe a significant amount of peer reviewed literature asserting empirical evidence in the affirmative. They don't have any real familiarity with any individual papers, but they can understand that there's an established consensus.

On the other hand they are given an article like this one, which mounts a critical refutation of all the established literature. Furthermore, this lone paper is presented to our stalwart examiner amidst the zeitgeist of a reproducibility crisis. Thus we have a mountain of peer-reviewed but indigestible evidence on one side, and one readily digestible paper on the other which specifically rebuts the mountain of evidence.

The fundamental dilemma is this: how should this person examine the available evidence to maximize their chances of coming to the correct conclusion? Should they abstain from trying to discern the truth of the matter, and strike out any opinion they could have as unqualified? Should they read through the most comprehensive surveys of available evidence to come to a full understanding? Should they take the new, contrarian paper at face value?

There are a few dimensions here which (from my view) make the dilemma nearly intractable unless you 1) abstain from an opinion, or 2) become a subject matter expert. The cloud of the reproducibility crisis is a first dimension of uncertainty: not only does it muddy the waters for existing research, but we also have to be careful not to take contradictory research at face value simply because it's contrarian. For another, we have to figure out how to weigh the reliability of evidence against our own time. It's tempting to discount the evidence of existing research when an especially compelling critique is released, because we can more easily read it and follow its arguments.

It seems like this is enormously difficult all around. How do we rate the critical correctness of differing amounts of conflicting literature published under academic uncertainty?


> The fundamental dilemma is this: how should this person examine the available evidence to maximize their chances of coming to the correct conclusion? Should they abstain from trying to discern the truth of the matter, and strike out any opinion they could have as unqualified?

> There are a few dimensions here which (from my view) make the dilemma nearly intractable unless you 1) abstain from an opinion

I think the majority of the time, you should abstain. I mean, you can certainly debate things, it's fun to do and improves your thinking. But you should be sure of very few "controversial" things. You should be very ready to change your mind about most things. There is value in just saying "I'm not sure" about most things.

How do I define controversial? Well yeah this becomes a circular problem very quickly. As others have said, practically speaking, you usually don't really need to use the latest social science research for practical purposes. Sometimes you do - but hopefully then you are an expert.

However, sometimes there are things you need to decide. E.g., what diet maximizes your health/fitness goals? A classic case where there is a huge lack of expert consensus. In these situations, I usually try to defer to whoever sounds smartest in general to me, but keep a very open mind to the idea that I might be totally wrong.


In your thought experiment, make your refuting paper a slick Netflix documentary and you have the state we live in now.


That's a good point. In some social circles I've observed, it's become fashionable to have the latest contrarian research ready to cite for popsci topics. It seems like the new way to be well-read at a cocktail party.

Documentaries providing new narratives to cleanly refute the old ones - all the while promoting social awareness - seem to be very "in" these days.


There is no alternative to 1) or 2).

You either have to follow consensus of the experts, or, to go contrary to consensus, you must understand the consensus well enough to be one of the experts.

Down any other road lies pop-sci nonsense. It's incredibly easy to be a wrong contrarian, when you don't actually understand what you are attacking.


I think you're missing option 3, which is to avoid drawing a conclusion either way - assume that there's no definite answer, and that the experts' expertise is too narrow to support using their consensus outside small studies on college students buying and selling chocolate bars.

In this case I'm happy to conclude that it's not clear if people do systematically make poor judgements on important issues due to an in-built "loss aversion" heuristic.


What you call popsci nonsense is the way most people on this planet live: making decisions based on the available data and living with the fact that we may be wrong. Nerds are a special class, and neither better nor worse.


They are absolutely better at things they are experts on.

You don't hire a plumber to do a colonoscopy.


Not better or worse in terms of value as human beings


My bet is that the "mountain of evidence" is more specific and less generalizable than assumed.

The reproducibility "crisis" is not a crisis; it is a fundamental limitation of the scientific method for things that depend on a great number of variables, many of which are unobservable and/or unknown.


Your question is certainly interesting in an epistemological sense, in a way it is the basis of relativism.

Having read Larry Laudan recently, however, I'm a big fan of his pragmatism, which sidesteps the question a bit: just use whatever works.

In this case, we don't all need to be aware of loss aversion, and really I suspect barely anyone was using it in a practical sense. A pragmatist might say that it was only ever a theory, and this is why it thrived in largely theoretical exercises (or in the case of economics, in a context where the tangible consequences were extremely far removed from the application of the theory). But to me loss aversion still seems like an observation after the fact, though I did believe it at the time, which are the worst kind of observations ;)

In short, from where I stand there is no such thing as "correctness", only what has been successfully applied and in what context, or temporary applicability if you will. Any further interpretation is usually a case of extrapolating knowledge from a vastly incomplete picture. Being a psychology student, I look at its early history as a tragic example of why this is counterproductive. Some habits are hard to break though.


So what if the "certain topic" is something with no immediately obvious thing that "works", e.g. climate change?


I'm afraid I don't know enough to talk about it, but to me it seems more a collection of observations than a "theory" in the speculative sense. As far as I'm aware, though, we have observed that cities with lower greenhouse gas levels tend to be colder, so in that sense lowering CO2 levels "works" to reduce temperature, though we can't speak of global "correctness" until we manage the same with the global temperature.

This is a special case, however, in that we have to assume correctness because otherwise we'll all be much worse off, and the possible costs of reducing pollution are slim in comparison. But if anything I think this supports my point that correctness itself doesn't matter, only the material consequences do.


But if only potential consequences matter regardless of the odds, then you get to the problem of Pascal's Wager [0]: it's best to assume God exists, because the consequences of being wrong and not doing that (going to hell) are far higher than the consequences of being right and doing it.

[0] https://en.wikipedia.org/wiki/Pascal%27s_Wager


Pascal's wager is mainly faulty because it relies on a complete lack of information, unlike climate change where we have some information. If we had any clues at all that any existing gods are benevolent, it would most definitely be the right choice from a pragmatic perspective. If we circle back to climate change as an example, it's hard to be certain but that's still a lot of clues telling us we should do something.

I assume you're not headed into religious debate territory but I don't imagine pragmatists focus much on metaphysical matters (I know I don't).


Right, so how about AI then? There's a very small risk that AI becomes extremely malicious and wipes out the entire human race. In other words, the potential loss is infinite. Since there is some (albeit very little) information that this might happen, should we spend all our resources on preventing it?


I don't follow?


For me it works like this: Before reading this HN item, my understanding was that 'Loss aversion' was very likely to be correct - let's say 98%. After reading the headline and a few comments I lower the 'chance of correctness' a little to say 93%.

Two points on this approach: 1. My assessment lies on a gradient rather than a simple true/false. 2. I rarely read the source evidence - my assessment relies mostly on the assessment of others.


I think your goal should be not to judge to what degree each side is correct, but to learn a little about what people in the area are doing, what they did in the past, what they find important, and why. That way, there is a lot more you can learn and gain.


Abstaining from the general case truth frees up time for figuring out some interesting special cases.

I don’t know the best diet for the general population. I know roughly what works for me.

I don’t know how microwaves work. I know how my microwave oven works.


After reading the paper that this article was based on, the article and title feel sensationalized. It's not that the evidence for loss aversion was wrong or statistically invalid, but that it may just point to different underlying factors or psychological mechanisms.

However, this is just a couple of researchers' opinions, and tomorrow a response article may come out saying that loss aversion IS supported by evidence.

To me, this is the sign of a healthy science.


Interesting how their review omits the paper that has conducted the most direct test of loss aversion:

https://pubsonline.informs.org/doi/abs/10.1287/mnsc.1070.071...

Also, they claim the body of evidence doesn't find support for loss aversion. But a meta-analysis on the topic does in fact find support for loss aversion (albeit the magnitude is probably smaller than originally thought):

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3189088


I don't find the listed examples convincing. Especially:

> And people are not particularly likely to sell a stock they believe has even odds of going up or down in price (in fact, in one study I performed, over 80 percent of participants said they would hold on to it).

How naïve is that? We're not interested in what people said they would do. We want to know what they did!


I not only don't find his examples particularly convincing, but in many cases I don't even think they are really examples of loss aversion to begin with. I've always interpreted loss aversion as "It's worse to lose something you've already attained than to miss out on something you never had in the first place."

His example of "Messages that frame an appeal in terms of a loss (e.g., “you will lose out by not buying our product”) are no more persuasive than messages that frame an appeal in terms of a gain (e.g., “you will gain by buying our product”)" is extremely weak IMO. In both cases the consumer never had the product to begin with. The author is trying to argue FOMO is the same thing as loss aversion, when on the contrary, FOMO is really greed.


Have you paid attention to either of the past US market busts, in 2000-2001 and 2008? People overwhelmingly ride it into the ground because of two things:

- The fear of realizing a loss

- The unrealistic expectation of upside

Also, you'd be hard pressed to find a stock IRL that you could predict in advance as "50/50" and see what participants do, wouldn't you?


It seems like it would be fairly simple to design a stock market simulation (with real money payouts) to test this.
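A minimal sketch of what one round could look like (the endowment, price move, and prompt are made-up parameters, not from any actual study):

  import random

  def run_round(endowment=10.00, move=0.50):
      """Subject holds a stock with even odds of moving up or down by
      `move` dollars and chooses to sell (lock in) or hold (gamble)."""
      choice = input("Sell or hold? [s/h] ").strip().lower()
      if choice == "s":
          return endowment
      return endowment + (move if random.random() < 0.5 else -move)

  # With even odds and unbiased subjects you'd expect sell/hold choices
  # to split roughly 50/50; a strong skew either way suggests a bias
  # (status quo, loss aversion, ...) rather than expected-value reasoning.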


Either way it is not realistic. People losing a few cents on what they know is a simulation is far, far different than people watching their life savings evaporate in an uncertain environment.

Besides, the point of that particular study isn't a stock, it's that they have a 50-50 chance of losing money they were just given. The stock is just a placeholder for anything.


> However, this process advantages incumbent theories over challengers for a number of reasons, including confirmation bias, social proof, ideological complacency, and the vested interests of scientists whose reputations and even sense of self are tied to existing theories. A consequence is scientific inertia, where weak or ill-founded theories take on a life of their own, sometimes even gaining momentum despite evidence that puts their veracity in doubt.

A wonderful, concise summary of some of the human obstacles to the progress of ideas, theories, stories and models about people and the world, scientific in nature or not.


On the face of it, and with the benefit of hindsight if the paper's theory is valid: if loss aversion as a dominant motivation were a thing, wouldn't human civilization never have progressed beyond hunter-gatherer nomads?

Doesn't the fact that we keep growing and progressing as a species, even though getting these gains exposes us to risks (not just in the getting there, but once we have arrived: the "curse/paradox of development"), suggest that loss aversion is not a majority principle of human behavior?

But... maybe...gain attraction / gain pursuit is a general principle...perhaps of all life, not just ours? Maybe that's one of the characteristics that tautologically has to define life. Life exists in an uncertain environment and couldn't continue to do so without taking risks / experimenting into that unknown.


Not really. Loss aversion doesn't mean complete refusal to face loss or complete lack of interest in gains, it just means that we weight losses more heavily than gains. I also don't think anybody's claiming it's a dominant motivation, just a common characteristic.


Interesting perspective. It doesn't really relate to how I think of it. Useful to know that's how you think.


> People do not report their favorite sports team losing a game will be more impactful than their favorite sports team winning a game.

For fans of winning teams (Warriors, Patriots, etc.) I'm not sure if this is true. They're expected to win, so watching them win can feel like nervous relief or ambivalence but watching them lose can feel like disappointment.

I think we see loss aversion in soccer too where teams will often play "not to lose" and "park the bus" rather than risk pushing forward and trying to win the game but being vulnerable to counter attacks.

In video games, however, most players seem to favor aggressive styles of play, and playing with loss aversion likely means an instant loss.

I'm sure as with most social behavior loss aversion varies and depends on a complex set of conditions. I'm sure in some contexts it's certainly at play though.


I anecdotally agree with this. Since I learned about the concept of loss aversion, I've noticed instances where I have to consciously force myself to acknowledge opportunity cost, or the tangible loss dominates in my head.


The article's author David Gal claims that 'loss aversion' is a fallacy. But in the book Thinking Fast and Slow, Daniel Kahneman says that the following experiment demonstrates loss aversion.

-----

People are randomly given one of two problems:

Problem A: In addition to whatever you own, you have been given $1,000. You are now asked to choose one of these options: a 50% chance to win $1,000, OR get $500 for sure.

Problem B: In addition to whatever you own, you have been given $2,000. You are now asked to choose one of these options: a 50% chance to lose $1,000, OR lose $500 for sure.

-----

According to Kahneman, many more people take the gamble when it is framed as a potential loss (Problem B) than when it is framed as a potential gain (Problem A): people tend to be risk-averse with gains and risk-seeking with losses.
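Note that the two problems have identical final outcomes; that's what makes the framing result striking. A quick check:

  # Outcome distributions for each option, as {final_amount: probability}.
  problem_a = {
      "gamble": {1000 + 1000: 0.5, 1000 + 0: 0.5},  # 50% chance to win $1,000
      "sure":   {1000 + 500: 1.0},                  # get $500 for sure
  }
  problem_b = {
      "gamble": {2000 - 1000: 0.5, 2000 - 0: 0.5},  # 50% chance to lose $1,000
      "sure":   {2000 - 500: 1.0},                  # lose $500 for sure
  }
  assert problem_a == problem_b  # same outcomes, only the frames differ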

So is Gal saying this experiment is not reproducible or that it doesn't demonstrate loss aversion?


Economics as a pseudo science of emphatic statements about human behaviour is looking pretty tragic.


Behavioral economics exists only because psychology has completely lost objectivity and focused on agendas. This article is a good example: it uses cognitive ‘proof’ for a behavioral hypothesis. Trash.


I've long been skeptical of much of the behavioral economics literature. Not that it's necessarily wrong, but the experiments are so contrived that they're difficult to generalize. The pattern we see is a contrived example with undergraduate students given $20 bills or something, and then cherry-picking of anecdotal real-world evidence.

It's easy to say that consumers and investors behave irrationally, it's harder to tease out hidden factors in a rational utility/loss function that may depend more on the expected value of one action.


I think the whole point of Thaler’s assertion is that loss aversion is a System 1 bias, so it will be demonstrated regardless of what people say when using their System 2 reasoning.


Losing money makes you a loser; gaining money is normal. So beyond the actual event, there are identity things at play. In those terms, spending more or less on a product doesn't actually touch that identity, because either way you're just being a spender. I agree about the consequences part: losing equals no roof vs winning equals extra vacation. I think the rest is about the identity that comes with losing.


Hypothesis: the set of people who want to see loss aversion refuted intersects heavily with the set of people who want to raise taxes.


The author is the one "peddling" the idea that loss aversion is a fallacy, but at least he seems to (ironically) recognize that his argument is itself part of the social/argumentative side of science. Even so, this article seems to draw conclusions as if they were widely held beliefs (I'm not qualified to speak to these claims until I do my own research).


Isn't the bias in loss aversion the fact that losing 50 bucks out of 100 isn't symmetric to winning 50 bucks from a capital of 100? If I lose 50, I need to double my new capital to recover my former position, but when I win 50, I "only" need to lose 33% of my new capital. When I lose, relatively speaking, I'm further away than when I win.
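The arithmetic:

  capital = 100
  after_loss = capital - 50  # 50
  after_gain = capital + 50  # 150

  # To recover from 50 back to 100 you must gain 100% of the new capital;
  # to fall from 150 back to 100 you only need to lose a third of it.
  print((capital - after_loss) / after_loss)  # 1.0
  print((after_gain - capital) / after_gain)  # 0.333...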


That would be risk aversion: the fact that the marginal utility of money/wealth decreases as one gets wealthier.

Loss aversion is the “fact” that “buy now to save $10” is less motivating than “buy now to avoid a $10 surcharge”, and that “here’s $100; if the coin comes up tails, you lose $50” is more distressing than “here’s $50; if the coin comes up heads, you gain another $50”.


The IKEA effect is a form of loss aversion. This alone shows that someone hasn't done their homework.

"A bird in hand is worth two in the bush" is a popular saying with its equivalent in almost every culture.

Diversification, which is studied, recommended and practiced by almost every investor, CEO, child... is related to loss aversion.

There are many more real life examples of loss aversion.


Calculated strategies to prevent loss are not loss aversion as psychologists use it. Neither is preventing catastrophic "I spend the rest of life in jail" loss.


Calculated or not, loss aversion is loss aversion. The principle, thinking or motive behind diversification isn't investment growth; it's to prevent total loss. In other words, loss aversion.

You said nothing about the Ikea effect.

Here's more.

Why do people stay in abusive relationships with individuals & companies? I've put in so much, can't back off now. Scammers know and use this to great effect. Once you've paid, you'll keep paying.

Why do investors rush to sell winning stocks but stick stubbornly to losing stocks or trades?

The network effect is powerful because of loss aversion. "All my contacts, friends, pictures are in..." so I can't switch.

LOSS AVERSION CHEAT SHEET.

Ask anyone for the reason behind an action. If the sentence begins with or is dominated by, "I don't want" or "I didn't want..." the action was motivated by loss aversion.


This is not loss aversion and is explicitly covered in the article.


Whether my example was stated explicitly in the article is irrelevant.

What's important is the primary motive behind an action.

In fact, strategy in the military, in business and in soccer is divided into two kinds: offensive and defensive.

Offensive strategies are primarily motivated by gain. Defensive ones by protection: holding turf, preventing loss of market share, or preventing a goal.

Both strategies use similar, virtually the same tools. And like loss aversion the difference is simply motive.

If China learns that Iceland is planning an attack on them and decides to attack first, it's a defensive strategy.

Again, the key is motive. It doesn't matter what the action is, what matters is why it's done.

If I kill someone for the heck of it (gain), it's called murder. If instead, it's to protect myself (loss aversion) it's called self defense.

Limiting loss aversion to just financial behavior is a myopic view of the subject.


The discussion is about the loss aversion phenomenon as defined and used in psychology. It is not about what you intuitively guessed from how the term sounded to you.

But you have also horribly oversimplified military and soccer strategy. Fun fact: an army can decide on an offensive strategy because defense would end in bigger losses. It may also go for a defensive strategy despite bigger losses, for some other reason.

Lastly, no, not every killing to protect yourself is self-defense. That is not how law and sentencing work.


I hadn't heard of the IKEA effect:

>"The IKEA effect is a cognitive bias in which consumers place a disproportionately high value on products they partially created." //

Which is a weird turn of phrase, as almost all IKEA stuff is already fully created; you just fit it together. I guess they mean something you put effort into realising.

I'm not sure I agree. I think people prefer stuff they took part in producing (like kids helping with cooking their own tea), but I'm not sure we consider it higher value in an objective sense ... we often prefer things that we know to be of objectively lower value, as with sentimental attachment.


I think you're confusing two meanings of value. There's an intellectual assessment of market price, a "how much could I sell this for" calculation. But there's also the observed, behavioral notion of value, which is derived from the actions people take. What you call "preference" is what people often mean by "value".


Nah, I think I'm considering non-fiduciary extrinsic value. Which is different to market value and different to preference-value.

For example (fictional), I have an old Pentium CPU, it has zero (general) market value as a functioning object; it has some cultural value as an historic artefact; it may be highly valuable to some geek somewhere, eg to run equipment they might otherwise not be able to run; it has sentimental value to me as my first CPU. My preference is unrelated to the intrinsic value of the item.


Contrarian science is important if it can successfully refute a claim, but has it in this case? Scientific American is sharing how the sausage is made (peer review science), potentially before the final product is ready.


It's not refuting a claim so much as narrowing it. I think we could afford to do a lot more of this, especially in sciences where we've made questionable amounts of progress.


Loss aversion, as it stands today, can be falsified in certain situations, so it's obviously not writ in stone. What is required is even more study of the contexts where it still holds.


A suggestion: loss aversion might be a manifestation of something else, namely poor decision-making under stress. The loss causes the stress, which causes the poor decision-making.


psychology as a discipline is not looking so hot these days


Maybe not, but please don't post unsubstantive comments to Hacker News.


I think this is a fair comment; it’s relevant and sparks an interesting debate about psychology as an overall discipline. Not sure it’s fair to call it unsubstantive.


This isn’t psychology though, it’s economics. It’s one of the most utilized sciences in the public and political world, and it’s really horrible at predicting anything, at least if you look at how rarely it’s right.

Loss aversion, and risk aversion as well, are themselves economic pseudoscience based on the psychology of compulsive habits.

This article doesn’t really depict the research paper well, though. The author isn’t saying loss aversion isn’t real, just that it’s been over-popularized in a way that isn’t founded in evidence. It also doesn’t address the other economic pseudoscience in the field, so you could quite literally write the opposite article as well.

Ironically, it was likely the media representation of “loss aversion” that blew the term out of proportion to begin with.


This is just a disagreement within a discipline, it's healthy.


I want to see what other commenters who are more knowledgeable about the field say. This is a pretty striking attack against a central theory in... Scientific American? Without citing a wealth of evidence with detailed citations? I find the logic of the argument appealing; finding no inborn bias towards gain or loss, outside of what could be derived by reason, would be a very nice thing to say about the future of humanity.

I read D. Kahneman's book Thinking Fast and Slow a number of years back, and it did present some pretty clear-looking graphs demonstrating loss aversion of 2:1, iirc. I've since lost my copy to a friend's "borrowing". ;) Certain other elements of his book have come into question, including priming. I'm eagerly waiting to see how the cookie crumbles here.


I agree here. The article mentions ideological complacency, but complacency isn't enough to award two Nobel prizes. Loss aversion is, to my layman understanding, a well enough established principle that an article like this, without some indication of a widespread shift in expert opinion, isn't going to change my view of it.


With the paradox of choice, most people are hypersaturated with choices, opportunities and distractions... so any “loss” seems inconsequential at the time, because no single option seems any more valuable than another. The key fallacy of this attitude is that the world is a small place, and life is finite and will end. That doesn’t imply feverishly hustling after every tiny opportunity, but filtering and carefully exploring choices with a sound decision framework.

Also, most people rarely make rationally beneficial decisions, thanks to a myriad of fears, prejudices, laziness, negative cognitive distortions, distractions or inadequate future-planning. Worse, most people are sold on hype, feelings, gossip, peer testimonials and appearance when they can’t be bothered to do due diligence.

OTOH, on gain avoidance/scarcity: you can put a “free” sign on a decent household good, set it out, and it won’t move... put a price tag of $100 on it and watch it get “stolen.”

tl;dr: The human condition is messy and imperfect... there’s no firm solution except thorough qualitative hypothesis testing via experiences.





