Prediction markets aren't effective because they average over a big crowd; prediction markets are effective because they provide a financial incentive for people who know nothing to bow out of the discussion, leaving only the confident people to form a market consensus.
While you're correct that real-money markets will do a better job of filtering out the less knowledgeable, play-money markets are still able to hold their own against real-money PMs. There have not been many studies comparing the two, but the few that exist find that play- and real-money markets do nearly as well as each other, with caveats.
Both play- and real-money markets offer better-than-baseline predictions, but real-money prediction markets do better when signals are less clear, i.e. when there are no odds markets to serve as a basis. And according to the markets studied in "Prediction Markets: Does Money Matter?", the discrepancy is eliminated when play-money markets weight predictions by confidence (i.e. amounts bet; this implicitly takes past success into account, since larger bets require wealth, which in a play-money setting can only be accumulated by having been right before).
Even more than money, the key factors in prediction market design are the availability of tools that make it easier to aggregate information, and strong incentives (monetary or not) for being correct.
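To make the confidence-weighting idea concrete, here's a rough sketch in Python (my own construction; the function name and numbers are illustrative, not from the paper):

    # Hypothetical sketch: aggregate each trader's probability estimate,
    # weighted by the play-money stake behind it. Large stakes require a
    # large bankroll, which in play-money settings can only come from
    # having been right before.
    def stake_weighted_probability(positions):
        """positions: list of (estimated_probability, stake) pairs."""
        total_stake = sum(stake for _, stake in positions)
        if total_stake == 0:
            raise ValueError("no stake in the market")
        return sum(p * stake for p, stake in positions) / total_stake

    # Two small uninformed bets vs. one large bet from a proven trader:
    positions = [(0.30, 10), (0.40, 10), (0.75, 200)]
    print(stake_weighted_probability(positions))  # ~0.71, dominated by the big bet

The point is that bet size doubles as a track record: a trader can only stake a lot of play money if they've won a lot of it.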
I'm not sure it's the same here -- the write-up doesn't mention providing a reward based on correctness. It just says they're weighted based on how little their opinion moves when learning the group's beliefs.
EDIT: I stand corrected. As klochner points out, there was a monetary incentive to get it right, so this is like prediction markets. Not sure why the article doesn't mention this critical part. https://news.ycombinator.com/item?id=8032493
I believe the parent is actually saying that the financial incentive is used to select people who know what they are talking about.
It's not necessarily a 1:1 match, but basically that selects for people who believe that they know what they are talking about (and thus are less likely to be swayed by the group opinion). To the extent that the current price of a proposition in prediction markets represents the group opinion, you only expect to win the bet if you think you know better than the group opinion.
They may not be exactly the same, but a prediction market provides a quantitative way to measure confidence at both an individual and group level without requiring people to revise their estimates at a later time. This is arguably superior to a weighting based on differences in opinion prior to exposure to the consensus of a different group, and after exposure to that consensus.
It seems like in this study, though, there is no incentive for those "who know nothing" to bow out, since there is no penalty for guessing wrong.
I'd say that the population that constitutes the prediction market should be "confident", while the population that guesses right in this study would be described as having opinions not influenced by the opinion of others (or at least its mean).
I guess the two are very similar, but you can also be very confident in an opinion that you have changed based on the opinions of others ;)
Caveat: In this context, "confident" means "updates their belief less after learning the group opinion" -- i.e. continues to believe what they did before even after knowing the group disagrees.
EDIT: per klochner's correction, the following is wrong, and they did get a monetary incentive, though it's not mentioned in the linked article (only the paper), which is a pretty significant difference!
Note that they did not provide a material incentive for accurate guesses, so I'm honestly surprised that this kind of confidence correlates with correctness. It requires that this feeling of confidence is more likely to occur in people who really do know better than in those who don't. (Per the writeup, if you weight each guess in the average by confidence under this metric, you get even more accurate answers than the simple mean.)
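For what it's worth, here's a minimal sketch of what I understand that weighting to be (my own construction, assuming the weight is simply the inverse of how far a subject moved after seeing the group's estimates; the paper may use a different formula):

    # Sketch: weight each subject's final estimate by the inverse of how
    # far they moved between their first (independent) estimate and their
    # final one. Subjects who barely budge get the most weight.
    def influence_weighted_mean(first_estimates, final_estimates):
        weights = [1.0 / (1.0 + abs(final - first))
                   for first, final in zip(first_estimates, final_estimates)]
        total = sum(weights)
        return sum(w * e for w, e in zip(weights, final_estimates)) / total

    # Subject 3 barely revised, so their estimate dominates the average:
    first = [100.0, 150.0, 320.0]
    final = [180.0, 210.0, 325.0]
    print(influence_weighted_mean(first, final))  # ~306, close to subject 3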
Subjects received monetary payments for each good estimate. Possible rewards were 4, 2, or 1 points if their estimates fell into the 10%, 20%, or 40% intervals around the truth; otherwise, they received no points. The rewards applied to all rounds to make sure that individuals took all of their decisions seriously. The correct answer and the rewards for all five estimates were only disclosed to the subjects after the fifth estimate. This reward structure introduced incentives only to find the truth and avoided incentives to conform with others. Furthermore, there were no incentives for strategic considerations. For example, there was no benefit of being better than others or of misleading others, because this did not affect individuals' payments. There was also no possibility to help others by deviating from the strategy to find the best estimate. Therefore, our experimental design put subjects into a situation in which they would try to get as close to the truth as possible by using their own knowledge and the estimates of others.
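In code, that reward rule reads roughly like this (my interpretation: the quote doesn't spell out whether the intervals are symmetric relative errors around the true value, so I'm assuming thresholds of 10/20/40% on |estimate - truth| / truth):

    # My reading of the quoted reward rule; "10% interval around the
    # truth" is assumed to mean a relative error of at most 10%, etc.
    def reward_points(estimate, truth):
        relative_error = abs(estimate - truth) / truth
        if relative_error <= 0.10:
            return 4
        if relative_error <= 0.20:
            return 2
        if relative_error <= 0.40:
            return 1
        return 0

    for guess in (950, 1150, 1350, 2000):      # truth = 1000
        print(guess, reward_points(guess, 1000))  # -> 4, 2, 1, 0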
Seems to me you're proving the point of the article. Rather than contributing independent critical thinking, you're more motivated to pile onto the comments of others, to the point of not even bothering to understand the comment you're piling onto.
I wish the headline made it clear that the 'confidence' it refers to is not the personality trait associated with risk-taking and leadership, but something more specific: stubbornness in one's own opinion on a topic, held because one believes it is founded on valid information that few others were aware of when forming theirs.
FWIW, I did it in one line [1]. But I think "stubborn believers" would have been an even better term and would have reduced the confusion; note, though, that it requires stubbornness plus a disincentive for being wrong to correlate with correctness.
I would add that they are stubborn in their estimate of an objectively known quantity knowing that their estimate will be compared to the "correct" value. In a situation where that threat/reassurance is absent, the incentives could be entirely different.
(Using the length of a border defined by natural features [http://www.economist.com/node/13496212] as an objectively knowable value might be problematic, but I believe that was the intent.)
Is there any difference? Surely the reason a person takes risks is because in their own opinion it's not actually much of a risk at all due to their own confidence in their opinion. No?
Did I interpret something wrong, or isn't "confidence" here something like "inverse herd mentality"? As in, while herd mentality[1] "describes how people are influenced by their peers to adopt certain behaviors, follow trends, and/or purchase items", "confidence" here is the exact opposite of that, right?
This is very interesting, though, because in some subreddits, for example, the score of a comment (or submission) isn't shown right away but only after a defined delay. This is meant to combat herd mentality, so that people do not vote on something just because others have voted it so-and-so. Instead, it forces individuals to form their own opinion and vote based on it; in a sense, it forces individuals to be "confident" and doesn't let them delegate the opinion-forming to the herd.
This is research? I'd summarize the results as: "People who actually know the answer to an objective question are less likely to change their mind when presented with an alternative than those who don't. Thus one should value their opinions over those of people who are guessing."
It's more than that. This study does not isolate based on knowledge or know-how, as in "people who actually know the answer to an objective question". It isolates based on the influence of "social" stimulus.
But the thing is, they specifically don't isolate 'social stimulus'. They offer new information, and those who knew they were guessing revised their estimates. Those who had a good idea of the value did not. There is nothing 'social' here at all.
A better control would be to ask them how sure they were of their answers, and use that to adjust the bias.
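Something like this, presumably (hypothetical sketch, not from the study):

    # Hypothetical: elicit a self-reported sureness score (0..1) alongside
    # each guess and use it directly as the weight, instead of inferring
    # confidence from how little the guess moved after seeing the group.
    def sureness_weighted_mean(estimates, sureness):
        total = sum(sureness)
        return sum(s * e for s, e in zip(sureness, estimates)) / total

    print(sureness_weighted_mean([100.0, 300.0, 320.0], [0.2, 0.9, 0.8]))  # ~287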
Two other interesting recent articles on crowd behavior:
1) Why can't we be better at predicting successes? (movies, music, etc.)
Turns out that when the crowd has social information thrown at it (the previous number of downloads shown in the listing), success becomes largely unpredictable from quality alone. http://www.filosofitis.com.ar/archivos/experimentalmarket.pd... Social data is a strong force.
2) Rice vs. wheat.
Wheat farming might be a reason why the industrial revolution started in Britain rather than China. Even though China was highly educated, people working in rice farming depend on large groups working collectively, and a culture of collectivism then forms. Collectivism has been shown to breed less innovation than cultures where people are more individualistic (like wheat farmers). http://internationalpsychoanalysis.net/wp-content/uploads/20...
Thought a discussion of Wisdom of Crowds might enjoy those if you haven't seen them.
The industrial revolution starting in Britain as opposed to China is almost certainly completely unrelated to farming methods, and much more related to politics. China during the industrial revolution was in the grip of a very weak, corrupt, and financially mismanaged dynasty. This set the stage for the Opium Wars and continued decline for the next 200 years, a period which has only recently ended. Prior to that, China was one of the most advanced civilisations in the world for most of its history.
Also note that the staple food of most of Northern China (including Xian, Beijing, and a huge share of the population) is, and has historically been, wheat-based.
It's always key to take this kind of "indirect" causation in anthropological fields with a huge grain of salt. It might be true, but it is not really "science" if the hypothesis can't be tested quantitatively.
I think there is an important difference between situations where the truth is objective (and can and will be measured accurately and fairly when the payoff is determined) and situations where the truth is "made", for lack of a better word.
The best example, of course, is financial markets vs weight of an ox. In financial markets, we are often dealing with a "Keynesian beauty contest" (http://en.wikipedia.org/wiki/Keynesian_beauty_contest). I would doubt that those whose beliefs are insensitive to data concerning the crowd would do well, at least in the short to medium term.
Whereas in weighing an ox at a fair, that truth is absolutely orthogonal to what the crowd thinks (granted, the odds of a bet structure may vary, but the study didn't get into such things AFAIK). There would be no benefit for an informed observer to dilute his prior with the crowd's ideas, so having a concentrated prior may signal experience, inside knowledge, etc. much more clearly than in the above situation.
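A toy way to see this is standard normal-normal updating, where each source of information is weighted by its precision (inverse variance); the numbers here are made up:

    # Toy normal-normal updating (made-up numbers): combine a private
    # estimate with the crowd mean, each weighted by its precision
    # (inverse variance).
    def combine(private_mean, private_var, crowd_mean, crowd_var):
        w_private, w_crowd = 1.0 / private_var, 1.0 / crowd_var
        return (w_private * private_mean + w_crowd * crowd_mean) / (w_private + w_crowd)

    crowd_mean, crowd_var = 1200.0, 400.0
    print(combine(1000.0, 25.0, crowd_mean, crowd_var))    # informed: ~1012
    print(combine(1000.0, 2500.0, crowd_mean, crowd_var))  # guessing: ~1172

The informed observer (tight private estimate) barely moves toward the crowd, while the guesser is pulled most of the way there.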
(where anti-Bush folks mocked Team Bush's supposed irrationality, ignoring that truth in human sociopolitical issues is not independent of the actors' choices)
Running through a chain of articles and excerpts from Lakoff and Luntz one week seemed to work wonders for making sense of difficult sociopolitical conversations with certain friends.
The famous ox-weight example demonstrates the wisdom of crowds, but it doesn't seem applicable to this article. I doubt that fairgoers who entered the contest were conferring with each other enough to influence one another's guesses, as opposed to, say, debates on public policy.
That's why the wisdom of the crowd works better for guessing a slaughtered ox's weight at the fair than it does on HN or Reddit. There is obviously a very large group of very smart individuals who frequent HN; to disagree comes with the fear of looking stupid and being downvoted.
Wait... are you actually, genuinely afraid of being downvoted? It's okay to be wrong sometimes, dude! Sometimes being very wrong is the only way to learn and help you further down the road. You win some, you lose some. If you're afraid of being wrong on the internet anonymously I can only imagine what it's like in the real world...
Maybe this speaks to the timidity/introversion of many HN posters but remember we are also entrepreneurial -- gotta take some risks!
Haha, check out my post history; I am far from afraid of being downvoted. I actually feel like I usually know when I will be downvoted, and I still post. I would guess most others would not, and that affects the overall quality in a negative way.
I would like to see the outliers from the experiment, to see if there was a large group of individuals who were overconfident and made estimates way off compared to the crowd's.
http://en.wikipedia.org/wiki/Prediction_market