Prediction markets would influence the processes they're supposed to be impartially observing.
When there's a lot of money riding on something happening, it tends to happen. Sports and traditional financial markets have been dealing with this problem since forever.
Richard: They say Bin Laden is just going to put all his money in the market and then attack?
Robin: Well, that was crazy because these were relatively thin markets, and they didn't have a lot of money at stake. Basically, there's a fact about these markets that people don't know. Many people criticize them by saying, “Well, somebody will try to manipulate the markets by betting on one side not because they know better, but because they’re willing to lose money in order to distort the market price.”
That is true. There are people willing to manipulate markets, but that actually makes the prices more accurate. For example, in the fire-the-CEO market you say, “Well, the CEO wants to keep his job, so he will bet in these markets to make himself look good, pushing the price to say the company will be worth more if he stays and less if he leaves.”
Yes, he would have an incentive to do that, but when other traders know that somebody will be trying to manipulate the market, they know to increase their trading and their effort, and that compensates; on net it actually makes the prices more accurate. That’s something we see in theory, we’ve seen in the lab, and we’ve seen in the field. These markets are robust to attempts to manipulate them. In fact, people who want to manipulate them make the prices more accurate.
Oddly enough, in financial markets people do not rely on self-correction to deal with manipulation; instead, manipulation is generally illegal. Market manipulation is, in general, not believed to make markets more accurate.
Any kind of self-correction suggested here relies on the other traders having sufficient funds/capital/risk-taking capacity to overcome the manipulator, and on them coordinating their views to some extent. In the CEO example, how would other traders even know that the CEO was making the bet? What if some other informed party made that bet? In small, open, and rather serene settings that might work out, but what about a real market with fast movements, market makers, etc.?
> manipulating reality in order to win money in the market.
Analogy: working extra hard to make your company succeed because it'll increase the value of your company stock.
I think the idea is that these prediction markets are set up in a way so that the outcomes are generally deemed desirable (increase GDP for example) and it's OK to manipulate reality (do work) to achieve them while making money.
1. Government body says "We are considering adding full Github-flavored markdown support to Hacker News. Please establish a market on whether or not this will raise GDP by 1% over the next year."
2. People place their bets, either in the "Yes please" pool, or the "No to markdown" pool.
3. "Yes please" gets the most betting money. Betting period ends. Government decides to add markdown. People in the "No to markdown" pool get their money back.
4. If GDP does in fact go up over that 1 year, people in the "Yes please" pool make money. If not, they lose their bets.
There isn't a mechanism to make money by sabotaging GDP.
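A minimal sketch of those mechanics, under an assumption the comment doesn't spell out: bettors buy $1-payout shares on "target met" vs. "target missed" within each policy branch, and the branch that isn't adopted is refunded. (That assumption also answers the "who do they lose to" question below: winners in the adopted branch are paid out of the losers' stakes in that same branch.)

```python
# Minimal sketch, assuming $1-payout shares per policy branch (an assumption,
# not something specified in the comment above).

def settle(bets, adopted_policy, target_met):
    """bets: list of (trader, policy, side, shares, price_paid_per_share).
    Returns net payout per trader."""
    payouts = {}
    for trader, policy, side, shares, price in bets:
        if policy != adopted_policy:
            # Branch not adopted: trades are voided and stakes refunded.
            payouts[trader] = payouts.get(trader, 0.0) + shares * price
        else:
            # Adopted branch: "yes" shares pay $1 if the GDP target is met,
            # "no" shares pay $1 if it is missed; losers' stakes fund winners.
            won = (side == "yes") == target_met
            payouts[trader] = payouts.get(trader, 0.0) + (shares * 1.0 if won else 0.0)
    return payouts

# Example: markdown is adopted and the GDP target is met.
bets = [
    ("alice", "markdown", "yes", 100, 0.60),     # paid $60, receives $100
    ("bob",   "markdown", "no",  100, 0.40),     # paid $40, receives $0
    ("carol", "no_markdown", "yes", 50, 0.55),   # branch voided, $27.50 refunded
]
print(settle(bets, adopted_policy="markdown", target_met=True))
```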
> If GDP does in fact go up over that 1 year, people in the "Yes please" pool make money. If not, they lose their bets.
Who do they lose their bets to? If the answer is the money is just set on fire or some equivalent (which I think is actually a mechanism we should use more of when it comes to things like fines), where does the money come from if they win?
That article doesn't deal with the issue of directly manipulating the thing measured by the success metric (and implicitly assumes that it cannot be manipulated).
To take the example given in the article of bailing out banks and GDP: as soon as the "yes" trades are reverted and the "no" trades are confirmed, we collapse exactly into the scenario many other commenters in this thread have talked about. Now all holders of negative amounts of "no" tokens are incentivized to decrease GDP in 10 years, because this increases their profits at the expense of holders of positive amounts of "no" tokens. The argument is presumably that there is an equal incentive on the other side to increase GDP, but that's a fragile assumption (in real-world betting markets we still see fraud skewed toward those with power, even though in theory fraud could "pull in both directions"). It also still leaves open the more general fragility-of-value problem (as the article refers to it later on), namely that someone who can manipulate GDP doesn't expose this when the betting occurs (which, again, happens in real-world betting markets); I'll talk about that in a bit.
(Although if someone could explain "after ten years everyone holding the asset on the “no” market gets $26.20 apiece." that would be great, because I think that's a typo and the $26.20 just exchanges hands immediately, or alternatively everyone who's sold the asset gets $26.20 apiece rather than those who hold it? That's however irrelevant for the larger point.)
The problem we're talking about is basically the same one as a problem the article itself points out later.
> A futarchy-as-government, especially if unrestrained, has the potential to run into serious unexpected issues when combined with the fragility-of-value problem... Of course, in reality, futarchies would patch the value function and make a new bill to reverse the original bill before implementing any such obvious egregious cases, but if such reversions become too commonplace then the futarchy essentially degrades into being a traditional democracy.
The problem here can be recast as a version of the fragility-of-value: the value function is no longer accurate because it has/can be manipulated. But the half-solution that the article hand-waves, namely "futarchies would patch the value function and make a new bill to reverse the original bill before implementing any such obvious egregious cases" is doing a lot of work here and should be viewed with a great deal of suspicion.
It's not as relevant for the article, which explicitly points out it's not advocating for futarchy as government (or at least not for all governance rather than e.g. just party selection), which is probably why it's hand-waved away, but if you care about futarchy as government this is extremely important and it is not at all apparent that this "patching" would occur, especially given that the financial incentives are magnified vs a traditional democracy and there is potentially no way of knowing the value function is being manipulated, up until the very moment it is manipulated (and even then it may not be apparent that that is happening!).
In a way, truly solving the fragility-of-value problem is basically solving the same problem as AI existential risk and I would assume most people in the futarchy community agree that the latter problem is a very difficult problem. A failure mode of futarchy can then be thought of as a "monetary AI" completely optimizing for the wrong thing in spirit, even if it's the right thing in letter, e.g. manipulation of the value under measurement.
How do the "No" people win in this scenario? What happens if Markdown is added and GDP goes down? How do you determine that markdown actually caused the change in GDP? (Aren't you just asking people to conditionally bet on GDP going up?) Who are people betting against if "No" loses? (Whose money do they get?)
Wait, that can't work; it's comparing against a counterfactual. Let's say Hacker News gets markdown. Also COVID disappears at the same time. GDP goes up by 1%, but maybe that's because COVID is gone. How do we know what GDP would have done if Hacker News didn't get markdown?
>How do we know what GDP would have done if Hacker News didn't get markdown?
We do not, but that's not the purpose of a prediction market.
The purpose of the market is not to prove that the change caused the result, but to choose the best path forward in a world of uncertainty.
You don't have to have a perfect crystal ball to make useful predictions about the future. Yes, sometimes a black swan event will cause a "good" prediction to not pan out, but on average if you can make better-than-chance predictions about the future and policies based on those predictions will personally gain predictors value, then people are incentivized to make good predictions on average.
Just like someone who can count cards won't win every blackjack hand, if they play for long enough, they are a favorite.
Sure, but the point is that with futarchy as proposed you don't actually have a prediction market on whether the policy is good, you have the government looking at the delta between markets for and against a policy, and the one for the policy which isn't enacted gets voided so anyone manipulating its price loses nothing.
Big Markdown shorts the "No Markdown" policy, it doesn't get implemented and they get their short positions back again.
There's really no prediction market incentive to bet against them: so long as they keep pumping money in to move the rate, none of your bets on the correct rate of GDP without Markdown will ever get paid out on. They only need to bid it one basis point below the price of the With Markdown GDP futures contract to get their policy, so even if you have deep enough pockets to outbid them and good reason to believe Markdown has no effect on GDP, you'd be risking a lot (GDP is pretty volatile and expert forecasts are regularly more than a basis point out in either direction) to win very little if they didn't get their way.
Maybe I don't understand the bets because they weren't clearly spelled out. Even if HN markdown was actually what held GDP back from 5% growth, you wouldn't be able to tell.
The government asked for a market "on whether or not markdown will raise GDP by 1%". Presumably, they want to know if it adds an additional 1% on top of other growth (which is what? Let's say 2%?).
But there are four possible future outcomes to consider:
1. HN gets markdown, GDP grows 3% or higher
2. HN gets markdown, GDP grows less than 3%
3. HN gets no markdown, GDP grows 3% or higher
4. HN gets no markdown, GDP grows less than 3%
Of these possible outcomes, two will be discarded based on the decision to implement markdown, presumably made based on the betting odds, and then one more will be discarded based on GDP's measured performance over the following year.
Let's structure this as a bet. If you believe markdown is good, then you'll believe P(1)+P(4) > P(2)+P(3).
So you say: "I bet that either HN will get markdown and GDP will grow 3+%, or HN won't get markdown and GDP will grow less than 3%". I don't like markdown so I consider taking the opposing side.
But, I also believe COVID vaccines are going to cause GDP to grow 10% next year, and markdown will have a trivial impact in either direction. So in my opinion, P(1)+P(3) >> P(2)+P(4).
Then in that case I'm mostly betting on whether or not I think markdown is going to get implemented, not whether I think it's going to be beneficial. And if the implementation decision is going to be based on which side bets more money, then that's mostly a popularity contest. I just want to bet alongside whichever side is winning if I think the GDP is going to go up anyway for other reasons.
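A quick, illustrative expected-value check of that intuition, with invented numbers (it assumes $1-payout conditional shares and that the non-adopted branch is refunded):

```python
# Illustrative only: if you believe GDP clears the 3% bar regardless of
# markdown, your expected profit is driven almost entirely by which policy
# you think will be adopted, not by markdown's effect on GDP.

def expected_profit(p_adopted, p_target_given_adopted, price):
    """EV per $1-payout 'yes' share in one branch; if the branch isn't
    adopted, the stake is refunded, contributing zero EV."""
    return p_adopted * (p_target_given_adopted - price)

# Belief: vaccines push GDP up ~10% either way, markdown is irrelevant.
P_TARGET_WITH_MD = 0.90
P_TARGET_WITHOUT_MD = 0.90
PRICE = 0.70  # same ask price in both branches, for simplicity

for p_adopt_md in (0.2, 0.5, 0.8):
    ev_md = expected_profit(p_adopt_md, P_TARGET_WITH_MD, PRICE)
    ev_no_md = expected_profit(1 - p_adopt_md, P_TARGET_WITHOUT_MD, PRICE)
    print(f"P(adopt markdown)={p_adopt_md:.1f}  "
          f"EV(yes-if-markdown)={ev_md:+.3f}  EV(yes-if-no-markdown)={ev_no_md:+.3f}")
# Whichever branch is more likely to be adopted is the better trade,
# i.e. the bet mostly tracks the popularity contest.
```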
I think that your example largely shows that it's not useful to make prediction markets about GDP for policies that have effectively no impact on GDP.
If HN wanted to run a prediction market for features, they should use a measure like user growth, or average score of posts that use the feature or something. Something directly related to the feature in question.
But presumably there are policies that have an impact on GDP, right? A prediction market there is going to function properly. If you want to know what tax rate or pandemic policy the government should implement, it will absolutely have an impact on GDP.
Wouldn't this also open the door to foreign influence dumping money and misinformation to promote negative policy decisions?
Or not even foreign influence. It's just rephrasing money = speech. Instead of spending millions to influence voters, you spend millions on positions, and you get your money back if you don't win your side? So it lets megacorps/trade groups get richer by replacing the uncertainty of lobbying with outright buying policy, once you hit a certain critical threshold of money to throw around.
Yes, plausibly. Maybe you could limit this by requiring bets to be made by individual citizens (in the case of government policy markets) or by shareholders (in the case of a corporate market) or something.
There are still avenues for corruption, but it's not clear to me that they are worse than the present state. There is of course always the chance that those attempting to manipulate the market will lose their shirts.
That's where Hanson's idea of futarchy (as opposed to plain prediction markets) comes into play. You don't bet on an event A; you bet on A given B, or on A given not B. So the prediction market wouldn't be for one asset paying $1 if there's a streaker, but for two assets: one that pays $1 if there's a streaker given there are 100 or more security guards, and another that pays $1 if there's a streaker given there are fewer.
I don't see how this helps any with the underlying problem. All it does is introduce an additional incentive for another party to manipulate the outcome in order to win money by betting: whoever runs the security guards.
So the potential streaker has one extra step, discovering the amount of guards, before placing the bet? How does this solve the problem instead of creating even more of a plutocracy? Gaming the system will always be possible, except for a smaller amount of people.
Well, but the reason we want to decide the number of guards in the first place is that we want to prevent a streaker. If, in order to decide the number of guards, we use a process that increases the odds of a streaker, isn't that counterproductive?
>Prediction markets would influence the processes they're supposed to be impartially observing.
Who said anything about impartiality? (Cordially, that's a strawman.)
Changing behaviors is a feature, not a bug. Your health insurance writer (who has bought "No, optimalsolver will not get sick") loses money if you do in fact get sick. Your fire insurance writer has an incentive to provide you free fire inspections because it reduces their payouts. A farmer plants a lucrative but fragile crop because a meteorologist can price weather risks better than they can.
Swapping exposure across space and time is a productive act.
Swapping exposure is only productive if you have many guardrails outside of the system that absolutely constrain what actions players inside the system are allowed to take. Otherwise, and this has been borne out in reality, you end up with very distorted incentives.
This is covered today by what we call fraud (whether that be insurance fraud or market manipulation fraud), which tries to set bounds on what acceptable behavior is so that you can try to eliminate pathological edge cases. I don't see how this would be handled if everything at a top-level is handled through prediction markets.
More important than guardrails (which make a value judgment about what actions are "allowed") is a way to determine how to settle the thing that bets are being made on ("did these actions that have stakes on them actually happen?"). Augur[0] does this by:
"Once a Market’s underlying event occurs, the Outcome must be determined in order for the Market to Finalize and begin Settlement. Outcomes are determined by Augur’s Decentralized Oracle, which consists of profit-motivated Reporters, who simply report the actual, real-world Outcome of the event. Anyone who owns REP may participate in the Reporting and Disputing of Outcomes. Reporters whose Reports are consistent with consensus are financially rewarded, while those whose Reports are not consistent with consensus are financially penalized."
Where the market here can take these states/phases:
> Pre-Reporting
> Designated Reporting
> Open Reporting
> Waiting for the Next Fee Window to Begin
> Dispute Round
> Fork
> Finalized
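For what it's worth, that lifecycle can be read as a simple state machine; here's a loose sketch of the phases listed above (this is just the quoted list re-expressed, not Augur's actual contract logic, which handles designated vs. open reporting, repeated dispute rounds, and forks in more detail):

```python
# Rough sketch only: the reporting phases quoted above as an enumeration,
# plus the path a typical undisputed market would walk.
from enum import Enum, auto

class MarketPhase(Enum):
    PRE_REPORTING = auto()
    DESIGNATED_REPORTING = auto()
    OPEN_REPORTING = auto()               # if the designated reporter doesn't report
    WAITING_FOR_NEXT_FEE_WINDOW = auto()
    DISPUTE_ROUND = auto()                # may repeat if outcomes keep being disputed
    FORK = auto()                         # last-resort resolution
    FINALIZED = auto()

# A typical undisputed market simply walks this path:
UNDISPUTED_PATH = [
    MarketPhase.PRE_REPORTING,
    MarketPhase.DESIGNATED_REPORTING,
    MarketPhase.WAITING_FOR_NEXT_FEE_WINDOW,
    MarketPhase.FINALIZED,
]
```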
The alternative to putting your money into the prediction market to influence the outcome is to try to influence the outcome in some other way (e.g. a disinformation campaign).
And when bad actors put their money into the prediction markets, they draw further attention to their false claim, something that can't be said for the alternatives.
This is true, but in practice a very small problem.
People who can control or majorly influence these big outcomes - elections, sports matches, acquisitions, etc. - generally have a much, much larger stake in the outcome itself than any side bets.
A lot of dynamic Econ models have some condition where my choice today depends on an expectation of the future. Like how much I choose to eat today depends in part on my best prediction of how much I’ll eat tomorrow. For aggregate decisions, getting better predictions should improve welfare.
Imagine a prediction market had an entry for whether or not someone would streak across the field during a Super Bowl. Someone might see this, buy into the market, and then go streak across the field to force the outcome to favor their position.
Prediction markets give financial incentive to force specific outcomes. They aren’t just observations: They become incentives to influence the outcome. The bigger the market, the bigger the incentive.
I feel like this is left as an exercise to the reader and no proof is needed. Bettors in a prediction market are not divine speculators causally divorced from the real world. They are embedded in it, and when there are a lot of them and their financial incentives point toward a particular outcome, they might act in ways that aggregate to a greater likelihood of the event transpiring than in the counterfactual setting where they are pure observers wagering in a vacuum. It seems to me there are multiple ways to formalize and prove this, which contributes to my perception that the original comment is self-evident. If that is not the case then something is wrong in my intuition.
You can find many simple but observed real world examples of unintended consequences from indexing on observation by searching for examples of Goodhart's Law (which, as commonly generalized, should actually be called Strathern's Law):
"When a measure becomes a target, it ceases to be a good measure."
Years ago there was a libertarian proposing this as "assassination politics": simply create a system that allows people to bet that "politician X will be assassinated before time T". People who want X assassinated take the other side of that bet. Eventually there may be enough money in the pot that someone considers it worth making the hit and collecting the bet, and the people on one side have paid for it without directly paying for it.
Yes. The market facilitates payment for crime, e.g., you can't buy a company's secrets from employees directly but in a free predictions market leakers would be paid. Looks like a cynical way to expand the scope of what you can buy with gobs of money, beyond the current boundaries of law, decency and fair play.
>you can't buy a company's secrets from employees directly but in a free predictions market leakers would be paid.
Conversely though it creates an incentive for information to flow to everyone in the market instead of just to insiders. Despite shitty motives, it seems like it forces more transparency which seems like a net positive.
There was a prediction market on the number of times celebrities tweet a week. Someone found one of the celebrities live streaming and kept paying them to delete their tweets.
Markets are not based on fixed principles but are basically a game of psychology mixed with a few principles. So expectations play a big role in market outcomes.
If I had 500M to manage, I may just buy a lot of real estate in a specific place to drive up the prices in that area. I want prices to increase and with money I can create circumstances that would favor my preferred outcome.
The solution seems simple: predictions should be blind until the deadline. This also exploits the "wisdom of the crowd", where crowds are smarter when each individual's decision is independent of the others.
The goal isn’t to have people place bets and see who is right later. The goal is to expose the predictions to market forces and make people put their money on the line, thereby (theoretically) improving the quality of predictions.
If you reduce transparency you also lose out on a lot of utility that centers around the wagered amount being an indicator of certainty.
Suppose you're using prediction markets to encourage thorough code reviews, specifically with an eye towards catching malicious commits. Run-of-the-mill non-malicious PRs get lots of little "yes" wagers and are merged without exceptional scrutiny from the package maintainers. Then a malicious commit comes along and a reviewer wagers $100 on "not merged". This captures the attention of the maintainers and they give the PR extra scrutiny. It turns out it has a malicious commit, so it doesn't get merged, and the reviewer who found the flaw is rewarded with the money wagered by those who didn't find the flaw (plus some from the stakeholders, who seed the market with some "no" money to encourage participation in the game even in boring non-adversarial times).
If you hide the predictions and the amounts, you can't use the unsettled bets as inputs for decision making.
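A hypothetical sketch of that setup (the names, threshold, and pool-splitting rule are my own assumptions, just to make the incentive concrete):

```python
# Hypothetical sketch: open "not merged" wagers act as a scrutiny signal,
# and the pool settles once the PR is resolved.

SCRUTINY_THRESHOLD = 50.0  # dollars on "not_merged" that triggers extra review

def review_priority(wagers):
    """wagers: list of (bettor, side, amount), side in {"merged", "not_merged"}."""
    against = sum(amt for _, side, amt in wagers if side == "not_merged")
    return "extra scrutiny" if against >= SCRUTINY_THRESHOLD else "normal review"

def settle(wagers, outcome):
    """Losers' stakes are split among winners in proportion to their stakes."""
    winners = [(bettor, amt) for bettor, side, amt in wagers if side == outcome]
    losing_pot = sum(amt for _, side, amt in wagers if side != outcome)
    winning_stake = sum(amt for _, amt in winners)
    return {bettor: amt + losing_pot * (amt / winning_stake) for bettor, amt in winners}

pr_wagers = [
    ("maintainer_seed", "not_merged", 10.0),  # seed money to reward vigilance
    ("alice", "merged", 5.0),
    ("bob", "merged", 5.0),
    ("carol", "not_merged", 100.0),           # carol spotted something suspicious
]
print(review_priority(pr_wagers))             # -> extra scrutiny
print(settle(pr_wagers, "not_merged"))        # carol collects most of the pool
```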
What do you mean by predictions are blind? The issue the parent is describing is (as an extreme case): I go on a prediction market for when someone will die, put all of my money on tomorrow, and then kill the person tomorrow.
You could imagine a lighter version where I run for some public office, bet a billion against me winning, and then drop out of the race.
The latter example is easily taken care of, though, because there has to be a market willing to take your bet: if the market has no reason to believe you would win, your odds are really low, and thus your winnings on your billion are really low. On top of that, your risk is not symmetric, as you've exposed yourself to someone spending, what, millions to campaign for you and thus bet against that pool.
My point is your example is contrived and not really useful.
The price of a prediction depends on what people are currently predicting, so you can't keep it blind. Well, I guess one way would be to put in limit orders at odds you're willing to take and not reveal the market price. However, I don't know whether people would use a prediction market with that kind of feature.
None of these things are really new, but I wish folks like this would better understand what makes markets work:
Markets need participants, capital, liquidity, effort, etc. to become efficient. A lot of prediction markets suffer from the fact that volatility in them is limited and that the outcome is not of interest to many people, so the markets cannot really do what markets are supposed to do.
If markets are aggregating crummy data and information, no useful market will form. And that is observable even in much more developed financial markets, where some things really have trouble with price formation. And then you might use auction methods, for example.
Similarly, if stuff happens rarely and you only get very limited shots at being right, then averaging and aggregation are not so useful, because you cannot get an average (e.g., you can do one thing for the next five years and you'd better be right).
Adding to your point, most successful markets are positive-sum -- hedgers gain value from mitigating their structural risks, and speculators get paid to assume price risk. For example, the wheat futures market has two natural participants -- farmers and bakers. Farmers can sell future produce to buy seeds right now. Bakers can hedge their wheat price exposure to reduce their chance of getting ruined by a bad harvest. Speculators get paid to hold onto wheat futures contracts if a farmer wants to sell a future when the bakers aren't around to buy (presumably baking), selling to a baker later for a higher price reflecting the price risk assumed.
All of these participants derive value from the market.
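To make that concrete, a toy numeric version of the wheat example (prices are invented):

```python
# Toy numbers only: the farmer locks in a sale price, the baker locks in a
# purchase price, and the speculator gets paid for carrying risk in between.

FUTURE_SOLD_BY_FARMER = 5.00    # spring: farmer sells a future to the speculator
FUTURE_BOUGHT_BY_BAKER = 5.20   # later: baker buys that future from the speculator
SPOT_AT_HARVEST = 6.00          # spot wheat price at delivery

# Farmer: sells wheat at spot plus short-futures P&L -> revenue locked at 5.00.
farmer_revenue = SPOT_AT_HARVEST + (FUTURE_SOLD_BY_FARMER - SPOT_AT_HARVEST)

# Baker: buys wheat at spot minus long-futures P&L -> cost locked at 5.20.
baker_cost = SPOT_AT_HARVEST - (SPOT_AT_HARVEST - FUTURE_BOUGHT_BY_BAKER)

# Speculator: bought at 5.00, sold at 5.20 -> 0.20 for bridging the time gap.
speculator_pnl = FUTURE_BOUGHT_BY_BAKER - FUTURE_SOLD_BY_FARMER

print(farmer_revenue, baker_cost, speculator_pnl)  # 5.0, 5.2, 0.2 per bushel
```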
It's not clear to me that prediction markets usually have natural hedging participants (maybe political operatives, but the tx costs are probably too high relative to the value at stake).
Prediction markets have been a thing for at least 25 years. I get the intellectual appeal, and they may be useful in certain niches. But I think their lack of significant uptake or impact is telling.
Prediction markets have huge positive externalities, as they help non-PM participants predict the future! One problem PMs have is that non-PM participants often feel that people betting short (or in some cases long) are 'hoping for failure'.
Even when you have a market with immense amounts of liquidity, the market won't be efficient if all the participants are wrong.
I remember back in 2012 InTrade had a market for the US Presidential election, the odds were wrong, stayed wrong for months, and actually got more wrong close to the event. You could get hourly liquidity in the tens of thousands almost throughout (I put on $20k in this market, I wasn't touching the sides, it was incredible).
Most of the people who talk a lot about prediction markets haven't worked in markets. Markets aren't magic. They work better with binary outcomes but they cannot be smarter than the people making bets in that market...and they aren't (I have most experience with financial markets, which just don't work well at all, but have quite a bit of experience with binary markets too...they have only become more accurate as our knowledge about the underlying events increased...if you look at binary markets where knowledge is limited in some way, markets are not efficient).
Seconded: the predictive quality of markets is not at all a given. For example, financial markets can have real arbitrage opportunities sitting open, in size, that no one removes (when capital requirements are high or volatile), so there is no unique predictive state there.
Yes that's right. The price of a futures contract is not a prediction of the price of the underlying commodity in the future. It's simply today's clearing price between hedgers and speculators. If a hedger feels they are underhedged, they are going to buy (or sell) more of the futures contract until they feel safe, and won't really care about the impact on price of their hedging action. Likewise, speculators can bid prices way up (GameStop?) with little regard for fundamental value today or in the future. It ain't magic...
> A lot of prediction markets suffer from the fact that volatility in them is limited and that the outcome is not of interest to many people
+1. This is why I'm more interested these days in prediction markets within (large) institutions, like tech companies, universities, or government agencies.
That way, the participants have (a) a shared domain of interest, and (b) can speculate about internal, nonpublic matters.
Perhaps most importantly, if the prediction market improves the wisdom of the institution, the participants benefit - even if they aren't profitable in the market itself.
It’s a bit presumptuous that these folks don’t understand what makes markets work.
They went into quite a bit of detail but can’t cover everything. Most of the examples were quite top-of-funnel: fire-the-CEO markets, economic impacts of new laws being passed. For major bills or companies, I wouldn’t expect any shortage of liquidity there.
It’s typical of comments in any forum to mostly be critical, but what takes more guts and cleverness is to connect the dots to improve upon the idea. You seem to have a good mind so I’d encourage you to try applying it in that way.
Bit presumptuous to assume I have not been involved with the creation of markets, trading, etc. I have seen stuff work as well as fail up close in large arenas. [Edit: sorry, I should not snark. You are right that criticizing is easy creation is not.]
Going from academic ideas of markets and experiments to actual deep and useful markets is surprisingly difficult.
EDIT: the failure to create proper working markets for GDP-linked derivatives is a good example of something that should exist but actually doesn't. Not enough market-making risk takers, limited hedging demand, unbalanced demand between the long and short sides, index issues, ...
I didn’t assume that of you at all, I actually had assumed the opposite and that you had experience in the area.
I agree it would be challenging to have deep and useful markets, but perhaps there’s more innovation to be made there — and many of these high level markets that are top of public consciousness I’d expect to have plenty of liquidity.
In 2005 when I joined Google, they actually HAD prediction markets (no $$ involved) on business questions like "Gmail will have 50 million 7-day actives on April 1." It seemed so hip and with-it. Top management can get the real truth about things, rather than just asking their underlings!
The markets didn't take off, they withered. AFAIK they don't have them anymore. A friend of mine actually created an open source project for setting up your own prediction market. We can ask him how many users he got.
We've had betting markets on politics for a long time, too. I lost a few dollars on Amy Klobuchar in the 2020 primaries, although I had a 3x profit for a while (didn't sell, damn it).
So the interesting question isn't "is this a good idea?" but "why hasn't it taken over the world already?"
> Richard: Do you put futarchy in a larger intellectual tradition? Because a lot of people when they're coming up with an idea... Did you come up with this term by the way?
If there was no money involved it sounds like Google actually didn't have prediction markets. This is like saying that because nobody will take care of my lawn in exchange for Monopoly money lawn care businesses can't exist. It might be different if people got paid!
Prediction markets are probably not a panacea, but Google's various internal implementations of them (there have been several 20% attempts) are far from proof of this. If the people making the predictions don't have any actual skin in the game, the economic incentive to be correct is removed. That is the secret sauce that should supposedly make prediction markets more accurate than other forecasting venues.
People DO bet where "ego points" are the reward, and my own personal proof is the Hollywood Stock Exchange [1]. For several years, I won my movie Meetup group's Oscar pool, just by taking all the predictions from HSX. (They don't do all the minor awards, which is a problem.)
Actually using real money is a major legal hurdle, and that's why most prediction markets don't do it. The political markets have a special "research" dispensation, last I checked.
Isn’t this automatically inflationary? Just joining adds $2 mm Hollywood dollars to the pool. And you can join multiple times? But it does work clearly so I must be wrong about something.
It has been a success as far as I'm concerned, in that it's still up and running almost 20 years later, and it has generated a lot of discussion about long-term topics, which is our goal.
But at the time I hoped it would turn into something much bigger. Lots of bets! Lots of money on the table! Secondary bets! Maybe even options! That was definitely not the case. It has stayed niche, and so have prediction markets generally. I think there are good reasons for that.
Checked it out. I think this is a great idea! Why do think it's stayed niche? There IS real money at stake, albeit philanthropic donations.
My personal feeling: people do not want accountability, and to paraphrase Jack Nicholson in A Few Good Men, they can't handle the truth. I think that's the real reason the Google prediction markets died: managers prefer to do what they want to do, regardless of whether it's going to work or not. Perhaps someone can theorize as to why that's a good thing? I can't.
That is definitely a key component. Getting an actual bet requires two very dedicated and thoughtful people to negotiate terms and commit to possibly looking foolish down the road.
And I think you're also right about what managers want. I was very involved in the Lean Startup movement, which had a few years en vogue a while back. Core to it was being very disciplined about stating and testing hypotheses as quickly as possible. Used right, it can work extremely well.
It got some uptake in startups, but not as much as I expected. And even less uptake in more established companies, although the methods work well there too. Why? My take is that it's not just about managerial desire, although that's important, but also managerial status. For rising through the ranks it's much more effective to talk a big game than a modest and humble one, even though the latter traits are more likely to lead to success.
I hate it, of course. But we can look at examples like Theranos, Uber, and WeWork. None of them ever turned a profit, and it's possible none of them ever will produce a net return. The first CEO got caught eventually and might face some consequences one year or another. But the other two ended up incredibly rich. Who am I to say that bullshit doesn't pay?
Robin has been at this for over 15 years. One reason he failed in the early years was that he used government money for uncomfortable bets about assassinations of US politicians.
I just can't trust a social scientist personally telling me that something offers better predictions, because social science has lost my trust in its methodology. I'd need to see a data scientist, mathematician, or statistician arguing that it does for me to start considering it more.
The good thing, though, is that he could simply set up a decision market to have people decide the true value of decision markets, no?
> The markets didn't take off, they withered. AFAIK they don't have them anymore.
Actually there is a new prediction market at Google now called Gleangen. It seems to be fairly active and flourishing despite not using real $$. Although there is a leaderboard, so perhaps the desire to want to rank in the top 10 is what motivates people to place bets.
Both. It could be something mundane like "Fed raises interest rates in 2020" or something company specific like "Nextgen pixel phone display size is > x inches". Generally the company stuff tends not to be the real sensitive stuff and tends to be more about fun/mundane topics.
I'm not sure this is so clear. Google's first prediction market had 1,463 traders (>10% of the company at the time) and produced >250k predictions (though >50% of those were from trading bots).
Commenters saying "but someone can influence the market" should at least listen to the podcast (and really should also familiarize themselves with the existing literature).
Do you think this critique has never been considered? If you disagree with the reasons why this is not a problem, go ahead and critique those directly.
Please don't reflexively pooh-pooh an idea you don't really understand.
I haven't seen Hanson address the "destructive insider trading" scenario where I buy a prediction that California air quality will be bad, and then spend the summer starting wildfires in California forests.
Today, people have no incentive to do that other than "wanting to watch the world burn". But if you can get rich burning forests, more forests will burn.
I don't doubt Hanson has thought about this. I just haven't seen it addressed.
My guess is that if a particular prediction were really so easy (low cost and risk) to manipulate then you wouldn't expect much trade volume. As an easier example, you wouldn't expect many people to bet money that no one will tweet a cartoon of a cat wearing a pink hat today, because it would be obvious to everyone that it would be trivial for an insider to manipulate this outcome.
As for your specific example, starting wildfires is an extremely serious crime that I expect the authorities would spend significant resources investigating, so I don't know how bettors would estimate the ease of manipulation for that market. Even if the prediction market were implemented in a perfectly anonymous system where you're guaranteed to be able to collect (and spend) your money without authorities knowing you were the one who profited from that prediction market, it seems like you have a pretty high risk of getting caught simply for the act of arson itself (not to mention the "risk" of firefighters controlling the fires that you start).
I'm also not sure how many people there are out there who refrain from starting wildfires only because there is no direct financial incentive, and not for other reasons (like not wanting to destroy massive amounts of ecosystem and property and potentially kill many people).
Air quality is something that is hard to predict and really quite important for many reasons.
I think you underestimate organized crime. Starting wildfires is easy and skillful criminals won't get caught. The people that pay them take on an even slimmer risk.
I'm not estimating organized crime at all, because I'm very ignorant of it. I'm only saying that I would expect the potential bettors in that prediction market to make some estimation of how difficult manipulating the market would be, and not many people would bet on a market they deemed to be easily manipulable. Another mechanism that could exist is that the organizer of the prediction market could itself judge whether a particular market was manipulated and have policies for what to do in such a situation.
But in reality, that just doesn't and didn't historically happen in stock markets. Buyers didn't accurately estimate the likelihood of manipulation. The government had to step in.
I think you're being uncharitable towards the critiques. In particular Hanson doesn't actually address the manipulation most of these critiques are talking about. See e.g. https://news.ycombinator.com/item?id=28541243
Moreover even among futarchy proponents, many do not advocate for it as the dominant form of government, but only one form for certain operations. Hanson's position is on the more extreme edge here.
The simple reason you can't manipulate the market for profit is that everyone else can make the same bet you are making, quickly reducing the profit since you have the added expense of manipulating the market.
If a person can manipulate the market without expense, then they can do it without relying on the profit and the market isn't the problem. It will, however, allow them to bet on the market, providing a good indicator.
Imagine that there's a market for which day a politician will die on. Initially, all of these daily contracts are priced very low because there is a very low chance of a politician dying on any given day. So, you buy quite a few of one particular day, and then go kill the politician.
Last year a man took a large bet that there would be a streaker at the Super Bowl, then went to the Super Bowl and streaked. His winnings far exceeded the fine and the cost of the ticket.
My fear about a betting market for policy is that you’d have a bunch of rich/powerful people betting GDP will go down and then doing everything in their power to stunt economic growth.
Yes in the sense that for a bet like streaking the chance that you can affect the outcome is 100%.
The chance that you can short a "broad range of stocks" and affect their share price is a lot less certain. Especially in liquid markets, your chance of affecting a blue-chip stock's price is very small.
There are always a few cases, like Archegos, where you can get so big that you move the stock, but those cases are so well known exactly because they are so rare.
It isn't a problem as long as no one is betting on whether or not "bad" things will happen. However, the things that people care about (or find interesting) – and want to bet on – are often heavily debated as being good or bad. This means that naive prediction markets will probably incentivize some people to work towards outcomes that many view as bad.
Has there actually been any empirical proof that betting markets yield better predictions, though? From what I've read it doesn't seem so; people still aren't sure whether they offer better predictions than the alternatives. And since it isn't really obvious either way, believing they do or don't, and therefore supporting them or not, seems largely ideological.
Does anyone know if things have changed on that front?
A derivative (a contract representative of some good, right, or entitlement) whose value is based on the outcome of an event occurring or not.
Event derivatives are all around us. Buying fire insurance grants you the right to sell, to an insurance writer, a smoldering pile of wood (worth $2) at a price of, say, 30% of the unburned value of your home (similar to an OTM put option). An accidental-death insurance contract is worth a negligible amount until the subject's death.
What people often get wrong about EDs:
1. Whether or not they accurately predict events: they don't; they reflect the price, or odds, at which a market is willing to swap exposure. E.g., one pays insurance costs because it's more profitable to swap risk than to carry the exposure.
2. Confusion around incentives: yes, each party swapping exposure will change their behavior, which is a feature, not a bug. A company buying fire insurance can now enter into commerce that would otherwise be prohibitive, unlocked by swapping fire risk with an insurance-writing company that has incentives to prevent fire risk, and can do so at scale, coordinating multiple parties.
3. Lack of information about unfettered demand for the products. People claim there's no demand for EDs but neglect to take regulation into account: rules preventing people from purchasing them freely, political manipulation of prices when buyers are unhappy with the market price for risk, regulatory capture creating anticompetitive producer protections, and the inferior venues that bans have forced some EDs into.
The global policy failure of covid has been around risk pricing, risk-exposure swapping, and effective, at-scale incentive coordination.
An exercise: An assassination market opens for the price of your head. In which case(s) should you be most concerned for your life:
[A] Exposure available at .99 on the 1
[B] Exposure available at .01 on the 1
[C] Exposure available at .5 on the 1
@HarryDCrane on Twitter is an applied researcher in this area, read him if you're interested in more -- I know I have.
So ... there is a market on killing me. 99c on the dollar means (I think) that if I lay a bet that someone will assassinate me (by Xmas) I still have to pay 99c to get a dollar payout.
At 1c to get 1 dollar - That to me implies no-one thinks I will be killed (either I live in the Oval Office so it's very hard or frankly no-one wants to waste the bullet)
At 99c to get 1 dollar it's a near certainty. I am already tied up in a basement somewhere.
This is the discussion around prices/exposure swapping/speculation I wanted to occur.
If someone has the ability to buy "yes" exposure at 1 cent, subsequently kill me, and then collect 100 cents, that's a pretty high payout. The potential hitman has 99 cents of margin to use to kill me and still be profitable.
If the hitman buys at 99 cents, he loses the majority of the bet if he doesn't kill me. The person who took the other side has significant margin to protect me (from their perspective, they bought "no" at 1 cent).
At 50 cents we both lose or gain the same amount of money.
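A tiny numeric version of those margins, assuming $1-payout contracts:

```python
# Assuming $1-payout contracts: the "yes" buyer's headroom to make the event
# happen, and the other side's headroom to prevent it, at each quoted price.
for price in (0.01, 0.50, 0.99):
    yes_margin = 1.0 - price   # "yes" profit per contract if the event occurs
    no_margin = price          # "no" profit per contract if it doesn't
    print(f"price={price:.2f}  yes-side margin={yes_margin:.2f}  no-side margin={no_margin:.2f}")
# 0.01: huge headroom for a would-be hitman; 0.99: huge headroom for whoever
# profits from keeping me alive; 0.50: symmetric.
```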
I think I'd be more concerned for my safety in cases A and B than in case C.
The other dimension of derivatives is how much "Open Interest" (OI) -- or the quantity of contracts/exposure -- exists for a contract. I think I'd be concerned for my safety if there was any appreciable "yes" OI on the contract that wasn't mine. So my strategy would be to buy up all the "yes" contracts -- i.e., from the people who want exposure to me dying or believe I will die -- so no one has an incentive to kill me.
There's an episode of Mike Tyson's podcast where he talks to a former Mafia boss about how easy it is to get athletes who are in debt to Mafia bookies to throw games as payment for their debts.
Prediction markets bring the magic of perverse incentives to all walks of life.
Agreed, this post completely misses how the power dynamics of wealth affect the opinions and 'rationality' of players. It also never mentions how a whole section of society would be unable to take part in this system due to their socioeconomic status.
That's an interesting question. Markets facilitate the emergence of incentive structures, and some, but not all, of those are definitely misaligned with the well-being of many, many people. Look at critiques of GDP as the primary measure of national economic success. Look at fossil fuel and agriculture subsidies that are literally feeding climate change. Look at the way ad tech markets have infringed on democratic processes, on privacy, on people's mental health. These are glaring examples of what you're talking about.
Tell me how it's at all okay that the same banks who are dabbling long on water futures are also invested in building a tar sands pipeline across the Mississippi river and straight into the Lake Superior watershed. Conflict of interest is an understatement. These organizations act like total automatons and are practically beyond control except through incentive systems. If we just let them go on with this behavior vaguely in the name of "markets", I think that's dangerous. We're missing a collective opportunity to say "no, that's not how we want the incentives to align and it's time to adjust them".
If I were going to generalize, I'd say my frustration is more with the uncritical application of technologies of incentivization, especially with games that so clearly tempt injustice, yet that powerful people rationalize regardless as the only viable way to go.
Maybe they are hedging pipeline failure and clean up costs with water profit...
This exchange was nice.
What are alternatives to markets? Government direction of all economic developments? AI overlord telling everyone what to build and exchange?
What if I want wheat bagels for my labor and we're locally optimized for pumpernickel?
They don't have to be corrupt at all; the simple fact that media and lobbying power are mostly controlled by capital means it would be naive to assume that non-corrupt politicians cannot be influenced.
If they could not, then there would be no lobby, and media would not be as politicized as they are.
In my opinion, a politician has some freedom to choose who they listen to, and politicians who predicate that choice on their personal enrichment are corrupt.
But that's a black-and-white view. The 'gray' version is that it is perfectly possible for a politician to be of good intentions, but presented with a choice of 99 parties funded by special interests with megaphones and one lonely voter you can't fault them for having more - and usually better argued - input from special interests.
That's why many countries forbid lobbying entirely (not that that's 100% effective, but it's a start).
Judiciary laws are just one of the many incentives present in the real world. Money is a better data point to base models and predictions than laws.
> What if improving society was a reward-in-itself and we trusted competent managers to do the complicated bits?
This is the mental model many citizens seem to adopt to interpret their government. In my opinion it's closer to a religion than reality. People who willingly reject great personal rewards in favor of the common good are a rarity. They do exist, but expecting people in power to naturally adopt that stance, or expecting that elections are a good way of finding those people, is a mistake.
> Money is a better data point to base models and predictions than laws.
I completely disagree. Not all value is as fungible as money, because economic value is rooted in the diverse & time-sensitive needs of individuals and societies.
My demand for "potable water-value today" is not freely exchangeable, via a universal value medium such as money, with your demand for "quiet sleepy-time value next week", even though these are things we are economically acquiring. Regulations are required to prevent uncontrolled financialization from cannibalizing society.
Can you elaborate on that? It seems to me that in most of today's capitalistic societies these goods/services are totally interchangeable. If I'm rich and willing to pay for quiet sleepy time next week, I will get it. If I'm poor and can't pay for potable water today, I won't get it. It doesn't matter how relatively important to each individual the service is.
> And if that individual doesn't get their $0.25 of water, they die.
And in reality, people usually are willing to steal for things when they are desperate for them (which markets may or may not price in thru higher prices) before they are just willing to die.
> How is that a reasonable outcome in any functioning society?
I think the issue here is that I don't think we as people can all agree as to what is reasonable, or what "functioning society" actually means, though we are much better at coming up in aggregate with states that are possible in a given society (despite how unlikely some of those states are). In this, society is comprised of a combination of more or less likely "reasonable" and "unreasonable" states.
> There is no law that says you have to be a corrupt politician and enable plutarchy, although it is common.
There's no such law, of course, but there's also no law that a restaurant must offer food and service that customers like, and yet we tend to observe that restaurants which do that tend to survive better than those which do not.
Hanson says regarding national metrics: 'Well, now we’re going to authorize this same sort of agency to estimate number like GDP except we’re going to tell them to put more things in the number. Bills before Congress would say, “Count more trees, and count leisure, and count international reputation.” They would just make a bigger formula that included all the stuff they cared about in this measure of national welfare.'
Here's the tricky part. It seems to me that the metrics which you don't choose to bundle into the aggregate measurement will get annihilated (optimized out) at the expense of those which you do choose to bundle. And the decisions as to which things you do bundle and with what weight represent ethical, moral, spiritual, aesthetic, and otherwise intangible judgments which are (unsurprisingly) difficult to quantify or even come to basic agreement on. He weasels a bit by using relatively uncontroversial examples ("more trees", "leisure") and by handwaving ("authorize": how?, "sort of agency": what sort?), but it's trivial to imagine metrics which produce wild disagreement regarding the magnitude or even sign of the weight to be applied.
(Shadows of paperclip maximization loom.)
At that point, I'm not sure where we've significantly improved things since we're still left with the problem of how to choose the people who decide which metrics go into the aggregate measure and at what weight. Can we make a market on that, too? At that point we're in some crazy recursion and I get lost. I'm deeply skeptical.
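To make the "optimized out" worry concrete, here is a purely illustrative toy (the metrics, weights, and optimizer are all invented): an agent allocating a fixed budget against a weighted welfare score gives nothing to whatever carries zero weight.

```python
# Purely illustrative: a linear "national welfare" score with explicit weights.
# Whatever isn't in the bundle (weight 0) gets no effort at the optimum.

WEIGHTS = {                                   # hypothetical bundle
    "gdp": 1.0,
    "trees": 0.3,
    "leisure": 0.2,
    "unquantified_community_trust": 0.0,      # left out of the bundle
}

def welfare(effort):
    return sum(WEIGHTS[k] * effort.get(k, 0.0) for k in WEIGHTS)

def naive_optimizer(budget):
    """For a linear score, all effort goes to the highest-weighted metric.
    (With diminishing returns, lower-weighted items would get something,
    but zero-weight items still get nothing.)"""
    best = max(WEIGHTS, key=WEIGHTS.get)
    return {k: (budget if k == best else 0.0) for k in WEIGHTS}

allocation = naive_optimizer(budget=100.0)
print(allocation)           # all 100 units of effort go to "gdp"
print(welfare(allocation))  # the headline score looks great regardless
```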
The big issue I feel like he underrates is the massive incentive to take control of the agency that's supposed to be doing objective assessments. These are now effectively the most powerful people in the country. Even if they start off as saints the political incentive is going to be to find ways to influence it. And once that agency is captured then you have a dictatorship in all but name.
I feel like this reflects a common problem in political theorising: coming up with an ideal institutional structure without thinking about the incentives around it and how it needs to be sustained.
Historical analogues would be how originally non-partisan district-drawing processes get politicised, the politicisation of science and medicine, or Soviet or Chinese GDP figures. The simplest solution is always to just rig the game.
His usual answer is "vote on values, bet on beliefs." So we would just vote on what to include, and then bet (through prediction markets) on how to achieve these goals.
It seems to me that many of these "rationalize all the things" schemes tend to bottom out at and build onto an "incorruptible kernel of truth" which, if corrupted, causes the whole thing to collapse (or worse: have a veneer of impartial truth while being secretly corrupt). It's a sort of microkernel approach to government.
With Hanson's futarchy, it's the hypothetical mechanism for "voting on values" in order to choose what does and doesn't go in the almighty bundle of metrics (and who gets to measure the result: they have the real power). If that can be corrupted then the whole thing falls apart. Yarvin's "neocameralism" has a "cryptographic decision and command chain" which everything else rides on (https://www.unqualified-reservations.org/2008/05/ol6-lost-th...).
This "all the eggs, one basket" approach seems fragile and ripe for subversion, especially when having fallible humans administer it. It seems more resilient to avoid concentrations of this kind of power.
Who determines what is on the ballot, or how unstructured ballot items get classified? Are "make college free" and "make university free" the same or different policy proposals?
To me it just sounds like a dystopia. Like: "market forces are so fun, let's have more of that!" It is wishing for a giant system that has a mind of its own and can't be reasoned about or controlled but affects everyone's life.
I suppose I already think that most of wall street is not, in fact, making anything much more efficient at all, and is instead a massive state-run casino. So maybe I am not the target audience.
Suppose that worker cooperatives have taken over the world.
Suppose then that the worker cooperatives must choose between policy options, for example { "increase payments to retired workers", "increase investment in disaster preparedness", "increase investment in infectious disease biochemistry" }.
What is your recommendation for how the worker cooperatives should decide how to allocate resources? Presuming you do not recommend prediction markets.
There are many possible ways for the individual members to decide how to vote; in the end it comes down to one person, one vote.
Beyond that, the tools we have available are extremely powerful, especially when fed good data that is freely available, which is what would be possible if you achieved the (perhaps impossible) task of switching most of the economy to worker coops.
Do you see tyranny-of-the-majority and factional-rule as evils to be mitigated, or as desirable reflections-of-worker-cooperator-sentiment, regardless of consequences?
You can always change workplaces. Yes, they are evils to be mitigated, but workplaces are not countries: you can leave them and create new ones quite easily, especially when you're entitled to part of the capital of your workplace.
I don't think it is impossible to shift the labour-intensive, non-capital-intensive part of the economy (which is quite a lot) to worker coops. They should be more attractive to both employees and customers than sad outfits with private equity vampire squids wrapped around them.
Options and equities markets have a similar risk, called pin risk, where an underlying stock's price will tend to stick close to a strike price with a lot of open interest.
> Pinning refers to the potential for institutional option buyers to manipulate price action in the underlying as options expiration approaches. If these option buyers face the potential for a total loss of the option, they may try to pin the stock to a price just in the money by strategically entering buy orders at the last minute before the close
You end up with this tail-wagging-the-dog model where the options end up moving the underlying: the option seller/writer doesn't know if they will be called on the option, so they have to acquire the underlying to hedge out their risk just in case.
Similarly, the option buyer doesn't know if they will be assigned, so they tend to short the stock to, again, hedge out their risk; so the stock experiences both buy and sell pressure around the strike price due to bets placed on it.
>Similarly the option buyer doesn't know if they will be assigned
Don't you mean option seller/writer? If you buy an option (and therefore are an owner), you have the right but not the obligation to exercise it.
>You end up with this tail wagging the dog model where options end up moving the underlying as the option seller/writer doesn't know if they will be called on the option so they have to acquire it to hedge out their risk just in case.
Right, but that's not any different than any other sort of leverage? eg. buying stocks on margin to drive up the price.
> Don't you mean option seller/writer? If you buy an option (and therefore are an owner), you have the right but not the obligation to exercise it.
No, I meant the buyer. A lot of professionals want their exposure hedged out: they don't want to end up owning any shares after expiry, so they need to be short shares against any calls that will end up in the money. So as an option pins the underlier close to the strike price, the option holder ends up buying/selling to hedge their delta exposure to zero.
> Right, but that's not any different than any other sort of leverage? eg. buying stocks on margin to drive up the price.
Not really,
The tail-wagging-the-dog effect occurs when a derivative instrument that is priced off an underlier actually ends up moving the underlier itself.
Leverage is a different animal. If you lever up 2x you're just buying twice as much. When you unwind you sell 2x as much. It's still the underlying moving itself and not being affected by any other instrument.
An interesting case study is the Nenana Ice Classic[0]. There is existing research[1] on the accuracy of the betting pool as a prediction market, but to me it looks like outcomes farther from the median result are not predicted well.
And now for something completely different: prediction markets and patents!
I had the idea, back in 2004, of a prediction market for "patent NNN will be invalidated." The idea is that this creates a financial incentive to build the case, usually with prior art, to invalidate garbage software patents. Because most software patents ARE invalid.
I actually pursued this at Google for some period. The implementation details are kinda prohibitive, though:
- must ALL the claims be invalidated, or just some?
- can the patentee amend his claims to avoid the invalidity ruling?
This might be a dumb question, since my understanding is limited to the article, but here goes.
Let's say we have the situation in the diagram: we have two policy options, "A" and "B", and we're betting on "if we adopt this policy, will the GDP reach the target?". Let's say that currently A is unpopular, so "yes if A" has a low price, and B is popular. Now imagine an oracle enters the market, who knows that A actually has the best chance of reaching the target, but B also has a good shot. In fact, they know that "yes on A" and "yes on B" are both underpriced - just "yes on A" more so. Ideally, we would like this oracle to put their money on "yes on A". But if they don't have enough capital to change which market leads, they'll break even (by having their money returned when B wins) if they do that. Instead, they should bet on B, which they know to be a worse option, because it'll actually fire and they'll still make some profit. The market doesn't get to learn all the information that the oracle has.
Is there some way to structure the markets so that our oracle instead bets on A?
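To be concrete about the payoff rule I'm assuming (this is just my own sketch of the "losing market gets voided" setup, with illustrative names and numbers):

```python
# Two conditional markets, one per policy. Bets in the market for the policy
# that is NOT adopted are refunded; bets in the adopted policy's market pay out
# on whether the target was reached.

def settle(bet_policy, bet_side, stake, price, adopted_policy, goal_met):
    """Payout for a single bet. Shares cost `price` each and pay $1 if correct."""
    if bet_policy != adopted_policy:
        return stake                                # market voided: stake refunded
    shares = stake / price
    won = (bet_side == "yes") == goal_met
    return shares if won else 0.0

# $10 of "yes if A" at $0.40, but policy B gets adopted -> just a refund.
print(settle("A", "yes", 10, 0.40, adopted_policy="B", goal_met=True))   # 10.0
# $10 of "yes if B" at $0.60, B is adopted and the target is hit -> ~16.67.
print(settle("B", "yes", 10, 0.60, adopted_policy="B", goal_met=True))
```

It's that refund branch that caps the oracle's upside on the trailing market.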
One improvement might be to make the total capital that investors have access to into public information.
That way the market should judge bets based not on their absolute value, but on the degree of risk that the bettor is willing to take on. An oracle betting 100% of their capital on A should be treated as a maximally strong signal regardless of the size of said capital. Now, if even that signal isn't enough to move investors away from B... well, you can't really stop them without turning the system into an aristocracy.
Of course, ensuring that the total capital information is accurate is going to be complex - how do you prevent rich people from creating 'proxy investors' who bet 100% of their borrowed funds? - but it seems on a similar order of magnitude of difficulty as 'prevent people from insider trading'.
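A toy illustration of the idea (an entirely hypothetical mechanism with made-up numbers, just to show what "read the bet as a fraction of capital" would look like):

```python
# Interpret each bet's strength as stake / (publicly known) total capital,
# rather than as its absolute dollar size.

bets = [
    # (bettor, side, stake, total_capital) -- all made up
    ("whale",  "yes on B", 50_000, 10_000_000),   # 0.5% of their capital
    ("oracle", "yes on A",  9_000,     10_000),   # 90% of their capital
]

for who, side, stake, capital in bets:
    conviction = stake / capital
    print(f"{who:>6}: ${stake:>6,} on {side:<8} -> conviction {conviction:.1%}")

# The whale moves the market price far more, but under this scheme the oracle's
# 90%-of-capital stake would be read as the much stronger signal.
```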
Unless the two choices are extremely close, we probably don't want it to be the case that a single investor, even with 100% of their funds, can single-handedly change the outcome. We just want each investor to be incentivized to put their money behind what they believe to be the best policy, and hope that collectively they choose right.
Thus, we should expect that even if our oracle knows with certainty that A is the best policy, and invests all their money into it, they're unlikely to be able to unilaterally change which policy will actually be implemented. They will therefore be incentivized to bet on B succeeding (which is also a winning bet), and now that we're revealing that they bet 100% of their capital on it, all we've done is magnified the signal of that bet. But this is bad - our oracle is betting in a way that moves us away from the policy they know is best.
Most of the prediction markets I've seen use sets of binary options that are complete and mutually exclusive. The entity that handles the market would only sell you a complete set of those binary options, for, say, $1.
So you could not buy "A" on its own from the operator; you could only buy a pair made of a copy of "A" and a copy of "not A". If the oracle knows that "A" is true with p ~ 1, they know that "A" is underpriced (it should be ~$1) and "not A" is overpriced (it should be ~$0), so they would buy ~infinitely many pairs and sell off the "not A" copies.
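Here's the arithmetic of that trade as a tiny sketch (prices are made up; "the operator" is whoever sells complete sets for $1):

```python
# Buy complete {A, not A} sets from the operator for $1 each, keep the
# underpriced "A" leg, and sell the overpriced "not A" leg back into the market.

p_true_A   = 0.95   # the oracle's belief that A is true
price_notA = 0.40   # what the market currently pays for "not A"

bundle_cost    = 1.00
net_cost_per_A = bundle_cost - price_notA       # $0.60 effectively paid per "A"
ev_per_A       = p_true_A * 1.00                # each "A" pays $1 if A happens
edge           = ev_per_A - net_cost_per_A
print(f"expected profit per set: ${edge:.2f}")  # $0.35; repeat until prices move
```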
I don't think that matches what they're describing in the article. The problem is that we only get to try implementing one of policy A and policy B, so if someone bets "I think policy A will achieve the goal", but we implement policy B, you have to just void their bet.
If we had already decided on policy A, and were just trying to predict whether it'll work, what you describe would be fine. But in the article, we're trying to decide whether to implement policy A or policy B, by having two separate markets, one for "what will happen if we do A" and another for "what will happen if we do B", and one of those two will get voided.
Unless I understood it wrong, you could use a variant of that:
Market 1 has options A, not A. Market 2 has options B, not B. At the end of the trading period, void the "losing" market and reward the winning one. It's trivial to implement if you're using e.g. electronic payments and you forbid "cross" trading between A and B.
Right, that's the system they propose. But I'm saying that can result in an agent being incentivized to put their money into a policy they believe to be worse, as long as they believe that policy is underpriced and more likely to "win", which is undesirable.
So for example, suppose we have the objective "increase our production of paperclips by next year". Our two policy options are "build a paperclip factory", and "build a paper mill". We now have two bettings markets, each with a Yes/No pair of options, "Will building a paperclip factory increase our production of paperclips?" and "Will building a paper mill increase our production of paperclips?".
Now let's say that currently, the paper mill has "Yes, this will work" at 60%, and the factory has "Yes, this will work" at 40%. I'm a paperclip genius, and I know that the true odds are that the factory has a 90% chance of working, and the mill has a 75% chance of working.
Where do I put my money? Ostensibly, we want me to put it on the factory, because that's the best policy. But the factory is unpopular and that policy is unlikely to be implemented (since it's down by 20 percentage points). Even if I nudge it up a bit, my bet is likely to be voided, and I make zero return for my knowledge. Instead, I will bet on "yes the mill will work", because that market is also underpriced, and the policy will actually be implemented. By doing this, I maximize my expected reward, and I also move us away from what I think is the best policy.
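To put rough numbers on it (the adoption probabilities are my own guess at how the current prices translate into which policy gets picked; the other numbers come from the example above):

```python
# Expected profit per $1 staked, under the rule that bets in the non-adopted
# policy's market are simply refunded (profit 0).

def expected_profit_per_dollar(p_adopt, true_p, price):
    # If this policy is adopted, $1 buys 1/price shares, each worth true_p in expectation.
    return p_adopt * (true_p / price - 1)

factory = expected_profit_per_dollar(p_adopt=0.10, true_p=0.90, price=0.40)  # trailing market
mill    = expected_profit_per_dollar(p_adopt=0.90, true_p=0.75, price=0.60)  # leading market
print(f"factory: {factory:+.3f}  mill: {mill:+.3f}")
# factory: +0.125  mill: +0.225 -- even though the factory is far more
# mispriced, the refund rule makes the mill the better bet unless I think the
# factory has a real chance of being chosen.
```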
> But the factory is unpopular and that policy is unlikely to be implemented (since it's down by 20 percentage points). Even if I nudge it up a bit, my bet is likely to be voided, and I make zero return for my knowledge.
I'm not sure that's what would actually happen unless you add some weird constraints. Under usual (unrealistic, okay, but just for the sake of argument) assumptions, you would buy infinitely many As at any price <.9, and infinitely many Bs at any price <.75. By definition you know the true odds, so your posterior predictive has zero hyperparameter variance: every single one of those trades has positive expectation.
Both A and B would increase in price, but you would stop buying B after a while. Assuming infinite time, infinite liquidity, no budget constraints and no weird information asymmetries, you could single-handedly make the market converge at their "true" values: you will always buy A if the price is lower than your threshold, and any rational seller who doesn't believe your odds would sell it to you.
Certainly that's true if I have infinite capital, but if these markets require participants to have infinite capital in order to work, we've got a problem. If I have finite capital, then any money I put on the factory is money I can't put on the mill, and that's losing value.
Actually, if the options are mutually exclusive and bets on the losing option get voided, there's no reason to forbid you from betting on both options with the same money, is there? Only one bet will stand.
For that matter, anyone could safely lend you X, where X is what you already bet on A, for the purpose of betting on B. One way or another you'll get X back in voided bet money, so you're a perfectly safe borrower.
Typically in a betting market, you can continue to buy and sell your shares after placing your bet, so if the market moves and you now think that A is overpriced, you can sell some of your shares and lock in profit. It's not entirely clear to me how you make this work if your investment in A and B is with mirrored funds. If there's a way to make it work, it certainly seems like a step in the right direction.
If you have a budget constraint, it's rational for you to buy argmax(true(A) - market_value(A), true(B) - market_value(B)), which is exactly the Pareto-efficient behavior.
That's where I disagree. Your expected value on buying A is (probability A is implemented) * (true(A) - market_value(A)), and similarly for B, because you receive zero return if the thing you bet on is not implemented. Thus, even if A is badly mispriced, you may not want to buy it if it has a very low probability of being implemented.
The only confident bet I feel like I can make about the future is that most of my bets about the future would be wrong. I don’t think this is an uncommon feeling amongst humans, and perhaps indicates why prediction markets haven’t caught on beyond a core niche.
If I recall correctly, this is the same man who said:
'I’m not a medical professional, so I can’t speak much to medical solutions.'
and in spite of this, proceeded to suggest we intentionally infect people at the outset of the coronavirus.
Claiming, falsely:
'As of yesterday total known deaths were 1384, a number that’s had a six day doubling time lately. At that rate, in four months deaths go up by a factor of a million, which is basically the whole planet. So unless growth rates slow by over a factor of four, there’s probably not time for a vaccine to save us.'
You'll have to forgive me if I don't take any of his claims about the future organization of society seriously, as he seems to lack something critical every time he tries to imagine how people other than him might behave.
Also this:
'To me as a youth, I think the theory that many men want to have sex with men looked like a conspiracy theory: implausible, with no direct evidence shown, & the sort of thing people would want to claim even if not true. I now believe, though I've never seen very direct evidence.'
The numbers on variolation (the intentional infection he was talking about) still look like a pretty clear net positive. Were variolation trials not so legally problematic, they would likely have saved many, many lives, because, as Hanson pointed out very early on, the amount of exposure seems to be one of the larger factors in COVID-19 disease severity (which was very predictable). And for most of his futarchy claims, he's relying directly on fairly good evidence. That's not a good reason to overhaul society in its image, but neither is it a good reason to dismiss the ideas.
I'm the first to agree that Hanson has pretty big blind spots, and he's about as non-neurotypical as they come (and therefore seems to often fall into a trap of terrible assumptions about how others will behave), but this is an awful example to use.
'Were variolation trials not so legally problematic'
It's not like they weren't going to study this, but studying it in a way that doesn't violate the core tenets of medical ethics took longer than vaccine development.
Indeed, I see that it has continued to be studied into the present year, but the results are not uniformly good, especially regarding secondary cases.
"they would likely have saved many, many lives"
This statement is loaded with unfounded assumptions. Governments and epidemiologists, even if they'd agreed with Hanson, might have had a tough time convincing the population to go to their doctors to be infected with Covid-lite.
There are dozens of interventions at every point along the way that might have saved many many lives.
What I find most inexcusable is Hanson, a self-styled numbers guy, justifying his urgency with a completely unfounded extrapolation.
'At that rate, in four months deaths go up by a factor of a million, which is basically the whole planet'.
This is a completely, laughably unserious analysis.
As to the 'futarchy', I think other comments here point out the flaws in any such system better than I could.
Prediction markets will exacerbate the "war on truth," and are illegal in the US, thankfully. "Shorting" is frequently accompanied by disinformation campaigns against a target. It's scary to think about what this will lead to.