>Assuming people know about the assistant, the market will give at least an 80% probability of reinstatement to everything, regardless of how bad it is. (You are guaranteed to make money over time by blindly taking any bet with odds lower than 80%. So people will always bid the odds up to 80% or higher.)
Ironically, this is wrong for the same reason as the point you're making: there's adverse selection. Let's say traders happen to have a close-to-perfect Scott oracle, so they'll only sell you shares of the cases (they think) Scott would reinstate at 99%, but will happily sell the ones (they think) he won't at 50%.
The assistant is calibrated but doesn't have that good an oracle. Out of every 5 cases he sends Scott, 4 were priced at 99% and get accepted, and the last was priced at 50% and gets rejected. You lose all of your money with a 0% hit rate.
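A toy simulation makes the adverse selection concrete (the 80%/99%/50% numbers are the hypothetical ones from this thread, not from any real market):

```python
import random

random.seed(0)

# Toy model: traders with a near-perfect "Scott oracle" quote 99 cents for
# cases he will reinstate and 50 cents for cases he won't. Our strategy is
# the one from the quoted text: blindly buy any share priced below 80 cents.
def simulate(n_cases=10_000):
    profit = 0.0
    buys = wins = 0
    for _ in range(n_cases):
        reinstated = random.random() < 0.8    # assistant is calibrated at 80%
        price = 0.99 if reinstated else 0.50  # sellers know which is which
        if price < 0.80:                      # we only ever get to buy losers
            buys += 1
            wins += reinstated
            profit += (1.0 if reinstated else 0.0) - price
    return profit, buys, wins

profit, buys, wins = simulate()
print(f"bought {buys} shares, won {wins}, net profit {profit:.2f}")
```

Every share actually offered below 80 cents is one the better-informed sellers know will be rejected, so the hit rate is 0% and you lose the full 50 cents per share, despite the assistant's calibration being perfect.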
I think I see what you're saying: I need to assume not just that the assistant is well-calibrated but rather that they have access to the "true" probabilities, so every individual case they mark as above 80% really is above 80%.
I wouldn't frame it that way - probabilities are subjective. You need them to have strictly more information than your counterparties in the market, I think.
I don't think they need *more* information, only that they have access to as much information as people betting in the market, and that they output the most accurate possible probabilities given that information. But I think this is a sort of esoteric argument (I use scare quotes with "true" probability because I don't want to get into the weeds on epistemic vs. aleatory uncertainty etc.)
Your main point that well-calibrated is not enough is definitely correct, so I've updated the text to strike that out and change it to a stronger assumption of "optimal" predictions. Thank you for the correction!
The real world is inconveniently complicated: there are more than enough confounding variables, or causal vectors, to break down pure correlations. That's a problem for prediction markets, especially those that aren't very liquid (i.e. almost all of them). We also don't know a noise benchmark for most markets (equities included), which makes relying on % probabilities hard - e.g. the CEO-firing market, which is just silly.
Yeah for the same reason I'm skeptical of observational studies I lean towards skepticism of any conditional prediction market that doesn't use random conditioning. I guess it's not beyond the realm of possibility for corporations to accept a 5% chance of randomly firing the CEO to make a prediction market work (but it seems like there's an ~0% chance of people feeling that's worth it for the benefit of accurate prices in that market).
I guess there's a tension where in "small" markets (like reinstating commenters) randomization would be OK but it's hard to get enough liquidity whereas for "big" markets randomization is hard to imagine.
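For concreteness, here's one way the settlement rule for a randomized conditional market is often sketched (the 5% epsilon, the action names, and the void-and-refund structure are all illustrative assumptions, not a description of any deployed market):

```python
import random

def settle(trades, epsilon=0.05, outcomes=None, rng=random):
    """Settlement sketch for a randomized conditional market.

    With probability `epsilon` the action is chosen by coin flip, and the
    shares conditioned on that action settle against the later-observed
    outcome (`outcomes` maps action -> realized value in [0, 1]).
    Otherwise the decision is made normally and ALL conditional trades
    are voided and refunded. Prices therefore only ever reflect the
    randomized branch, which is what removes the selection confounding.

    `trades` is a list of (action, price_paid) pairs. Returns None when
    trades are voided, else a list of net payoffs per trade.
    """
    if rng.random() >= epsilon:
        return None  # normal decision; every trade refunded, net payoff 0
    action = rng.choice(list(outcomes))  # forced randomization
    return [(outcomes[action] - price) if a == action else 0.0
            for a, price in trades]
```

The cost of the design is exactly the tension noted above: traders' capital is tied up but only settles 5% of the time, which further starves an already thin market of liquidity.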
Yes. Also, in those two cases (CEO firing and commenter reinstatement) I genuinely don't know why a prediction market is more useful than just a quick poll: "hey, is this CEO a moron?" or "hey, should we let joe1946blah back on?". The latter seems potentially useful as a way for the decision-maker to draw on others' opinions, but we really don't need a market with live prices and randomisation for most things. It feels like a ton of work and I'm not sure why it's helpful.
The example that I really think is useful is the "100 random RCTs" one. Totally feasible to randomize which ones to do, plausible that a market would give better info than a survey, and worth the effort even for small improvements in accuracy.
I want this done! 100 RCTs as an RCT to test the efficacy of using RCTs is suitably Rube Goldbergian to make me excited.
I'm surprised you don't consider "the market decides" to be a solution to this. The original idea of decision markets is that the actions are taken on the basis of market prices, and under this structure causality seems like it might be handled just fine.
I don't have a rigorous proof of this - proof is difficult because decision theories tend to have vague "I know it when I see it" definitions to begin with. However, we can at least see that your objections are answered when the market decides. Suppose that the market prices express expectations E[Y|a] and E[Y|b] for some outcome Y and some pair of options {a, b}. You worry that whether a or b is chosen might be informed by some other events or states of the world which, if they transpired or were known to hold, would modify E[Y|a] and E[Y|b]. But if the choice is determined by the closing price of the market, then there obviously cannot be any events or states of the world that inform the choice but not the closing price.
It's not obvious to me that such markets can successfully integrate all of the available information by the time it closes. The closing price can, in general, reflect information about the world not reflected by the price before closing, and the price before closing is trying to anticipate any such developments. It seems like it usually ought to converge, but I can imagine there might be some way to bake self-reference into the market such that it does not converge. Also, once it becomes clear that one choice is preferred to another, there's little incentive to trade the loser, but this might not be much of a problem in practice. If convergence is a problem, adding some randomisation to the choice might help.
Also, there's always a way to implement "the market decides". Instead of asking P(Emissions|treaty), ask P(Emissions|market advises treaty), and make the market advice = the closing prices. This obviously won't be very helpful if no-one is likely to listen to the market, but again the point is to think about markets that people are likely to listen to.
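The "market advises" framing can be made mechanical. A minimal sketch (the action names are illustrative, and the optional epsilon implements the convergence safeguard of adding some randomisation to the choice):

```python
import random

def market_decides(closing_prices, epsilon=0.0, rng=random):
    """Pick the action whose conditional market closed highest.

    closing_prices: dict mapping action -> closing price, read as the
    market's estimate of E[outcome | that action]. Because the choice is
    a deterministic function of the closing prices, no event or state of
    the world can inform the choice without also informing those prices.
    With probability `epsilon`, choose uniformly at random instead.
    """
    if rng.random() < epsilon:
        return rng.choice(list(closing_prices))
    return max(closing_prices, key=closing_prices.get)
```

So P(Emissions | market advises treaty) resolves against `market_decides({"treaty": ..., "no treaty": ...})` evaluated at close, and the advice is by construction a function of nothing but the closing prices.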
Excellent article thx. I wrote: https://distbit.xyz/correlation-vs-causation-in-futarchy/ which summarises ideas from here and also makes the observation that the convexity caused by using market prices to make a decision can be used by an attacker to bias a market in favour of a certain action:
"In futarchy markets, new information asymmetrically affects bet values. Negative information can nullify the value of “bad outcome” shares by canceling the proposed action, while positive information increases the value of “good outcome” shares without limit. This asymmetry favors “good outcome” shares, skewing futarchy forecasts. This effect is akin to “convexity” in the context of options.
The magnitude of this effect is a function of the amount of new information expected to come to light before the decision deadline. As a result, an attacker could exploit this to increase the value of “good outcome” shares for their desired action, by selecting an action with outsized uncertainty/expected volatility. One naive way of achieving this is to announce that important information relating to the proposal will be announced at some time over the course of the futarchy market.
The reason the magnitude of the effect is a function of amt of new info expected to come to light, is that new info increases expected volatility, and since “good outcome” shares have positive gamma, they also have positive vega. Hence their price is positively correlated with expected volatility.
The attacker can potentially do this while maintaining plausible deniability, as many legitimate actions naturally have high uncertainty due to e.g. the action’s full implications only being realised during the course of the futarchy market.
Transparent attempts to create uncertainty could be prevented via use of a social backstop mechanism, to filter actions recommended by futarchy, before they are executed."
> Fortunately, there’s a good (and well-known) alternative, which is to randomize decisions sometimes, at random.
Do you have any references to where this has been discussed before? thx