50 Comments
Tony:

One way to really solve the issue is to use a (team of) superforecaster(s) instead of a market. Then there are no issues, since they don't get any information from a market price.

Alternatively, we can soon use AI, which seems on track to beat superforecasters.

And note that we can keep track of all the forecasts to see if the (team of) superforecaster(s) is accurate, and maybe pay in line with that.

Although it is not guaranteed based on past performance, it seems that superforecasters can very well distinguish P(A|B) from P(A|do(B)), and AI will likely be able to as well soon enough.

dynomight:

If you think about the incentives for your team of superforecasters, it's actually pretty hard to incentivize them to predict P(A|do(B)) rather than P(A|B). If you reward them based on their empirical performance, the same sort of issue arises. (Not sure it's a real problem in practice, though...)

Tony:

"it's actually pretty hard to incentivize them to predict P(A|do(B)) rather than P(A|B)"

I don't see how.

Let's say payment to the forecasters is proportional to the profits obtained; then their incentive is to discriminate P(A|do(B)) from P(A|do(not B)). And I think any decent forecaster can distinguish P(A|do(B)) from P(A|B).

On top of that, you record the predictions and outcomes and, over many predictions, give a (smallish*) bonus depending on how close the predictions are to the actual outcomes.

If the bonus is large, a possible problem is that the forecasters might lie and choose outcomes that they can predict more accurately. E.g., if firing Musk leads to profits equal to 3 with certainty, but not firing him leads to profits uniform between 4 and 10, forecasters paid on accuracy have an incentive to lie and say that profits if fired are 3 and if not fired are less than 3 (so that the perfectly predictable action gets chosen).
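To put rough numbers on that (a sketch in Python; the squared-error accuracy bonus is my own assumption, just to make the incentive concrete):

```python
# The scenario above: firing Musk gives profit 3 for sure; keeping him gives
# Uniform(4, 10) profit (mean 7). Assume the accuracy bonus penalizes squared
# error between the forecaster's report and the realized profit.
import numpy as np

rng = np.random.default_rng(0)
profits_keep = rng.uniform(4, 10, 1_000_000)   # profit if Musk is kept

# Honest report: fire -> 3, keep -> 7. The board keeps Musk (7 > 3).
honest_firm_profit = profits_keep.mean()                # ~7
honest_error = np.mean((profits_keep - 7.0) ** 2)       # ~3 (variance of U(4,10))

# Dishonest report: claim keep -> 2.9 (< 3), so the board fires. The outcome is
# exactly 3, matching the stated "fire -> 3" prediction perfectly.
dishonest_firm_profit = 3.0
dishonest_error = 0.0

print(honest_firm_profit, honest_error)        # firm does better, forecaster looks worse
print(dishonest_firm_profit, dishonest_error)  # firm does worse, forecaster looks perfect
```

So with a large accuracy bonus, the misreport wins for the forecaster even though it costs the firm about 4 in expected profit.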

dynomight:

> Let's say payment to the forecasters is proportional to the profits obtained; then their incentive is to discriminate P(A|do(B)) from P(A|do(not B))

I don't think so. They'll have incentives to predict P(A|B) or whatever. Unless you're actually choosing the action randomly, there's basically no way to exactly incentivize P(A|do(B)). The situation is pretty much the same as with prediction markets. You can take my final theorem and replace Y with "mean outcome of superforecasters". The same issue still exists.

Tony:

What the board says is:

"Predict profits one year from now if, right now, we fire Musk, and also if we don't fire him. Since we really want to know P(profits|do(fire Musk)) we give you all our private information -sign this NDA- and we will fire Musk if and only if you determine that E[profits|do(fire Musk now)]> E[profits|do(not fire Musk now)]. Your payment will be 0.1% of the profits we get in 1 year".

Here they really need to try to get E[profits|do(fire Musk now)], and this is very different from E[profits | Musk is fired]. In the latter case, you need to factor in all non-causal effects (maybe Musk got fired because profits plummeted (reverse causality), or he went crazy, or whatever), but not in the former.

dynomight:

I do understand you, I just really think you're wrong, sorry! :) The superforecasters will predict the outcome conditional on that action being taken, but the action is being taken based on the superforecaster predictions. So they have incentives to predict based on their estimates of P(profits | fire Musk chosen by superforecasters). Since other superforecasters will in general have extra information, that's different from their true belief of P(profits | do(fire Musk)).

Tony:

The key here is that the superforecasters act as a group and give a single prediction (for each action). To simplify further, consider a single superforecaster. Then there is no extra information in the prediction, and therefore no problem. Same for an AI (which distinguishes P(A|B) from P(A|do(B))).

Tony:

A comment on the Theorem.

I interpret Z as what we want to predict conditional on our action, which we take if Y is large enough. E.g., Z = profits increase, and we fire Musk if Y >= 50%.

But then, what we want the market to elicit is E[Z | Y>=c] (will profits increase if we fire Musk), and in your theorem, E1[Z | Y>=c] = E2[Z | Y>=c], so the market works as intended.

(If what we really care about is E[Z | Y>=c] - E[Z] (a profit increase is more likely if Musk is fired than if he's not), then we could run a regular prediction market on Z before ever announcing the conditional one (to elicit E[Z]), and then the fancy one to elicit E[Z | Y>=c]. If the information available doesn't change much between the 2 markets, it seems this should work.)

dynomight:

The context for the theorem is that you have two distributions over what would happen conditional on taking action 1 or action 2. We'd like the markets to elicit the causal effects of the action, so we'd like the expected payoff for buying a contract in market 1 to just be E1[Z] and the expected payoff for a contract in market 2 to be E2[Z]. The theorem shows there is no reward function f that will make E1[f]=E1[Z] and E2[f]=E2[Z] consistently, thus no way to make people always incentivized to bid their true beliefs.

(There isn't really any concept of E[Z], just different distributions reflecting what happens after each of the two actions, so I can't quite understand your comment.)

Tony:

OK, I thought P1 and P2 were the subjective probabilities of 2 people.

Now I guess Z is some outcome (e.g. profits), P1 is the world where action 1 is taken, and things make sense. Thanks!

I see that the issue is that the expected payout only elicits the expected outcome given that the market price is high enough, which might differ from the unconditional expected outcome.

Tony:

It seems to me there is a rather easy fix: ask for results conditional on making a decision, and take the decision based on the prediction market outcome.

Say the board of Tesla wants to fire Musk if that would increase its stock price.

Today, they set up a smart contract, which will fire Musk if the prediction market (see below) indicates that firing Musk would increase Tesla's stock price. The market closes in 5 days.

The prediction market has 2 components: a) pays 1 dollar if the stock price increases after Musk is fired AS A RESULT OF THIS MARKET, and it's cancelled if he's not fired AS A RESULT OF THIS MARKET; b) pays 1 dollar if the stock price increases after Musk is not fired AS A RESULT OF THIS MARKET, and it's cancelled if he is fired AS A RESULT OF THIS MARKET.

At market closing, Musk is fired if contract a is more expensive than b.
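A quick sketch of the rule, in case it helps (the function and contract names are just illustrative labels, not any real smart-contract code):

```python
# At close: fire Musk iff contract (a) is trading above contract (b), since a
# higher price for (a) means the market thinks the stock is more likely to rise
# conditional on firing. The contract for the action not taken is cancelled and
# bets are refunded.
def decide(price_a: float, price_b: float) -> str:
    return "fire" if price_a > price_b else "keep"

def resolve(action: str, stock_up: bool) -> dict:
    payout = 1.0 if stock_up else 0.0
    if action == "fire":
        return {"a": payout, "b": "refund"}
    return {"a": "refund", "b": payout}

print(decide(0.62, 0.48))               # -> "fire"
print(resolve("fire", stock_up=True))   # -> {"a": 1.0, "b": "refund"}
```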

The problem that remains is market manipulation: if Musk doesn't want to be fired, he can buy a lot of contract b. But this is a very different issue from the one discussed in the post.

dynomight:

If there's no possibility of Musk being fired except as a result of the market, doesn't that reduce to the case covered here? https://dynomight.net/futarchy/#putting-markets-in-charge-doesnt-work

Tony:

I did not get the subtlety. I think I do now, thanks!

So the problem is that you'd be paying something like P(a happens | we do b AND the market indicates we should do b).

And so it is not trivial whether that elicits true individual probability estimates. It does seem like that elicits inflated probabilities (because the "worlds in which a doesn't happen" mostly don't lead to the bet resolving).

It seems to depend on how many "informed bettors" there are. If there are too few, then the uninformed should not deviate too much from their true probs, otherwise their effect overwhelms that of the informed ones. And if there are many informed bettors, then maybe those largely determine the price, so the final price is never too far from the true probabilities? It seems an interesting problem to think about.

dynomight:

There's actually an experiment running on Manifold, where the traders are all trying to figure this out: https://manifold.markets/BoltonBailey/futarchys-fundamental-flaw-the-mark

It's not obvious to me exactly what the bets "should" converge to. (It all gets quite recursive.) But my argument is basically that there's no reason that P(a happens | we do b AND the market indicates we should do b) should be the same as P(a happens | we do b). And unless that's true, then I don't see any reason prices should reflect causal effects.
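Here's a toy simulation of that gap (the model and all the numbers are invented just for illustration; nothing here comes from the post or the Manifold market):

```python
# A hidden state s makes action b work (p=0.8) or fail (p=0.2), so the true causal
# probability P(Z=1 | do(b)) is 0.5. Traders get noisy signals about s, the market
# price is their average belief, and b is only "recommended" when that price >= 0.5.
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_traders, signal_acc = 200_000, 20, 0.7

s = rng.random(n_sims) < 0.5                        # hidden state, P(s) = 0.5
signals = rng.random((n_sims, n_traders)) < signal_acc
signals = np.where(s[:, None], signals, ~signals)   # each trader is right w.p. 0.7
posteriors = np.where(signals, signal_acc, 1 - signal_acc)   # P(s=1 | own signal)
beliefs = 0.8 * posteriors + 0.2 * (1 - posteriors)          # P(Z=1 | own signal)
price = beliefs.mean(axis=1)                                 # market price ~ mean belief

z = rng.random(n_sims) < np.where(s, 0.8, 0.2)      # outcome if b is taken
triggered = price >= 0.5                            # market "recommends" b

print("P(Z=1 | do(b))                    ", z.mean())             # ~0.50
print("P(Z=1 | do(b), market says do b)  ", z[triggered].mean())  # noticeably higher
```

The contract only pays out in the second, selected-for world, so even fully rational traders have no reason to price it at the causal 0.50.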

Tony:

Interesting. But in this market, no one has any information, so the price for A should be 60c, and the price for B 59c (or anything lower than that).

That is, if people are rational.

(I've just bet No on B, since its price was >59c.)

dynomight:

You should bet however you feel will make you the most profits. But I personally think that coin B has a 59% chance of being revealed to be all heads (hence worth $1) plus also some significant chance of being revealed to be all-tails AND the market price plummets after that is revealed meaning you get your bet back. So unless you think there's a 0% chance that coin A ends up with a higher price after coin B is revealed to be all-tails, the price should be (IMO) higher than 59c.
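Rough arithmetic, where q is a made-up parameter for "the chance that coin A ends up with the higher price after coin B is revealed to be all-tails" (and an N/A resolution just returns your cost):

```python
# Buying one B contract at price p: with prob 0.59 it pays $1 (B is all-heads and,
# in this simple version, still gets chosen); with prob 0.41 * q the market reacts,
# A wins, and the bet is refunded (worth p back); otherwise it pays $0.
# The break-even price solves p = 0.59 + 0.41 * q * p.
def break_even_price(q, p_all_heads=0.59):
    return p_all_heads / (1 - (1 - p_all_heads) * q)

for q in (0.0, 0.5, 0.9, 1.0):
    print(f"q = {q:.1f}  ->  break-even price ~ {break_even_price(q):.2f}")
# q = 0.0 -> 0.59, q = 1.0 -> 1.00
```

So anything above 59c is justified as soon as you think the market is at all likely to react to the all-tails reveal.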

Tony:

You're right! I missed that the "true nature" of coin B is "publicly" revealed before the market closes. This makes it more interesting.

Now the point is whether the market will react enough if coin B turns out to be all tails. At least all those that bet Yes on B should now bet No on B, since they'd make money if the bet goes through and also recover their previous bet if in the end only A goes through.

I guess the market is liquid enough, so betting No on B (as I did) seems rather stupid xD

Bolton:

Footnote two doesn't seem like an issue with futarchy to me. If people are finding out whether the coin is all-heads or all-tails and the market price follows that, then it seems like coin B will be activated exactly when it's all-heads, which is what we want.

H.....:

The conditional prediction market "coin comes up heads" should be denominated not in dollars but in contracts of whether the coin is flipped at all. i.e. if I want to trade in "coin comes up heads" then I exchange a dollar for a pair of contracts "coin is flipped" and "coin is not flipped". Then I can use fractions of my "coin is flipped" contract to buy "coin comes up heads", paying ~0.50 coin-is-flipped contracts. However this *is* actually now my true belief, because in the worlds where the coin is actually flipped, it must have been the case that the coin's bias was near 50%. Of course the shares of coin-is-flipped — denominated now in dollars — are themselves almost worthless, because no one really believes they'll ever be worth anything. They're worth epsilon, and coin-is-not-flipped is worth almost $1.

In general, contracts for B-conditional-on-A should be denominated in shares of A, not in dollars. Anyone is welcome to acquire shares of A by exchanging $1 for a pair of A and not-A. If A actually happens, then the contracts for B-conditional-on-A transform into shares of $1-if-B-unconditionally, and contracts for not-A are worthless; if A doesn't happen, then all the B-conditional-on-A become worthless and not-A get turned into $1.

If I'm opinionated on B-conditional-on-A but not B-conditional-on-not-A then I just trade with my A contract and hold onto my not-A contract; if not-A ends up happening then I get my dollar back and I haven't taken any risk (besides time cost).
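A minimal sketch of that settlement rule (the data model and names here are just mine, to make the bookkeeping concrete):

```python
# Shares of "A happens" are the currency of the conditional market: $1 mints one
# A share plus one not-A share; B-if-A contracts are bought with A shares; at
# resolution, B-if-A turns into $1-if-B when A happens and is worthless otherwise,
# while not-A pays $1 exactly when A doesn't happen.
from dataclasses import dataclass

@dataclass
class Holdings:
    a: float = 0.0       # "A happens" shares (pay $1 if A happens)
    not_a: float = 0.0   # "A doesn't happen" shares (pay $1 if A doesn't)
    b_if_a: float = 0.0  # conditional contracts, priced in A shares

def mint_pair(h: Holdings, dollars: float) -> None:
    h.a += dollars
    h.not_a += dollars

def buy_b_if_a(h: Holdings, contracts: float, price_in_a_shares: float) -> None:
    cost = contracts * price_in_a_shares
    assert h.a >= cost, "not enough A shares"
    h.a -= cost
    h.b_if_a += contracts

def settle(h: Holdings, a_happened: bool, b_happened: bool) -> float:
    if a_happened:
        return h.a + h.b_if_a * (1.0 if b_happened else 0.0)
    return h.not_a   # dollar back, no risk taken (besides time cost)

h = Holdings()
mint_pair(h, 1.0)
buy_b_if_a(h, contracts=2.0, price_in_a_shares=0.5)   # ~0.50 A shares each
print(settle(h, a_happened=False, b_happened=False))  # 1.0
print(settle(h, a_happened=True,  b_happened=True))   # 2.0
```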

Throw Fence:

That makes total intuitive sense. You've convinced me.

There's still a causal pathway for aspartame to do bad things though, and I still think you're missing it.

dynomight:

(Struggles to not dedicate entire life to defending the safety of aspartame…)

Throw Fence:

Dynomight, dear friend, I will force you if you let me.

Is your main argument still that aspartame quickly decomposes before it can do any harm? Because if it is, you're missing the main way it is causal.

dynomight:

(Struggle intensifies.)

Throw Fence:

You're not thinking carefully enough! The activated sweet receptors *are causal!* The body dumps insulin in anticipation of sugar that never arrives! There's no reason to think this *doesn't* cause havoc! The body is a complicated system!

dynomight:

I assure you this is not something I missed. I've of course heard that theory many, many, many times. It's a plausible-sounding causal link. But I think there's essentially no (credible) evidence that it's actually true and in fact there's decent (though not totally conclusive) evidence against it.

> Smeets and colleagues (107) have shown in a randomized crossover study in healthy individuals that there was no cephalic insulin response upon tasting of aspartame,

https://www.frontiersin.org/journals/nutrition/articles/10.3389/fnut.2020.598340/full

Throw Fence:

Sorry, I shouldn't have mentioned insulin specifically; what I meant to say is more the overall point that the metabolic system is very complicated, much more complicated than the simplistic sweetness->insulin link. I only threw it in there to give a visceral idea of how mere information *could* affect a physical system. But of course the brain and nervous system are more complicated than that and use this information to regulate a million things.

Therefore I think a good prior to start with is something like 99% certainty that messing with a main input signal like this causes problems. As a Bayesian, you can do the math yourself to calculate how much evidence you'd need to even get to "50% sure it's safe".
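(The arithmetic, taking that 99% prior as given:)

```python
# Starting at 1:99 odds that it's safe, reaching 1:1 ("50% sure it's safe")
# requires evidence with a combined likelihood ratio of about 99 in favor of safety.
prior_p_safe = 0.01
prior_odds = prior_p_safe / (1 - prior_p_safe)   # ~1/99
target_odds = 0.5 / 0.5                          # 1
print(target_odds / prior_odds)                  # 99.0
```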

I guess my argument boils down to: priors are off in all of nutritional science, and that caused us in the past to do insane things like putting partially hydrogenated oils (trans fats!) in nearly all of the food supply.

Boring Radical Centrism:

I've been using Manifold for fun/to sharpen myself for about a year now. I think prediction markets have a lot of use cases, but they fail too often to make me a hardcore supporter of futarchy. The biggest problem imo is that there really are just enormous amounts of dumb money, and probably even larger amounts on the real-money betting sites than on Manifold. What I'd like to see personally is to use prediction markets to create a meritocracy of advisers. Find the individuals who can best assess stuff like "Will this bill hurt the economy" and "Will Russia and Ukraine have a ceasefire by this year" and give them a job in government briefing politicians about stuff. Ideally it'd be a cultural thing too, and people would start only respecting social media discourse-starters who claim that the Demopublicans will throw all straight, white trans immigrants in camps if the discourse-starter has sufficient profit on the prediction markets.

Bolton:

If anyone is still reading this comment section and is interested in using Manifold to explore some of the ideas in this post, I created a market:

https://manifold.markets/BoltonBailey/futarchys-fundamental-flaw-the-mark

dynomight:

This is epic. I love it. [Edit: I retract my previous comment!] I think this is a great test and look forward to seeing the results.

dynomight:

[Edit: I retract this comment!]

Bolton:

If you come up with something for this, I would be happy to create another market for you to better capture the scenario!

dynomight:

Great, I'll PM you!

metachirality:

I think the reason there's a lot of dumb money is because there aren't enough people who are into prediction markets to correct it.

dynomight:

Why not just use "superforecasters" or similar?

BTW, you think the real-money betting sites are even dumber than Manifold? Why hasn't the market eaten that profit?

Boring Radical Centrism:

Superforecasters would work well too. I just like the idea of markets for keeping an objective record. Maybe even let regular people bet in some government markets and see themselves do worse than the superforecasters.

I'm not sure why the market hasn't eaten all the profit. Maybe the money being traded is low enough that it's not worth serious people getting involved, or the serious people aren't certain that an even more serious person with insider information is the reason why a really off market is so off.

Por Poisson:

What do you think of "self-resolving" markets? That way you can ask directly for P(B|do(A))...

https://arxiv.org/pdf/2306.04305

dynomight:

Haven't seen that before, but it seems like a crazy (and very interesting) idea. Thanks for the reference.

dynomight:

Nope, just a coincidence. (I think they were published within a few minutes of each other.)

Sol Hando:

This seems like a great opportunity for alpha. Conditional probability markets will gain traction if there's interest in betting on future events conditional on other events, and in places like Polymarket 90% of market participants probably don't have a good understanding of what they're trading.

I'm sure an intelligent person could come up with a system that breaks a conditional probability down into its component probabilities, allows the predictor to bet on each of those sub-probabilities, and then combines them into a final estimate, together with the time value of money.

For the Musk example, you could first estimate the reasons Musk wouldn't be CEO in the next year, covering 90% of the probability distribution (he dies, he gets fired, he hands the CEO role to someone else but remains TechnoKing, etc.), then estimate whether the stock price would be higher or lower in each of those cases, based of course on the 12-month options market for Tesla stock (it could be trading very low or high for some reason). Add the time value of money, and each of your predictions might be biased or wrong, but because you were dealing with sub-probabilities rather than the "I think if Musk gets fired Tesla stock will be worth 10% of what it is without his hype" reasoning that primarily motivates small markets like these, you might come up with a better probability.

I'm not sure if this would work, and in aggregate it wouldn't, as thousands of market bettors each think about their own "Musk isn't CEO anymore" probability and it averages out, but the fact that most people don't really understand conditional probability markets in the first place makes me think it's a good candidate for an inefficient market.
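Something like this, where every scenario and number is invented purely for illustration:

```python
# Law-of-total-probability combination of hand-estimated sub-probabilities:
# P(stock higher | Musk not CEO) = sum over exit scenarios of
#   P(scenario | Musk not CEO) * P(stock higher | scenario).
scenarios = {
    # name: (P(scenario | Musk not CEO in 12 months), P(stock higher | scenario))
    "fired by the board":     (0.30, 0.55),
    "steps down voluntarily": (0.40, 0.45),
    "health / other exit":    (0.30, 0.35),
}

p_higher = sum(p_s * p_up for p_s, p_up in scenarios.values())
print(f"P(stock higher | Musk not CEO) ~ {p_higher:.2f}")   # 0.45 with these numbers
```

Each sub-estimate can be sanity-checked (or hedged with options) separately, which is the claimed edge over eyeballing the headline conditional market.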

dynomight:

> I'm sure an intelligent person could come up with a system that breaks a conditional probability down into its component probabilities, allows the predictor to bet on each of those sub-probabilities, and then combines them into a final estimate, together with the time value of money.

Yes! This is more or less one of the potential solutions I was alluding to at the end. Roughly speaking, you could have an intelligent person make some assumptions and use the do-calculus to construct a set of conditional markets so that causal effects can be gleaned from the whole. I think that will work, at least often, but I need to look at the details more.

Yair Halberstadt:

This is all true, and I've been meaning to write a post about it for a while.

Nonetheless it's useful to quantify this. In different scenarios, how much does this in practice impact prices?

The question isn't whether prediction markets are perfect, it's whether they're better than the other alternatives we have available.

dynomight:

Yeah, I'm not trying to argue conditional prediction markets are bad. I think they're good! I'd just like to see more cognizance of this flaw.

Actually, I'm not sure they even need to be better than the alternatives? In general, it's best to combine many sources of information. It's hard to imagine prediction markets are so bad that they deserve *zero* weight.

Ben (Jun 12, edited):

Wow, I'm halfway through writing two articles: one on a *different* flaw of futarchy (manipulation), and the other on a potential fix to a number of issues with prediction markets (we need to re-invent mixed republics from first principles) so I was pleased to see this!

PS: Also, is there a particular reason you are choosing to spell Futarchy as "Futarky"?

dynomight:

> PS: Also, is there a particular reason you are choosing to spell Futarchy as "Futarky"?

Yes, I'm particularly an idiot! (Thanks for letting me know.)

Pjohn:

I had supposed it was some clever oblique reference to Noah's Ark, the Ark of the Covenant, or some other historical Ark-Based Risk-Management System....
