Why I generally don't recommend internal prediction markets or forecasting tournaments to organisations

mwstory.substack.com

Given the success of the Good Judgment Project (where I spent some happy years), the book Superforecasting, the US Intelligence Community Prediction Market and the plethora of other projects exploding out of IARPA’s early investment in forecasting research, I am often asked why more firms and organisations haven’t set up their own internal forecasting projects to harness the benefits of these systems to generate useful information about the future, and why the few who did take the plunge have

Thanks for writing this! I was planning to start a forecasting tournament in my org at work with around 100 people, but from talking to other people I couldn’t work out whether we could expect any useful signal before running it.

I think you’re spot on that it’s just a minority of people who enjoy logging into Metaculus et al. That’s a good reminder of why getting buy-in from a majority to participate in a forecasting tournament is such a hard sell.

There is another downside to forecasting tournaments, or any kind of prediction contest which has a significant winner-take-all component. Namely, basing your predictions on the actual probabilities of genuinely uncertain events is not a winning strategy if rewards are mostly for top finishers and if most players have the same information.

Essentially, gambling usually beats accurate modeling unless accuracy is genuinely hard / quite different from a general consensus estimate. The only way to fix this is to make losing proportionally painful (prediction markets handle this well).

Consider a game with a hundred players that involves predicting the probability of the outcome of each of twenty coin flips, with Brier scoring used to assign player scores and rewards going to the top three finishers, or some similar scheme.

An accurate player will assign 50% to each flip and always achieve a total score of 5 (0.25 per flip), which is the lowest expected score any player can achieve.
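The arithmetic behind that guaranteed score of 5 is easy to check. Below is a minimal sketch of the Brier scoring described above (the function name is mine, not from the comment): the honest 50% forecaster pays exactly (0.5 − o)² = 0.25 per flip whichever way the coin lands, so the total over twenty flips is always 5.

```python
import random

# Brier score: (forecast probability - outcome)^2 per question, summed.
# outcome is 1 if the event happened, 0 otherwise; lower is better.
def brier_total(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes))

flips = [random.randint(0, 1) for _ in range(20)]
# The honest forecaster's total is 5.0 regardless of how the coins land,
# since each flip contributes (0.5 - 0)^2 = (0.5 - 1)^2 = 0.25.
print(brier_total([0.5] * 20, flips))  # always 5.0
```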

Assuming players play strategically, this is very unlikely to win. The Nash equilibrium is complicated, but in general most players will end up making incorrect predictions for a significant number of flips, with a minority doing the same for a smaller number of flips, leaving only a very small chance (less than 1/P, where P is the number of players) that the correct prediction will take first place.

The same applies even if players are predicting outcomes rather than probabilities, as it may make sense to predict unlikely-but-possible events just to increase variance at the cost of accuracy, and thereby maximize the chance of being first.
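The variance argument above can be checked with a quick Monte Carlo sketch: one honest forecaster reports 50% on every flip, while each of the other 99 players goes all-in on a handful of randomly chosen flips and stays honest elsewhere. The gamblers' expected score (5.75 here) is worse than the honest 5.0, yet the honest player almost never takes strict first place, because out of 99 gamblers someone nearly always calls their gambled flips correctly and scores 4.25. The player count and the three-flip gamble are illustrative parameters of mine, not anything specified in the comment.

```python
import random

def brier_total(probs, outcomes):
    """Sum of (forecast - outcome)^2 over all questions; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes))

def honest_first_place_rate(n_players=100, n_flips=20, n_gambles=3,
                            n_trials=2000, seed=0):
    """Fraction of tournaments the honest 50% forecaster wins outright."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        flips = [rng.randint(0, 1) for _ in range(n_flips)]
        honest = brier_total([0.5] * n_flips, flips)  # always 5.0
        beaten = False
        for _ in range(n_players - 1):
            # Gambler: 50% everywhere except a few flips called at 0%/100%.
            probs = [0.5] * n_flips
            for i in rng.sample(range(n_flips), n_gambles):
                probs[i] = float(rng.randint(0, 1))
            # Calling all gambled flips right scores 17 * 0.25 = 4.25 < 5.0.
            if brier_total(probs, flips) < honest:
                beaten = True
                break
        if not beaten:
            wins += 1
    return wins / n_trials

print(honest_first_place_rate())
```

Each gambler beats the honest player with probability (1/2)³ = 1/8, so with 99 gamblers the chance that nobody does is (7/8)⁹⁹, on the order of one in a million; the printed rate is effectively zero despite the honest strategy having the best expected score.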

A good reality check. My perspective is that we can use these observations to work out how the best organisations could, if they chose, run better forecasting tournaments and prediction markets than those that have come before. https://twitter.com/thatMikeBishop/status/1600568351151083520?s=20&t=Ymthl5tlKxzVAn_AAaYmmA

Is “there there” a typo? Sorry, it seems Leo and I are grammar police today. I loved the article.

Minor typo: ICMB rather than ICBM.