The State of Forecasting: Dynamics, Challenges, Hopes

forecasting.substack.com

In short…

  • Forecasting platforms and prediction markets are partially making the pie bigger together, and partially undercutting each other.
  • The forecasting ecosystem adjusted after the loss of plentiful FTX money.
  • Dustin Moskovitz’s foundation (Open Philanthropy) is increasing its presence in the forecasting space, but my sense is that chasing its funding can sometimes be a bad move.
  • As AI systems improve, they become more relevant for judgmental forecasting practice.
  • Betting with real money is still frowned upon by the US powers that be, but the US isn’t willing to institute the oversight regime that would, in practice, keep people from making bets over the internet.
  • Forecasting hasn’t taken over the world yet, but I’m hoping that as people try out different iterations, someone will find a formula to produce lots of value in a way that scales.

One possible solution is to have people pay to get questions answered, and, as part of that payment, pay people with good reputations to act as oracles.

Yeah, this was part of how Augur's system worked: reward people who consistently end up on the 'right' side of a question's final resolution, AND require anyone answering the question to stake a portion of their reputation on the outcome they're judging. Eventually 'bad actors' (who are either malicious or too stupid to reliably interpret contracts) lose out, while the correct, consistent oracles accumulate more wealth and thus more influence over future resolutions.

This helped the system settle into an equilibrium where exploiting an apparent ambiguity was usually not worthwhile: you knew the wealthier oracles would ignore the ambiguity, and you'd lose money directly by trying to challenge them.
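The stake-and-slash dynamic described above can be sketched in a few lines. This is a toy model, not Augur's actual protocol: the names, the winner-take-most rule, and the pro rata redistribution are all simplifying assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Oracle:
    name: str
    reputation: float  # stake-able reputation tokens (illustrative)

def resolve(stakes):
    """stakes: list of (oracle, outcome, amount) tuples.
    The outcome backed by the most staked reputation wins; losing stakes
    are forfeited and redistributed pro rata among the winners."""
    totals = {}
    for oracle, outcome, amount in stakes:
        assert 0 < amount <= oracle.reputation, "can't stake more than you hold"
        totals[outcome] = totals.get(outcome, 0.0) + amount
    winner = max(totals, key=totals.get)
    losing_pool = sum(v for k, v in totals.items() if k != winner)
    winning_total = totals[winner]
    for oracle, outcome, amount in stakes:
        if outcome == winner:
            # winners gain a share of the forfeited stakes
            oracle.reputation += losing_pool * (amount / winning_total)
        else:
            # losers are slashed for the amount they staked
            oracle.reputation -= amount
    return winner
```

Run it a few rounds and the effect the comment describes emerges: oracles who keep landing on the losing side bleed reputation, so their stakes (and influence) shrink over time.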

I've been blown away by how bad otherwise intelligent people are at writing and interpreting resolution criteria.

Yep. There are plenty of bright-line rules for resolving ambiguity in legal contracts, and it can be permissible to pull in outside evidence to interpret them, but you have to think about the ENTIRE document in a systematic way; you can't just glance over it and interpret it based on vibes.

And glancing at things and going with your gut is how so, so many humans operate.

The problem is that there's always a tradeoff when you try to get as precise as possible with your wording: it makes the terms harder for laypeople to understand (and less likely that they'll read them all), and, paradoxically, it can open up a greater attack surface, because more wording means more places where ambiguities can arise.

This is where I imagine LLMs would have a role: if they are given a set of 'rules' by which all contracts are to be interpreted, if they can explain the contracts they read to laypeople, and if everyone agrees that the AI's interpretation is final, then you at least make it more challenging to play games with the wording.
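The idea above could be sketched as a pipeline where a fixed rule set is prepended to every interpretation request. Since no particular model or API is specified, the model is abstracted as an injected callable; the rules, prompt shape, and function names are all hypothetical assumptions, not an existing system.

```python
# Illustrative rule set; real deployments would need far more care here.
INTERPRETATION_RULES = (
    "1. Interpret terms by their plain meaning at the time of writing.\n"
    "2. Resolve any ambiguity against the party who drafted the term.\n"
    "3. Outside evidence may clarify a term but never override explicit text.\n"
)

def interpret_contract(contract_text: str, question: str, model) -> str:
    """Ask `model` for a binding interpretation under the fixed rules.
    `model` is any callable str -> str, e.g. a thin wrapper around an LLM API."""
    prompt = (
        f"Rules for interpretation:\n{INTERPRETATION_RULES}\n"
        f"Contract:\n{contract_text}\n\n"
        f"Question: {question}\n"
        "Answer with a single, binding interpretation."
    )
    return model(prompt)
```

The point of pinning the rules into every prompt is that all parties see the same interpretive procedure up front, which is what would make agreeing to treat the output as final plausible.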