NunoSempere
In short…
- Forecasting platforms and prediction markets are partially making the pie bigger together, and partially undercutting each other.
- The forecasting ecosystem adjusted after the loss of plentiful FTX money.
- Dustin Moskovitz’s foundation (Open Philanthropy) is increasing its presence in the forecasting space, but my sense is that chasing its funding can sometimes be a bad move.
- As AI systems improve, they become more relevant for judgmental forecasting practice.
- Betting with real money is still frowned upon by the US powers that be, but the US isn’t willing to institute the oversight regime that would keep people from making bets over the internet in practice.
- Forecasting hasn’t taken over the world yet, but I’m hoping that as people try out different iterations, someone will find a formula to produce lots of value in a way that scales.
In The American Empire has Alzheimer's, we saw how the US had repeatedly rebuffed forecasting-style feedback loops that could have prevented its military and policy failures. In A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform, we saw how Open Philanthropy, a large foundation, spent an additional $100M on a cause it no longer thought was optimal. In A Modest Proposal For Animal Charity Evaluators (ACE) (unpublished), we saw how ACE had moved away from quantitative evaluations, reducing its ability to find out which animal charities were best. In External Evaluation of the Effective Altruism Wiki, we saw someone spending his time less than maximally ambitiously. In My experience with a Potemkin Effective Altruism group (unpublished), we saw how an otherwise well-intentioned group of decent people mostly just kept chugging along, producing a negligible impact on the world. As for my own personal failures, I have just come out of spending the last couple of years on a bet on ambitious value estimation that flopped in comparison to what it could have been. I could go on.
Those and all other failures could have been avoided if only those involved had just been harder, better, faster, stronger. I like the word "formidable" as a shorthand here.
In this post, I offer some impressionistic, subpar, incomplete speculation about why my civilization, the people around me, and myself are just generally not as formidable as we could maximally be. Why are we not more awesome? Why are we not attaining the heights that might be within our reach?
These hypotheses are salient to me:
- Today's cultural templates and default pipelines don't create formidable humans.
- Other values, like niceness, welcomingness, humility, status, tranquility, stability, job security and comfort trade off against formidability.
- In particular, becoming formidable requires keeping close to the truth, but convenient lies and self-deceptions are too useful as tools to attain other goals.
- Being formidable at a group level might require exceptional leaders, competent organizational structures, or healthy community dynamics, which we don't have.
I'll present these possible root causes, and then suggest possible solutions for each. My preferred course of action would be to attack this bottleneck on all fronts.
Post continued here. I'm posting to The Motte since I really appreciated the high-quality comments here on previous posts.
The linked post seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations like
- selection effects at the level of which arguments are discovered and distributed,
- community epistemic problems, and
- increased uncertainty due to chains of reasoning with imperfect concepts
as real and important.
I'd be curious to get perspectives from the people of the Motte, e.g., telling me that I'm the crazy one & so on.
Regards,
Nuño.
Highlights:
- PredictIt nears its probable demise
- American Civics Exchange offers political betting to Americans using a weird legal loophole
- Forecasting community member Avraham Eisenberg arrested for $100M+ theft
- Forecasting Research Institute launches publicly
- Blogpost suggests that GiveWell use uncertainty, wins $20k (a toy sketch of the idea follows this list)
- Contrarian offers $500k bet, then chickens out
- Walter Frick writes a resource to introduce journalists to prediction markets
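On the GiveWell item above: the suggestion was, roughly, that cost-effectiveness estimates should carry explicit uncertainty rather than single point values. As a minimal sketch of the idea (the inputs and distributions below are invented for illustration, not taken from GiveWell or from the blogpost), a Monte Carlo propagation through a toy cost-effectiveness model looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented inputs, each modeled with its own uncertainty.
cost_per_treatment = rng.lognormal(mean=np.log(5.0), sigma=0.2, size=n)     # dollars
effect_per_treatment = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=n)  # outcome units
replication_adjustment = rng.beta(8, 2, size=n)                             # discount on the effect

cost_per_outcome = cost_per_treatment / (effect_per_treatment * replication_adjustment)

lo, mid, hi = np.percentile(cost_per_outcome, [5, 50, 95])
print(f"cost per unit of outcome: median ${mid:,.0f}, 90% interval ${lo:,.0f} to ${hi:,.0f}")
```

The output is then an interval rather than a single number, which changes how decisive a comparison between interventions looks.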
Highlights:
- Nuclear probability estimates spiked and spooked Elon Musk.
- Council on Strategic Risks hiring for a full-time Strategic Foresight Senior Fellow at $78k to $114k
- Markov Chain Monte Carlo Without all the Bullshit: Old blog post delivers on its title (a minimal sampler sketch follows this list).
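For readers wondering what the linked MCMC post covers, here is a generic random-walk Metropolis sampler; this is a textbook sketch rather than code from the post itself:

```python
import numpy as np

def metropolis(log_density, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a 1-D unnormalized log density."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    x, logp = x0, log_density(x0)
    for i in range(n_samples):
        proposal = x + step * rng.normal()
        logp_proposal = log_density(proposal)
        # Accept with probability min(1, p(proposal) / p(x)), done on the log scale.
        if np.log(rng.uniform()) < logp_proposal - logp:
            x, logp = proposal, logp_proposal
        samples[i] = x
    return samples

# Example: sample from a standard normal, known only up to a constant.
draws = metropolis(lambda x: -0.5 * x**2, x0=0.0, n_samples=20_000)
print(draws[5_000:].mean(), draws[5_000:].std())  # roughly 0 and 1 after discarding burn-in
```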
Highlights:
- PredictIt vs Kalshi vs CFTC saga continues
- Future Fund announces $1M+ prize for arguments which shift their probabilities about AI timelines and dangers
- Dan Luu looks at the track record of futurists
Highlights:
- CFTC asking for public comments about allowing Kalshi to phagocytize PredictIt’s niche
- $25k tournament by Richard Hanania on Manifold Markets.
- pastcasting.com allows users to forecast on already-resolved questions whose resolutions they don't know, which hopefully results in faster feedback loops and faster learning
- Hedgehog Markets now have automated-market-maker-based markets (see the sketch after this list)
- Jonas Moss looks at updating just on the passage of time (see the toy example after this list)
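On the Hedgehog Markets item: I don't know which mechanism they actually use, but a standard automated market maker for prediction markets is Hanson's logarithmic market scoring rule (LMSR). A minimal sketch of how such a market maker prices trades, purely for illustration:

```python
import numpy as np

def lmsr_cost(q, b):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * np.log(np.sum(np.exp(np.asarray(q) / b)))

def lmsr_prices(q, b):
    """Instantaneous prices (implied probabilities): softmax of q / b."""
    z = np.exp(np.asarray(q) / b)
    return z / z.sum()

b = 100.0        # liquidity parameter: larger b means deeper markets and slower-moving prices
q = [0.0, 0.0]   # outstanding YES and NO shares

print(lmsr_prices(q, b))                        # [0.5, 0.5] at launch

q_after = [50.0, 0.0]                           # a trader buys 50 YES shares
print(lmsr_cost(q_after, b) - lmsr_cost(q, b))  # what the trader pays
print(lmsr_prices(q_after, b))                  # YES price moves above 0.5
```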
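On the Jonas Moss item: I won't reproduce his actual analysis here, but the basic flavor of updating purely on the passage of time can be shown with a toy Bayesian calculation, assuming (only for illustration) that if the event happens at all within the window, its timing is uniform over that window:

```python
def update_on_time(p0, total_days, elapsed_days):
    """Posterior P(event by deadline) after elapsed_days with no event, under a uniform-timing assumption."""
    remaining = (total_days - elapsed_days) / total_days
    return p0 * remaining / (p0 * remaining + (1 - p0))

# Start at 40% that something happens within a year; six months pass without it happening.
print(update_on_time(0.40, 365, 182))  # roughly 0.25
```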
I'd prefer comments or questions here, on account of the themotte.org site being pretty young. Long live The Motte!