Small-Scale Question Sunday for September 3, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Whenever I read the "20xx predictions: Calibration results" posts that Scott Alexander publishes, I'm always struck by how hard it would be to fairly compare different pundits' prediction records, since any two people are naturally interested in two different sets of questions, and some questions are much harder than others.
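For concreteness, here's a minimal sketch (in Python, with made-up sample data) of what such a calibration check involves: bucket predictions by their stated confidence, then see what fraction in each bucket actually came true. It also computes a Brier score, a standard single-number summary, which illustrates the comparison problem: two pundits' scores are only comparable if their question sets are comparably hard.

```python
# Minimal calibration-check sketch. The prediction data below is
# hypothetical; each pair is (stated probability, did it happen?).
from collections import defaultdict

predictions = [
    (0.6, True), (0.6, False), (0.6, True),
    (0.8, True), (0.8, True), (0.8, False),
    (0.95, True), (0.95, True),
]

# Group outcomes by the confidence level the predictor claimed.
buckets = defaultdict(list)
for prob, happened in predictions:
    buckets[prob].append(happened)

# Well-calibrated predictions: the hit rate in each bucket should
# roughly match the claimed probability.
for prob in sorted(buckets):
    outcomes = buckets[prob]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"claimed {prob:.0%}: {hit_rate:.0%} came true "
          f"({len(outcomes)} predictions)")

# Brier score: mean squared error between stated probability and
# outcome (lower is better). Comparable across pundits only if the
# underlying questions are comparably difficult.
brier = sum((p - h) ** 2 for p, h in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")
```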

Then I remember that the status quo is not "pundits' predictions are published along with epistemic uncertainty levels, but their annual calibration records aren't easy to compare"; it's "the 'best' pundits express uncertainty only qualitatively and clam up when they're proven wrong, while the worst pundits make binary predictions and don't always even change their minds when they're proven wrong".

It's a shame that nobody's publishing calibration records for them. I guess that would be a public good in both the "good for the public" and "economically undersupplied" senses of the phrase. We can't even make it into a club good, since facts aren't copyrightable. Maybe this sort of thing could be supplied by harnessing culture-war hatred? I'm imagining each pundit's supporters/detractors assiduously adding successful/failed predictions to a shared database, incentivized because they don't want that pundit's evil detractors/supporters to bias the score by adding only the opposite kind.