Key excerpt (but it's worth reading the full thing):
But the real value-add of the model is not just in calculating who’s ahead in the polling average. Rather, it’s in understanding the uncertainties in the data: how accurate polls are in practice, and how these errors are correlated between the states. The final margins on Tuesday were actually quite close to the polling averages in the swing states, though less so in blue states, as I’ll discuss in a moment. But this was more or less a textbook illustration of the normal-sized polling error that we frequently wrote about [paid only; basically says that polling errors can be correlated between states]. When polls miss low on Trump in one key state, they probably also will in most or all of the others.
In fact, because polling errors are highly correlated between states — and because Trump was ahead in 5 of the 7 swing states anyway — a Trump sweep of the swing states was actually our most common scenario, occurring in 20 percent of simulations. Following the same logic, the second most common outcome, happening 14 percent of the time, was a Harris swing state sweep.6
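For intuition on why correlated errors make sweeps the most likely shape of the outcome, here's a toy Monte Carlo sketch. This is not Silver's model: the margins, the size of the polling error, and the between-state correlation below are all made-up illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical polling margins (Trump minus Harris, in points) for the seven
# swing states AZ, GA, NC, NV, PA, MI, WI -- illustrative numbers only,
# not Silver's actual averages.
margins = np.array([2.0, 1.0, 1.0, 0.5, -0.5, -1.0, -1.0])

n_sims = 100_000
sigma  = 3.5   # assumed total polling error per state (std dev, in points)
rho    = 0.6   # assumed correlation of polling errors between states

# Covariance matrix: equal variance, constant pairwise correlation.
cov = sigma**2 * (rho * np.ones((7, 7)) + (1 - rho) * np.eye(7))

# Each simulation draws one correlated error vector and adds it to the margins;
# a positive simulated margin means Trump carries that state.
errors     = rng.multivariate_normal(np.zeros(7), cov, size=n_sims)
trump_wins = (margins + errors) > 0

print(f"Trump sweeps all 7:  {trump_wins.all(axis=1).mean():.0%}")
print(f"Harris sweeps all 7: {(~trump_wins).all(axis=1).mean():.0%}")
```

Because the shared error term moves all seven margins together, sweeps by either candidate are far more common than they would be with independent errors (try `rho = 0` to see the difference).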
[Interactive table]
Relatedly, the final Electoral College tally will be 312 electoral votes for Trump and 226 for Harris. And Trump @ 312 was by far the most common outcome in our simulations, occurring 6 percent of the time. In fact, Trump 312/Harris 226 is the huge spike you see in our electoral vote distribution chart:
[Interactive graph]
The difference between 20 percent (the share of times Trump won all 7 swing states) and 6 percent (his getting exactly 312 electoral votes) is because sometimes, Trump winning all the swing states was part of a complete landslide where he penetrated further into blue territory. Conditional on winning all 7 swing states, for instance, Trump had a 22 percent chance of also winning New Mexico, a 21 percent chance at Minnesota, 19 percent in New Hampshire, 16 percent in Maine, 11 percent in Nebraska’s 2nd Congressional District, and 10 percent in Virginia. Trump won more than 312 electoral votes in 16 percent of our simulations.
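A small extension of the same toy setup shows why the sweep probability (20 percent in Silver's simulations) exceeds the probability of the exact 312-EV map (6 percent): conditional on a sweep, the correlated error sometimes spills into blue-leaning states and pushes the total higher. Again, the margins below are made-up placeholders (the EV counts are the states' actual 2024 allocations), not the model's inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Seven swing states plus a few blue-leaning states. Margins (Trump minus
# Harris, in points) are illustrative; EV counts are the real 2024 allocations.
states  = ["AZ", "GA", "NC", "NV", "PA", "MI", "WI", "NM", "MN", "NH", "VA"]
margins = np.array([2.0, 1.0, 1.0, 0.5, -0.5, -1.0, -1.0, -5.0, -5.5, -6.0, -7.0])
evs     = np.array([ 11,  16,  16,   6,   19,   15,   10,    5,   10,    4,   13])

n_sims, sigma, rho = 100_000, 3.5, 0.6
n = len(states)
cov  = sigma**2 * (rho * np.ones((n, n)) + (1 - rho) * np.eye(n))
wins = (margins + rng.multivariate_normal(np.zeros(n), cov, size=n_sims)) > 0

sweep = wins[:, :7].all(axis=1)   # Trump carries all 7 swing states
print(f"P(swing-state sweep): {sweep.mean():.0%}")
print(f"P(NM upset | sweep):  {wins[sweep, states.index('NM')].mean():.0%}")

# Tally Trump's EVs from the listed states in each simulation. The single most
# common total is rarer than the sweep itself, because some sweeps come with
# extra blue-state upsets that push the total higher.
totals = wins.astype(int) @ evs
vals, counts = np.unique(totals[sweep], return_counts=True)
print(f"Modal EV haul (listed states) given a sweep: {vals[counts.argmax()]}, "
      f"seen in {counts.max() / n_sims:.0%} of all simulations")
```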
But on Tuesday, there weren’t any upsets in the other states. So not only did Trump win with exactly 312 electoral votes, he also won with the exact map that occurred most often in our simulations, counting all 50 states, the District of Columbia and the congressional districts in Nebraska and Maine.
I don't know of an intuitive test for whether a forecast of a non-repeating event was well-reasoned (see also the lively debate over the performance of prediction markets), but this is Silver's initial defense of his 50-50 forecast. I'm unconvinced - if the modal outcome of the model was the actual result of the election, does that vindicate its internal correlations, indict its confidence in its output, both, neither...? But it's not irreconcilable that the model's modal outcome coming true vindicates its internal correlations AND that its certainty was limited by the quality of the available data, so this hasn't lowered my opinion of Silver, either.
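One check that is mechanical, at least over a long track record rather than a single election, is calibration: events forecast at 70 percent should happen about 70 percent of the time, and the Brier score compresses the same idea into one number. A minimal sketch with entirely made-up forecasts (nothing here is Silver's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up track record: 500 forecasts with stated probabilities, and outcomes
# drawn so that the forecaster is (by construction) roughly well-calibrated.
probs    = rng.uniform(0.05, 0.95, size=500)   # stated win probabilities
outcomes = rng.random(500) < probs             # True if the event happened

# Brier score: mean squared error of the probabilities (lower is better;
# always saying 50% scores exactly 0.25).
print(f"Brier score: {np.mean((probs - outcomes) ** 2):.3f}")

# Calibration table: within each probability bin, how often did events happen?
bins  = np.linspace(0, 1, 6)                   # 0-20%, 20-40%, ...
which = np.digitize(probs, bins) - 1
for b in range(5):
    mask = which == b
    if mask.any():
        print(f"forecast {bins[b]:.0%}-{bins[b+1]:.0%}: "
              f"avg stated {probs[mask].mean():.0%}, "
              f"actual frequency {outcomes[mask].mean():.0%} (n={mask.sum()})")
```

That's the sense in which a commenter below says Silver should get credit for being well-calibrated: the test applies to a portfolio of calls over time, not to any single headline number.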
Seriously. Of course he is going to claim credit, but he shouldn't get credit for having his model hedge more than others. But apparently I don't understand statistics, because I think he shouldn't get credit for hedging more.
Why shouldn't he get credit for hedging his model more than others?
Because absolutely no last-minute polls existed that justified his sudden shift the day prior to Election Day 2016. Nate knew something was wrong with the polling, and put his thumb on the scale to make Trump look better than his model said.
He should get credit for being well-calibrated. If he is always right with his confident predictions and mostly right with his hedged predictions, then he is doing the right thing.
Giving him credit for that did incentivize him to hedge more and more, until we reach this ridiculous point where he tries to take credit for a 50/50 prediction.
The problem is that you can't evaluate how well the model did based on just the probability of winning, except in the binary correct-or-not sense, and he was not correct in 2016. Maybe whatever he is doing in this post is good, but it sure didn't translate into the final prediction, so he doesn't get credit for that either.