Key excerpt (But it's worth reading the full thing):
But the real value-add of the model is not just in calculating who’s ahead in the polling average. Rather, it’s in understanding the uncertainties in the data: how accurate polls are in practice, and how these errors are correlated between the states. The final margins on Tuesday were actually quite close to the polling averages in the swing states, though less so in blue states, as I’ll discuss in a moment. But this was more or less a textbook illustration of the normal-sized polling error that we frequently wrote about [paid only; basically says that polling errors could be correlated between states]. When polls miss low on Trump in one key state, they probably also will in most or all of the others.
In fact, because polling errors are highly correlated between states — and because Trump was ahead in 5 of the 7 swing states anyway — a Trump sweep of the swing states was actually our most common scenario, occurring in 20 percent of simulations. Following the same logic, the second most common outcome, happening 14 percent of the time, was a Harris swing state sweep.
[Interactive table]
Relatedly, the final Electoral College tally will be 312 electoral votes for Trump and 226 for Harris. And Trump @ 312 was by far the most common outcome in our simulations, occurring 6 percent of the time. In fact, Trump 312/Harris 226 is the huge spike you see in our electoral vote distribution chart:
[Interactive graph]
The difference between 20 percent (the share of times Trump won all 7 swing states) and 6 percent (his getting exactly 312 electoral votes) is because sometimes, Trump winning all the swing states was part of a complete landslide where he penetrated further into blue territory. Conditional on winning all 7 swing states, for instance, Trump had a 22 percent chance of also winning New Mexico, a 21 percent chance at Minnesota, 19 percent in New Hampshire, 16 percent in Maine, 11 percent in Nebraska’s 2nd Congressional District, and 10 percent in Virginia. Trump won more than 312 electoral votes in 16 percent of our simulations.
But on Tuesday, there weren’t any upsets in the other states. So not only did Trump win with exactly 312 electoral votes, he also won with the exact map that occurred most often in our simulations, counting all 50 states, the District of Columbia and the congressional districts in Nebraska and Maine.
I don't know of an intuitive test for whether a forecast of a non-repeating event was well-reasoned (see, also, the lively debate over the performance of prediction markets), but this is Silver's initial defense of his 50-50 forecast. I'm unconvinced: if the modal outcome of the model was the actual result of the election, does that vindicate its internal correlations, indict its confidence in its output, both, neither...? Still, the two readings aren't irreconcilable — the modal outcome coming true can vindicate the model's internal correlations while its certainty was limited by the quality of the available data — so this hasn't lowered my opinion of Silver, either.
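Silver's correlation point can be made concrete with a toy Monte Carlo (my own sketch — the margins and error sizes below are made-up illustrative numbers, not the model's actual inputs). Each simulation draws one shared national error plus small independent state-level noise, so misses point the same way in most states, and a sweep of all seven becomes the single most common scenario even though the top-line race is close:

```python
import random

random.seed(0)

# Hypothetical final polling margins (Trump minus Harris, in points)
# for seven imaginary swing states. Purely illustrative.
poll_margins = [1.0, 0.5, 0.3, 0.2, 0.1, -0.3, -0.8]

N = 100_000
sweep_trump = sweep_harris = 0
for _ in range(N):
    shared = random.gauss(0, 3.5)  # one correlated national-level error per sim
    wins = 0
    for m in poll_margins:
        noise = random.gauss(0, 1.5)  # smaller independent state-level noise
        if m + shared + noise > 0:
            wins += 1
    if wins == 7:
        sweep_trump += 1
    elif wins == 0:
        sweep_harris += 1

print(f"Trump sweep:  {sweep_trump / N:.0%}")
print(f"Harris sweep: {sweep_harris / N:.0%}")
```

Because the shared error dominates the state noise, the two sweeps together eat up a large share of the simulations, with the Trump sweep slightly more common since he leads in five of the seven made-up margins. If the state errors were independent instead, a sweep would require seven separate coin flips to break the same way and would be rare.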
Notes -
Selzer was also an oracle up until random-digit dialing in Iowa stopped working. The shitting on Nate is mean-spirited, but the reality is that polling methods haven't adjusted for Trump and the response bias issue with his supporters — not to mention the way the game has changed every election he's fought. The models were, I'm sure, statistically beautiful, but garbage in, garbage out.
Selzer deserves to be knocked down a peg (although I think "totally ignored" as some want it seems excessive). But that's mainly because she was incredibly far off on the final result. The equivalent for Nate would be if he were predicting a Harris +7 PV win or similar.
With hindsight, given that Trump has been underestimated three times in a row now, it seems reasonable to think that polls have likely systematically underestimated Trump. But before we got the results, it was only twice in a row, and so a lot more likely to be a coincidence. Nate has written extensively on how hard it is to predict the direction of a polling error, and many have been burnt assuming that one or two polling errors in a row necessarily predict a polling error in the same direction in the next election. And with Trump off the ballot in 2028, we'll be back to square one.
"Selzer was also an oracle up until random-digit dialing in Iowa stopped working." I certainly remember back in 2004, when she confidently predicted a Kerry win in Iowa.
Anyone looking at Selzer's methodology should be discounting her on that basis alone.
"...the reality is that polling methods haven't adjusted for Trump and the response bias issue with his supporters..."
This is emphatically not a Trump issue.