Nate Silver: The model exactly predicted the most likely election map

natesilver.net

Key excerpt (But it's worth reading the full thing):

But the real value-add of the model is not just in calculating who’s ahead in the polling average. Rather, it’s in understanding the uncertainties in the data: how accurate polls are in practice, and how these errors are correlated between the states. The final margins on Tuesday were actually quite close to the polling averages in the swing states, though less so in blue states, as I’ll discuss in a moment. But this was more or less a textbook illustration of the normal-sized polling error that we frequently wrote about [paid only; basically says that the polling errors could be correlated between states]. When polls miss low on Trump in one key state, they probably also will in most or all of the others.

In fact, because polling errors are highly correlated between states — and because Trump was ahead in 5 of the 7 swing states anyway — a Trump sweep of the swing states was actually our most common scenario, occurring in 20 percent of simulations. Following the same logic, the second most common outcome, happening 14 percent of the time, was a Harris swing state sweep.6

[Interactive table]

Relatedly, the final Electoral College tally will be 312 electoral votes for Trump and 226 for Harris. And Trump @ 312 was by far the most common outcome in our simulations, occurring 6 percent of the time. In fact, Trump 312/Harris 226 is the huge spike you see in our electoral vote distribution chart:

[Interactive graph]

The difference between 20 percent (the share of times Trump won all 7 swing states) and 6 percent (his getting exactly 312 electoral votes) is because sometimes, Trump winning all the swing states was part of a complete landslide where he penetrated further into blue territory. Conditional on winning all 7 swing states, for instance, Trump had a 22 percent chance of also winning New Mexico, a 21 percent chance at Minnesota, 19 percent in New Hampshire, 16 percent in Maine, 11 percent in Nebraska’s 2nd Congressional District, and 10 percent in Virginia. Trump won more than 312 electoral votes in 16 percent of our simulations.

But on Tuesday, there weren’t any upsets in the other states. So not only did Trump win with exactly 312 electoral votes, he also won with the exact map that occurred most often in our simulations, counting all 50 states, the District of Columbia and the congressional districts in Nebraska and Maine.
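The correlated-error logic in the excerpt can be sketched with a toy Monte Carlo. The margins and error sizes below are illustrative assumptions of mine, not Silver's actual inputs: each simulation draws one shared national polling error plus independent per-state noise, then compares sweep frequencies against a baseline where state errors are independent but have the same total variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical final polling margins (Trump minus Harris, in points) for the
# 7 swing states, with Trump ahead in 5 -- illustrative, not Silver's numbers.
margins = np.array([2.0, 1.5, 1.0, 1.0, 0.5, -0.5, -1.0])

n_sims = 100_000
shared_sd, state_sd = 2.5, 1.5  # assumed error components, in points

# Correlated case: one shared national polling error plus per-state noise.
shared = rng.normal(0.0, shared_sd, size=(n_sims, 1))
local = rng.normal(0.0, state_sd, size=(n_sims, 7))
sims = margins + shared + local

trump_sweep = (sims > 0).all(axis=1).mean()
harris_sweep = (sims < 0).all(axis=1).mean()

# Independent-errors baseline with the same total variance per state.
total_sd = np.hypot(shared_sd, state_sd)
indep = margins + rng.normal(0.0, total_sd, size=(n_sims, 7))
indep_sweep = (indep > 0).all(axis=1).mean()

print(f"correlated errors:  Trump sweep {trump_sweep:.1%}, Harris sweep {harris_sweep:.1%}")
print(f"independent errors: Trump sweep {indep_sweep:.1%}")
```

With correlated errors, a sweep by one candidate or the other becomes one of the most common outcomes; with independent errors it is rare, which is exactly the distinction Silver is drawing.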

I don't know of an intuitive test for whether a forecast of a non-repeating event was well-reasoned (see, also, the lively debate over the performance of prediction markets), but this is Silver's initial defense of his 50-50 forecast. I'm unconvinced - if the modal outcome of the model was the actual result of the election, does that vindicate its internal correlations, indict its confidence in its output, both, neither... ? But I don't think it's irreconcilable that the model's modal outcome being real vindicates its internal correlations AND that its certainty was limited by the quality of the available data, so this hasn't lowered my opinion of Silver, either.
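The single-event scoring problem gestured at above can be made concrete with the Brier score, a standard accuracy measure for probabilistic forecasts: a 50-50 forecast earns the same score whichever way the event goes, so one election by itself can neither reward nor punish it.

```python
def brier(forecast_p: float, outcome: int) -> float:
    """Brier score for one binary event: (p - outcome)^2, lower is better."""
    return (forecast_p - outcome) ** 2

# A 50-50 forecast scores 0.25 no matter what happens:
print(brier(0.5, 1), brier(0.5, 0))   # 0.25 0.25

# Sharper forecasts only separate from it across many events:
# a confident correct call scores near 0, a confident miss near 1.
print(brier(0.9, 1), brier(0.9, 0))
```

This is why forecast evaluation usually needs a track record of many calls (or, as with Silver's defense here, an appeal to the model's internal structure) rather than a single resolution.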


Nate wasn't heralding before the election that this 6% was the modal outcome, so it wasn't really useful information.

I don't have links or citations, and most of his commentary was paywalled so I only saw the public-facing free posts, but as far as I remember, he very much made the point that the '50-50' odds didn't mean the final result would be a 50-50 split. His article on polls 'herding' pointed out that polls had a standard margin of error, and thanks to herding it was impossible to say if they would fall one way (polls systematically undercount Kamala's support, and she sweeps the 7 swing states) or the other (polls undercount Trump and he sweeps the 7 swing states). However, by far the most likely outcome was one or the other. I don't think he specifically called out the modal outcome (Trump wins 312 EC votes) as such, but it was clear to me going in that the final result of the night would be a) very close in the popular vote and b) probably a blowout in the EC.

I was liveblogging the Election Night with my high school 'Government & Economics' class, and I sent them Silver's article on herding for the class to read beforehand, with this commentary:

There's a statistical concept called 'herding' that seems to be affecting many (most?) swing-state polls. Pollsters don't want to be wrong, or at least not more wrong than the rest of the field, so if their poll shows results that are different than the average, they stick 'em in a filing cabinet somewhere and don't publish them. The problem is, we don't know what those unpublished polls say, so the state of the race may be considerably different than the current forecasts -- either more in Kamala Harris' favor, or Trump's. It's very unlikely for this election to be a blowout in the popular vote (though a small swing in popular vote could result in a major electoral college win for one candidate) but be warned that the Presidential results may be quite a bit different than your current expectations.

I followed Silver's model closely, as well as Polymarket, and I was not surprised by the Election Night results. I understood that there was a lot of uncertainty, and that 'garbage in, garbage out' in terms of polls herding (and in terms of that Selzer poll), and I found myself highly impressed at Silver's analysis of the race.

And here was my commentary at the end of Election Night:

the polls were absolutely right about how close this election was. Trump's results tonight are very much within the 'expected error' for most polls -- he isn't winning by 5% or 10% nationwide. The polls indicated that Kamala was favored to win the popular vote by about 1%, but with 'error bars' of +/- 3% or so. Trump is currently expected to win the national popular vote by about 1%, a swing of about 2%. That small amount is enough to push a bunch of swing states into his win column in the Electoral Vote count, but I want to emphasize that even though he's favored to win, and he almost certainly will win a huge majority in the Electoral College, this was still a nail-biter of an election.

this was still a nail-biter of an election.

It wasn't as close as 2020 in terms of the number of votes, but the margin between a Trump win and a Harris victory was still only ~300k votes across the key swing states.

How can news sites call it so early if it's such a small margin at the end?

The number of votes you need to form a representative sample is smaller than a lot of people think. Once the first few thousand votes are counted in any given county, you have a very good sense of how the rest of that county's vote will be distributed, with a relatively small margin of error. So after a certain number of counties start reporting results, the more lopsided states often quickly reach a point where the outcome is effectively decided, regardless of how the votes in the remaining counties break. And in closer states like the swing states, once all the areas are reporting and have a large enough sample of results, even what seems like a relatively small margin (like 51% to 48%) can give you the confidence to call a final result, on the more-or-less ironclad assumption that the rest of the votes to be counted will have a very similar distribution.

It's really only on the very very close races that it might take more than a day, or multiple days, to arrive at a result.
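The projection logic described above can be sketched as a toy decision rule (my construction, not any network's actual decision-desk model): project the outstanding ballots at the counted split, then ask whether even a generous swing toward the trailing candidate could still close the gap.

```python
def can_call(leader_votes: int, trailer_votes: int, outstanding: int,
             max_swing: float = 0.10) -> bool:
    """Toy race-call check: assume outstanding ballots split like the counted
    vote, then test whether a `max_swing` (assumed 10-point) shift of the
    leader's share among the outstanding ballots could flip the result."""
    counted = leader_votes + trailer_votes
    leader_share = leader_votes / counted
    worst_share = max(0.0, leader_share - max_swing)  # pessimistic for leader
    # Projected final margin under the pessimistic split of what's left.
    projected_lead = (leader_votes - trailer_votes) + (2 * worst_share - 1) * outstanding
    return projected_lead > 0

# Lopsided state: 60-40 with a million ballots still out -- callable early.
print(can_call(600_000, 400_000, 1_000_000))   # True

# 51-49 with a quarter of the vote outstanding -- too close to call yet.
print(can_call(510_000, 490_000, 250_000))     # False
```

Real decision desks use county-level historical baselines and much finer uncertainty modeling, but the shape of the calculation -- lead versus plausible movement in the uncounted vote -- is the same.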

Pollsters don't want to be wrong, or at least not more wrong than the rest of the field, so if their poll shows results that are different than the average, they stick 'em in a filing cabinet somewhere and don't publish them.

We should require pre-registration of polls. Have some of the major news networks say they won't publish them unless they are registered, in advance, with a clear notion of when they will take place and when the results will be reported.