
Nate Silver: The model exactly predicted the most likely election map

natesilver.net

Key excerpt (but it's worth reading the full thing):

But the real value-add of the model is not just in calculating who’s ahead in the polling average. Rather, it’s in understanding the uncertainties in the data: how accurate polls are in practice, and how these errors are correlated between the states. The final margins on Tuesday were actually quite close to the polling averages in the swing states, though less so in blue states, as I’ll discuss in a moment. But this was more or less a textbook illustration of the normal-sized polling error that we frequently wrote about [paid only; basically says that the polling errors could be correlated between states]. When polls miss low on Trump in one key state, they probably also will in most or all of the others.

In fact, because polling errors are highly correlated between states — and because Trump was ahead in 5 of the 7 swing states anyway — a Trump sweep of the swing states was actually our most common scenario, occurring in 20 percent of simulations. Following the same logic, the second most common outcome, happening 14 percent of the time, was a Harris swing state sweep.6

[Interactive table]

Relatedly, the final Electoral College tally will be 312 electoral votes for Trump and 226 for Harris. And Trump @ 312 was by far the most common outcome in our simulations, occurring 6 percent of the time. In fact, Trump 312/Harris 226 is the huge spike you see in our electoral vote distribution chart:

[Interactive graph]

The difference between 20 percent (the share of times Trump won all 7 swing states) and 6 percent (his getting exactly 312 electoral votes) is because sometimes, Trump winning all the swing states was part of a complete landslide where he penetrated further into blue territory. Conditional on winning all 7 swing states, for instance, Trump had a 22 percent chance of also winning New Mexico, a 21 percent chance at Minnesota, 19 percent in New Hampshire, 16 percent in Maine, 11 percent in Nebraska’s 2nd Congressional District, and 10 percent in Virginia. Trump won more than 312 electoral votes in 16 percent of our simulations.

But on Tuesday, there weren’t any upsets in the other states. So not only did Trump win with exactly 312 electoral votes, he also won with the exact map that occurred most often in our simulations, counting all 50 states, the District of Columbia and the congressional districts in Nebraska and Maine.

I don't know of an intuitive test for whether a forecast of a non-repeating event was well-reasoned (see also the lively debate over the performance of prediction markets), but this is Silver's initial defense of his 50-50 forecast. I'm unconvinced - if the modal outcome of the model was the actual result of the election, does that vindicate its internal correlations, indict its confidence in its output, both, neither...? But the two aren't irreconcilable: the modal outcome being the real one can vindicate the model's internal correlations while its certainty was still limited by the quality of the available data. So this hasn't lowered my opinion of Silver, either.
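To make the correlation point concrete, here is a minimal Monte Carlo sketch (my own toy, not Silver's actual model; the polled margins and error sizes are made-up illustration values) of how a shared error term makes a sweep the modal map even in a near-50/50 race:

```python
# Toy simulation: correlated vs independent polling errors across 7 swing
# states. Margins are hypothetical Trump-minus-Harris poll leads, in points.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
states = ["AZ", "GA", "NC", "NV", "PA", "MI", "WI"]
polled_margin = np.array([2.0, 1.5, 1.0, 0.5, -0.2, -0.8, -1.0])

def simulate(n, sigma_shared, sigma_indep):
    # One shared (correlated) error per simulated election, plus an
    # independent error per state.
    shared = rng.normal(0.0, sigma_shared, size=(n, 1))
    indep = rng.normal(0.0, sigma_indep, size=(n, len(states)))
    margins = polled_margin + shared + indep
    return Counter(tuple(m > 0) for m in margins)  # True = Trump wins state

n = 100_000
for sigma_shared, sigma_indep in [(0.0, 3.5), (3.0, 1.8)]:
    counts = simulate(n, sigma_shared, sigma_indep)
    sweep = counts[tuple([True] * 7)] / n
    modal = counts.most_common(1)[0][1] / n
    print(f"shared={sigma_shared}: Trump sweep {sweep:.1%}, modal map {modal:.1%}")
```

With the errors mostly shared, the single most common map is a full sweep by one side; with independent errors, the probability mass spreads thinly over many mixed maps.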


Is this the same Nate Silver, the mighty predictor? Or somebody impersonating him? https://x.com/RyanGirdusky/status/1855215191102750879/photo/1

More seriously, though, I just can't understand any meaningful way in which you can accurately predict Trump winning 312 electoral votes and at the same time say his chance of winning the electoral college is a coin toss. These don't seem compatible in any sensible way. Maybe you can invent some statistical trick to make it sound good, but on the plain common-sense meaning it just makes zero sense.

Nate did nothing wrong. He is in the profession of polling, and must therefore believe that polls are directionally correct with some margin for error. He's then spent his entire life creating better priors, correlational models and ensembles to reduce that 'margin for error'.

A doctor doesn't question if germs exist. A mechanical engineer doesn't question Newton's laws. Similarly, Nate is incapable of questioning whether polls contain any signal whatsoever. Modern polling is in its 2008-CDO phase. You can't take 100 bad loans and roll them up into a AAA financial vehicle by citing diversification. Similarly, you can't ensemble broken polling data into any information of value.

Polls are useless for 2 reasons:

  • When the margins are narrow (2020, 2024), polls are too noisy to get anything of value
  • When the margins are large (2008, 1996), no one needs polls. The vibes are obvious

In a year with a wild-card ex-president, an incumbent president withdrawing, a VP who has never fought a competitive race, an assassination attempt, a fresh war in Israel, a lumbering war in Ukraine and a technically strong economy with terrible optics (lingering inflation from 2020-2021)... all your priors go into the dumps.

Even when polling does work (not often), it assumes a 'normal' year. In a 'normal' setup, 3 things would have gone differently:

  1. No assassination attempt (at least not an unsuccessful one)
  2. Biden never runs, and the Dems choose their candidate in an open primary, likely picking a Middle-America candidate with better speaking chops than Kamala.
  3. October 7th doesn't happen. (It splintered the Dems. The actual war & Muslim votes don't matter, but white progressives do, and they broke rank.)

If these 3 hadn't happened, Trump would've still won. But the Dem candidate could totally have flipped AZ, NV, Wisconsin & Michigan. Still 2 short, and PA was going Trump either way. However, in this world, Nate's predictions would have been a good proxy for the real results.

Alas, that never came to be.


Nate's 2014 victory dance is revealing. [1]

You may have heard the phrase the plural of anecdote is not data. It turns out that this is a misquote. The original aphorism, by the political scientist Ray Wolfinger, was just the opposite: The plural of anecdote is data.

Wolfinger’s formulation makes sense: Data does not have a virgin birth. It comes to us from somewhere. Someone set up a procedure to collect and record it. Sometimes this person is a scientist, but she also could be a journalist. -Nate Silver.

In writing this paragraph, Nate Silver fails to understand why the quote "The plural of anecdote is not data" took off, and dooms his predictions for good.

[1] https://fivethirtyeight.com/features/what-the-fox-knows/


I'll leave you with my favorite stats quotes:

Statistics are like bikinis. What they reveal is suggestive, but what they conceal is vital -Aaron Levenstein

There are three kinds of lies: lies, damned lies, and statistics - Mark Twain (maybe)

The plural of anecdote is not data - Not Ray Wolfinger


Nate did nothing wrong.

I disagree. If he were a modest statistician who just worked his models the best way he could, produced the results, and let people interpret them as they will, without pretense, then he'd have done nothing wrong. But as my link above suggests, he thinks his models reveal the deeper truth about the actual structure of the world, and that his way of revealing it is superior to any other possible way - so much superior that he is justified in treating anyone who suggests the world may differ from what his models say with the condescension and disdain a physicist would show somebody who denies the existence of Newton's laws. The problem, of course, is that this pretense of superiority is revealed, again and again, as false, and then complicated explanations are concocted for why he has technically been correct all along and his opponents have only appeared correct by some weird fluke. I think this is wrong. If you are in the prediction business and you predict wrong, you should at least eat crow and be humbled. Otherwise you are in the scamming-the-gullible business.

You can easily imagine such forecasts

For example, imagine this forecast:

  • Trump wins every swing state but nothing else (30%)
  • Trump wins every swing state plus some extra (5%)
  • Trump wins a split of swing states and wins the election (15%)
  • Harris wins every swing state (25%)
  • Harris wins a split decision of swing states (20%)
  • Harris wins every swing state plus some extra (5%)

(where the swing states are GA, NC, MI, WI, PA, NV, AZ)

Basically, this forecast would be 50/50 while treating correlated polling error as a very strong effect.
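Tallying that toy forecast up (numbers copied from the comment above):

```python
# The race is 50/50 overall, yet sweeps dominate split decisions,
# which is exactly what strong correlated polling error looks like.
trump = 0.30 + 0.05 + 0.15    # sweep / sweep plus extra / split but wins
harris = 0.25 + 0.05 + 0.20   # sweep / sweep plus extra / split but wins
print(trump, harris)              # 0.5 0.5
print(0.30 + 0.05 + 0.25 + 0.05)  # 0.65 -- a sweep either way in ~2/3 of outcomes
```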

For years now (since at least 2018), whenever my parents ask me "who's going to win?", I point them to Nate Silver and say I trust whatever he says. And I don't think I have been proven wrong yet. For this election, my gut was telling me Harris was gonna win. I won't deny that I was only reading stories and news from a very left-leaning space, but I had incredibly high stakes in this election and no responsibility (still on a work visa), and venturing into right-leaning spaces meant more mental energy than I could bear. Even on election night, while scrolling through fivethirtyeight's prediction thread, I kept telling myself "how can these guys think Trump is gonna win?". The only voice of reason in my head was Nate Silver (and on some level Ezra Klein and that Astead/Run Up podcast from the NYT). Silver himself said "Don't trust your gut" a week or two before the election, and look at that, my gut was wrong. This election has proven to me once again that polling works, and that data doesn't lie; it's the people that read the data that lie to themselves and others.

I am unimpressed with those who have called out Nate as supposedly being obviously wrong about [latest election] without looking at his overall track record and comparing their own with his.

In any given election, if Nate doesn't absolutely nail it, he inevitably gets a barrage of criticism. But how many of his critics merely end up being "broken clocks" who tout their overperformance over Nate in one election, only to crash and burn on the next one?

How many of those confidently predicting Trump this time (or in 2016) did well predicting elections from 2018-2022? How many pet theories about "here's why you should ignore the polling averages this time and trust in [X]" consistently work?

This isn't Nate's first rodeo. He called both 2008 and 2012 as solidly for Obama when some people had elaborate theories on why the races were tossups or the polls were wrong. But it's not just a pro-Democratic bias: he called the 2014 midterms for Republicans even when some Democrats over-learnt the lesson of 2012 to believe that polls would be biased in their favour.

Contrary to what some people in this thread appear to believe, Nate doesn't just hedge everything near to 50:50 and claim he was always approximately right, or do nothing more than aggregate poll results. Nor does he claim to never have made a prediction that was wrong ex ante - he admits he was fundamentally wrong about Trump's 2016 primary victory.

Early in his career, Nate got an undeserved reputation as being almost clairvoyant, which he explicitly disavows. It's left him open to criticism whenever he doesn't reach this unrealistic target. But overall, I think he's consistently been a better political prognosticator than anyone else.

Selzer was also an oracle up until random-digit dialing in Iowa stopped working. The shitting on Nate is mean-spirited, but the reality is that polling methods haven't adjusted for Trump and the response bias issue with his supporters - not to mention the way the game has changed every election he's fought. The models were, I'm sure, statistically beautiful, but garbage in, garbage out.

Selzer deserves to be knocked down a peg (although I think "totally ignored", as some want, seems excessive). But that's mainly because she was incredibly far off on the final result. The equivalent for Nate would be if he were predicting a Harris +7 PV win or similar.

With hindsight, given that Trump has been underestimated three times in a row now, it seems reasonable to think that polls have likely systematically underestimated Trump. But before we got the results, it was only twice in a row, and so a lot more likely to be a coincidence. Nate has written extensively on how hard it is to predict the direction of a polling error, and many have been burnt assuming that one or two polling errors in a row necessarily predict a polling error in the same direction in the next election. And with Trump off the ballot in 2028, we'll be back to square one.

"Selzer was also an oracle up until random number dialing in Iowa stopped working." I certainly did...back in 2004, when she confidently predicted a Kerry win in Iowa.

Anyone looking at Selzer's methodology should be discounting her on that basis alone.

"...the reality is that polling methods haven't adjusted for Trump and the response bias issue with his supporters..."

This is emphatically not a Trump issue.

The polls did better this time than 2016 and 2020. At least, in general.

The controversy about polls starts in 2016. I think this is worth emphasizing, because there are still arguments floating around that the polls in 2016 were fine. And thus every subsequent argument about polls is really a proxy war over 2016. Because 8 years later we're still talking about Trump, we're still discussing how the polls over- or under-estimate Trump. We're still discussing how the polls do or don't measure white rural voters.

In 2016 the polls were entirely wrong. For months they predicted Hillary winning in a blowout, sometimes by 10+ points. (I remember sitting in class listening to a friend excitedly gossip about Texas flipping blue.) Toward election day itself the polls converged, but still comfortably for Hillary. And when Trump won, and the argument came around that the results were technically within the margin of error -- it missed entirely that whole states were modeled vastly incorrectly. The blue wall states of Pennsylvania, Wisconsin, and Michigan were not supposed to have gone red. Florida was supposed to have been close. States that had once been swing states were not even close. (To me, this was the smoking gun that Trump had a real chance in 2016: Iowa and Ohio were solidly predicted for Trump from the very beginning, and no one offered any introspection on what that implied as a general swing.)

2020 was not much better. Without getting into claims about fraud and states: Biden was also supposed to win by larger margins than many states in fact showed. There were still lots of specific misses (like Florida going hard red). And again a series of justifications that polling did just fine because, technically, everything was inside some margin of error.

2024 is actually much better. AtlasIntel and Polymarket both broadly predicted what happened. Rasmussen was fairly accurate (after taking a break in 2020, if I remember correctly). There's also a lot of slop. Selzer's reputation is destroyed (though people may forget all about it by 2028). The RCP national average was off by a few points. Ipsos and NPR and Morning Consult and the Times were all wrong. Well, maybe that's not much better than 2020 -- but mixed in with all the bad data were predictors who got everything exactly right.

So Nate Silver's problem is that his method is junk. He takes some averages and models them out. The problem is that a lot of the data he relies on is bad. A lot of the polling industry is still wrong. And unless Silver is willing to stake a lot of expertise on highly specific questions about counties and polls, he can't offer all that much insight.

So Nate Silver's problem is that his method is junk. He takes some averages and models them out. The problem is that a lot of the data he relies on is bad.

I’m more sympathetic to the pollsters than I am to Nate. The pollster’s job is to poll people using a reasonable methodology and report the data, not to make predictions. They can’t just arbitrarily add Trump +3 to their sample because they think they didn’t capture enough Trump voters in their samples.

Nate’s job is explicitly to build a model that predicts things. He can legitimately adjust for things like industry polling bias. He doesn’t because he’s bad at his job.

Don't the pollsters have some degree of freedom, because they sample based on demographics and not purely at random? Presumably they use this to perform adjustments. I also assume they poll each person's chance of voting and don't just make that number up.

They try, but fundamentally, IMO, it's a good idea to separate data collection and model building.

What's wrong with his method? How could he have improved it?

It relies on there not being a consistent bias (statistical, not political, although in this case it's probably both) in the inputs; i.e., the polls.

As I recall, Silver actually rates Atlas (who absolutely nailed every swing state) pretty highly, unlike (say) RCP -- but I don't think his pollster confidence correction really amounts to anything huge -- in the end he's basically aggregating ~all the polls (he does throw some out), and if the polls are wrong, so is his model.

Based on the polls, his model was probably correct that the election was roughly a coin toss -- but his aggregation ended up favouring Kamala by roughly 2-3 points (ED: vs actual results) in all the swing states, which is badly wrong, and not in fact inside the error margin of an aggregation of a bunch of polls at +/- 3%.

So his statewise model is probably pretty good -- I missed the flashy toolkit he had where you could choose results for some states and see the likely shifts in others this time around -- and I'll bet that if you plugged Atlas' polls alone into the model, it would have had Trump at like 80%. But he didn't do that; he relied on a bunch of polls that he himself noted showed obvious herding towards 50%, plus the (cope) hypothesis that the pollsters might have corrected their anti-Trump lean and be herding towards 50/50 because... they were too scared to predict a Kamala win or something?

I guess the ballsy thing for Silver to do would have been, upon noting the herding, to toss out all of the polls showing signs of this, and see what was left.

This would have (probably) had a negative impact on his subscriptions though -- so whether it was greed or his personal anti-Trump inclination, he apparently doesn't really live on The River anymore after all.

Nate's whole schtick is having a fixed model that he pre-commits to ahead of time. He wants to avoid as many judgment calls as he can. It gives the air of scientific objectivity. You can follow someone else who makes judgment calls as the race progresses, but will they be more accurate over time?

One way to rank forecasters would be to assign them an error score for each prediction miss, weighted by a superlinear factor of the odds miss (say, (100% - prediction)^2). So Nate would get a small penalty for winding up at 51% for Kamala before the election, compared to someone who guessed 90% for Kamala. Who would have the best score over multiple cycles?
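A quick sketch of that scoring scheme (essentially a Brier-style penalty; the forecasters and numbers below are hypothetical):

```python
# Penalize each forecaster by (1 - p)^2, where p is the probability
# they assigned to the outcome that actually occurred.
def penalty(p_assigned_to_actual: float) -> float:
    return (1.0 - p_assigned_to_actual) ** 2

# Trump won: a forecaster at 51% Harris put 49% on the actual outcome,
# while a pundit at 90% Harris put only 10% on it.
print(penalty(0.49))  # ~0.26  (small penalty for a hedged miss)
print(penalty(0.10))  # 0.81   (large penalty for a confident miss)
# Summing penalties over many cycles ranks forecasters. Confident correct
# calls score near 0, so always hedging to 50/50 (penalty 0.25 every time)
# is not a winning long-run strategy either.
```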

Sure, I respect his stance on the 'precommit vs tinkering' spectrum -- but you don't get to precommit to a model that turns out to be wrong and try to spin it as being right all along.

If he updates his model along the lines of throwing out polls showing evidence of tinkering, maybe he can be right next time -- but this time he was not.

It is within the margin of error because his model allows for a systemic correlated error across all polls. He just doesn't make any assumptions about the direction of that error. What some people are suggesting he do is assume the direction and size of this error based on very little evidence.

That's not what I'm talking about -- his inputs to the model are an aggregation of polls; he shows you them (for swing states) on the "Silver Bulletin Election Forecast" page.

Since each of these is an aggregation of 5-6 polls with a sampling error in the area of +/-3%, the statistical error on Silver's aggregation should be well less than +/- 1% -- the fact that they all ended up more like +3D means that these polls are bad, and if he can't make the correction (due to lack of information, or lack of willingness to call out political bias) he shouldn't be using them.
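A back-of-envelope check of that claim, assuming the ~±3% figure is a 95% margin of error and the polls are independent:

```python
# Independent sampling error shrinks like 1/sqrt(n) when averaging polls;
# a shared bias (herding, non-response) does not shrink at all.
import math

moe_single = 0.03              # assumed ±3% margin of error per poll (95%)
se_single = moe_single / 1.96  # implied standard error per poll
for n in (5, 6):
    se_avg = se_single / math.sqrt(n)
    print(f"{n} polls: SE of average ~ ±{se_avg:.2%}")
# 5 polls: SE of average ~ ±0.68%
# 6 polls: SE of average ~ ±0.62%
# A consistent ~3-point miss across every aggregate is therefore far
# outside what independent sampling error alone can explain.
```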

He even had a framework for this! There was a whole post where he identified the worst herders -- removing them from his model would have been trivial, but he didn't do it. That left model inputs biased roughly +3D -- which is the strongest argument that his 'coin flip' EC forecast was in fact a bad prediction: how could it be a good prediction with such inaccurate input data?

There's doing the thing and then there's talking about doing the thing.

One of the unfortunate results of social media, and the internet more generally, is that it allows people to show you what they're doing in another domain. This ranges from innocent cooking videos, to building things, all the way up to a statistical analyst (Silver) showing you what and how he is statistically analyzing. While this may let new people learn things they previously had limited information on, I believe it corrupts the thing itself (i.e. the cooking, the building, the number crunching).

Why? Because the focus of the "creator" turns to audience views, approval, satisfaction, etc., instead of doing the thing itself well. Cormac McCarthy once said he never hung out with writers because, when writers get together, they talk about writing - whereas any writer who was serious would just go and write!

The French Gambler is a great antagonist to Nate Silver. He didn't care about explaining himself; he didn't need to find a way to capture and entertain an audience. He focused on the thing - betting intelligently on the election - and he executed it well.

I've toyed around with starting my own blog (I won't share what it's about, because it's niche enough to be identifiable), but this is the number one reason why I haven't so far. I worry that the thing I'll be writing about will suffer because my focus will be writing-about-the-thing instead of doing-the-thing.

I think that's what's happened to Nate. He's become so focused on writing about how his models work, or how the polls are biased, that he's no longer spending the mental bandwidth necessary to build the best possible model. How could you, when you're setting daily print deadlines for yourself? Furthermore, a lot of the time, models have very weak explanatory and/or predictive power. How do you maintain an audience by writing "Eh, looks kind of... like nothing. Whatever."? Nope. You need 1,000 words that "examine the correlated intricacies and inherent cognitive blah blah blah blah."


I don't know what Nate Silver wants. Does he want to write about probability and statistics? Does he want to build models to predict events? Does he want to play professional poker? It doesn't matter, but he should probably choose one thing to do and another, different thing, to talk about doing. When you stack all of that on top of one another, you get a 50/50 chance of being either wrong or insufferable.

Does he want to write about probability and statistics? Does he want to build models to predict events? Does he want to play professional poker?

He wants to do all three. He's built a very successful Substack that earns plenty of money, and I think he has a moderately successful pro poker career too. I don't think he's missing out by not hyper-specializing.

And if anyone thinks they have better estimates of who'll win elections than Nate's models, feel free to bet against him. Either through prediction markets, or if you're willing to bet a large sum he'll probably be down to do a direct bet against you through an escrow service.

I'm pretty sure the French gambler could've just bet directly against Nate Silver too, and probably could've gotten better odds/smaller fees than going through Polymarket, for at least a portion of his bet.

The sad thing is that the “correlated errors” aren’t based on polling data or past results; they are just an arbitrary adjustment he adds to the model based on feels. Like he literally codes in “the sun belt states are likely to be correlated” etc.

After this election I am totally convinced Silver is a fraud. He simply can’t admit that there is a polling industry bias. His techniques make it impossible to account for this accurately because they are based on weighted polling averages, when really he needs to add a bias term, which he refuses to do.

To elaborate on that, if literally all the polls miss left, you can’t fix that with weighting. In reality, he would have needed to put all of the weight on AtlasIntel and Rasmussen and close to 0 on everything else. This shows that weighting is the wrong approach.

Edit: He does have “house effects”, but these adjust polls towards the weighted average, not towards historical results. So it doesn’t help.

Yeah that article where he explained the house effects modelling had me screaming at my monitor.

Like, you've noticed that pollsters are herding, and then you correct for house effects by... measuring how different their predictions are from the median pollster!? WTF Nate.

The bias term is the polling error. The reason he treats it as an error rather than a predictable bias is because he doesn't think it's predictable. Assuming it is predictable based on two elections where it was actually pretty different, even if it was in the same direction both times (something that had a 50% chance of happening even if it were completely random), risks overfitting the model.

To elaborate on that, if literally all the polls miss left, you can’t fix that with weighting. In reality, he would have needed to put all of the weight on AtlasIntel and Rasmussen and close to 0 on everything else. This shows that weighting is the wrong approach.

No, it shows there was a polling error. Let's say he follows your advice and the polling error is in favour of the Democrats in the next election. Then his model would have been really really inaccurate.

Of course, if there really is a consistent bias in favour of Republicans, then it would make it more accurate, but there isn't much data to make that assumption.

The reason he treats it as an error rather than a predictable bias is because he doesn't think it's predictable.

And this is why he is an idiot. The pollsters all understand at this point that it is inherently due to a predictable non-response bias. As a fallback, many used recalled vote to reweight the sample. But given the unusually high turnout for Dems in 2020, the recalled vote begs the question and acted as a sandbag for Trump.

Understanding this, unlike professional idiot Nate Silver, I made some heavy bets for Trump and won a good chunk of change.

How do you know you didn't just get lucky with a coin flip?

How do I know? I know because I know that my reasoning was solid and strongly supported by the available data.

How do you know that I know? You don’t. But I’m not really here to convince you. I’m here to make fun of Nate Silver for predicting nothing yet declaring victory.

Massive cope. His model got a few small things right; it got the big things people actually care about wrong. As usual he will hide behind the “things with <50% probability still happen!” defense, but this is just sophistry, as it was never 50/50.

He liked to tout how in 2016 he allowed the errors of different polls to be correlated, rather than treating them as purely independent experiments. But at the end of the day, all this does is increase the uncertainty and expand the error bars. If you keep doing this, allowing for more error here or there, your “prediction” tends towards throwing up its hands and saying “idk, it’s a coin flip” -- which is what happened to Nate, and why he had to shut off his model so early on election night. He did plenty of bitching about “herding” of polls while he himself was herding towards a coin flip. His big brag in 2016 was ultimately that he had herded towards 50/50 harder than anybody else.

In the prediction thread I called this for Trump with high confidence and said it was an “easy call” because there was ample evidence there for those with eyes to see. 2020 was an extremely close election, and by every metric (polls, fundamentals, vibes, registration, mail-in voting) Trump was better positioned this year than he was then. Nate can call everything a coin flip and cope all he wants, but his credibility is shot.

His big brag in 2016 was ultimately that he had herded towards 50/50 harder than anybody else.

He wasn't herding. "Trump can win this" was a contrarian viewpoint among people who see themselves as nonpartisan observers of public opinion.

His big brag in 2016 was ultimately that he had herded towards 50/50 harder than anybody else.

That isn't what herding means. Herding doesn't necessarily imply putting a finger on the scale towards a close race; it implies marshalling your results towards the average of other pollsters. After all, if everyone else is predicting a blow-out win and you predict a close race, you look not only stupid but exceptionally stupid if it is in fact a blow-out, whereas if you follow the crowd and predict a blow-out, you only look as bad as everyone else if it turns out close.

Nate was doing the opposite of herding, if anything, in 2016. If Hillary wins that election very easily, Nate (possibly unfairly) looks stupid for constantly warning people about the significant possibility of a Trump victory. He looks good from 2016 precisely because he didn't follow the crowd of other modellers and gave Trump better odds than anyone else.

A child, when introduced to the concept of probability, gives equal weight to the possible outcomes. Two choices means 50/50 (a coin flip). A pollster that isn't better than a coin flip is useless. You might as well ask a child. (I believe the children's election - 52/48 in favor of Harris - while wrong at +2 D, was more accurate than anything the left-leaning pollsters could muster.)

It's not useless if it's actually 50-50.

I get so triggered by this logic because it’s so wrong. Elections are not a football game. They are not actually a random variable. On November 4th the result was already set in stone, unless one of the candidates died or something. You could replay November 5th 1000 times and Trump would win 1000 times. It wasn’t 50/50. It can never be 50/50. It is always 100/0.

Epistemic uncertainty is a feature of the model and its inputs, not some inherent feature of the real world. There was enough data to conclude with relatively high certainty that Trump was on pace to win. Nate’s model didn’t pick up on this because it sucks. It has high epistemic uncertainty because it’s a bad model with bad inputs.

There was enough data to conclude with relatively high certainty that Trump was on pace to win. Nate’s model didn’t pick up on this because it sucks.

There have certainly been elections which were decided by tiny margins. They might well have been decided by the contrast in weather between the red and the blue parts of the state. Now, you can say that Nate's model sucks because it does not sufficiently predict the weather months in advance.

We can score predictors against each other. A predictor who gives you 50/50 on anything, like 'the sun will rise tomorrow' or 'Angela Merkel will be elected US president', will score rather poorly. ACX had a whole article on predictor scoring. If there is someone who outperforms Nate over sufficiently many elections, then we might listen to them instead. "I bet the farm on Trump, Biden, Trump and doubled my net worth each time" might be a good starting point, especially if their local election prediction results are as impressive.

Unfortunately, I have not encountered a lot of these people.

If it was actually 50-50, why did he take down his real time election result projection?

I don't know anything about that or what point you're making.

He admitted well before E-day that anything more than a very crude real-time projection needed far more resources than he had - he borderline told people to just go and look at the NYT needle, and probably only did his own thing because it wasn't known until the last minute whether the needle would be active.

Everyone on the right called it with high confidence this time, unlike 2016 and 2020. Everyone on the left seems to call it for their guy with high confidence every election, so Dem/left predictions carry no weight. Nate will maintain his (undeserved) credibility by still being more accurate than most on the left.

Everyone on the left seems to call it for their guy with high confidence every election

Plenty of right-wing figures do this too, and got the resultant egg on their faces in 2018, 2020 and 2022. It's hard to quantify, but there are definitely a lot of left-wing pessimists, and I don't think partisan boosterism is more prevalent on one side than the other.

In fact on twitter there were a lot of big right wing accounts predicting a Kamala win (legitimately or otherwise) shortly before the election.

And everyone who was right was a genius and everyone who was wrong was a fool (or a fraud) apparently.

This is not how probability works.

It's incredibly lazy to say that 'everyone on the right' or 'everyone on the left' called something, to make the specious point that your opponents' statements are not meaningful. You might as well be saying 'my ingroup is better and more intelligent than the outgroup'.

His big brag in 2016 was ultimately that he had herded towards 50/50 harder than anybody else.

Seriously. Of course he is going to claim credit, but he shouldn’t get credit for having his model hedge more than others. But apparently I don’t understand statistics, because I think he shouldn’t get credit for hedging more.

Why shouldn't he get credit for hedging his model more than others?

Because absolutely no last-minute polls existed that justified his sudden shift the day before Election Day 2016. Nate knew something was wrong with the polling and put his thumb on the scale to make Trump look better than his model said.

He should get credit for being well-calibrated. If he is always right with his confident predictions and mostly right with his hedged predictions, then he is doing the right thing.

It did incentivize him to hedge more and more, until we got to this ridiculous point where he tries to take credit for a 50/50 prediction.

The problem is you can't evaluate how well the model did based on just the probability of winning, except on the correct-or-not question, and he was not correct in 2016. Maybe whatever he is doing in this post is good, but it sure didn't translate into the final prediction, so he doesn't get credit for that either.

Silver also noticed what I consider the defining difference between Democrats and Republicans: tolerance of dissent.

And there was an asymmetry. Republicans are generally happy when you agree with them partway or half the time. Admittedly, the sorts of Republicans who encounter our work are not a representative sample, probably being on the moderate side — though you can find plenty of Trump supporters in the Silver Bulletin comments section.

Democrats, however — and here, I’m not referring so much to Silver Bulletin subscribers but in the broader universe online — often get angry with you when you only halfway agree with them. And I really think this difference in personality profiles tells you a little something about why Trump won: Trump was happy to take on all comers, whereas with Democrats, disagreement on any hot-button topic (say, COVID school closures or Biden’s age) will have you cast out as a heretic. That’s not a good way to build a majority, and now Democrats no longer have one.

I think this explains the underrepresentation of Democrats on this forum dedicated mostly to American politics. A Republican encounters an opinion he disagrees with and is okay with its existence; a Democrat encounters such an opinion and seeks a way to prevent its dissemination. Be it banning a subreddit dedicated to a sitting pro-police POTUS on the grounds it encouraged violence against cops (while leaving pro-BLM and leftist anti-cop subreddits alone), banning all those who post on dissident subreddits from posting on major subreddits controlled by left-leaning moderators, or denouncing X for allowing non-leftist interpretations of facts to be posted, etc.

Edit: And it was the left-leaning subreddit ShitRedditSays which was the first to ban people for ideological disagreement, the first subreddit of "pure ideology".

The scant few leftists left on this forum are no exception: still clinging to the "it is a private company" defense for common carriers discriminating against non-leftists, even as "private companies" are not only legally forced to serve and employ people they would rather not, if those people belong to the right demographics, but receive direct orders from the ruling regime about which opinions shouldn't be allowed to be read by consenting readers.

There is a great irony in a major American centre-left talking point being that Republicans are "burning books" when parents decide which books are appropriate for elementary school libraries, while the government forcing common carriers to prevent one consenting adult from sending another consenting adult legal information somehow isn't contrary to the 1st Amendment.

I think the difference in requirements for agreement is due to the position of each group in the American dominance hierarchy. Democrats are still pretty dominant in most spheres, and therefore they don't need to tolerate a situation in which they are hearing wrong-think. They don't need allies who are imperfect, because they control most of the consensus-building organs completely. Republicans and conservatives need those imperfect allies because they're on the bottom of that hierarchy. They don't own social media wholesale; in fact, there's only one social media platform out of the 3-4 big ones that isn't actively suppressing conservatives. They therefore cannot simply move on if they hear something they don't like. They'd have to cede the entire thing.

I do wonder if that's part of the shift in the youth vote. Youth tend to be somewhat rebellious. Yet, in almost every online forum, complete ideological purity is demanded, with increasing levels of Obvious Nonsense being declared doctrine, leading to utter hysteria. Any young person who observes that even one item of Obvious Nonsense is, in fact, Obvious Nonsense either learns extremely quickly to somehow suppress their intellect... or they promptly get banned from half the internet.

They say that social death is worse than real death. The internet basically is social life in [Current Year]. Thus, I'd imagine that seeing that even minor observations that Obvious Nonsense is, in fact, Obvious Nonsense gets one banned from half the internet (which is basically akin to social death) is significantly radicalizing, in one way or the other.

I don't think the underrepresentation of leftists on this forum is a mystery that needs to be explained. Leftists don't come here for the same reason that you (probably) don't hang out in tankie subreddits as a commentator. Because the content is not catered to them and they don't feel welcome amidst a flood of dissent.

People go where they're wanted and when it makes them feel good to go there.

Your explanation could perhaps (I could see it) point to why the place initially lost leftists, but once that starts happening it becomes a positive feedback loop that requires no other input.

Edit: apologies if this goes against the rules of the forum, but I absolutely have to point it out in this particular case, because it's deeply, ironically humorous given what the comment says: I have been preemptively blocked by this commenter.

I think this is because, at the moment, the left dominates a lot of the idea institutions like universities and the media. If it were the other way around, I'm sure the right would be intolerant of different ideas and the left would be more accepting. There are people in both groups who are accepting of a free exchange of ideas, but I think the majority, or at least the people who actually end up in control of these movements, hold the position of free speech when I'm weak, but controlled speech when I'm strong.

This makes me think of someone stuck on a very sticky wicket, trying to justify an argument that was fundamentally wrong. Of course there are facets of any sophisticated but wrong argument that are right. You can highlight the correct facets and minimize the wrong facets. You can pre-prepare reasons for why you might be wrong to conserve credibility.

Nate has the rhetorical skills to pull it off. But it still feels very slimy. The 90-IQ Twitter pleb mocking him with '60,000 simulations and all you conclude is that it's a coin flip?' may not be that numerically literate. But he has hit on a certain kind of wisdom. The election wasn't a 50/50 or a dice-roll. It was one way or another. With superior knowledge you could've called it in Trump's favour. Maybe only Bezos and various Lords of the Algorithms, French Gamblers and Masters of Unseen Powers knew or suspected - but there was knowledge to be had.

I prefer prediction models that make money before the outcome is decided, not ones that have to be justified retroactively. Nate wasn't heralding before the election that this 6% map was the modal outcome, so it wasn't really useful information.

The election wasn't a 50/50 or a dice-roll. It was one way or another.

Before it happened, it wasn't. Even if you had universal Legilimency and knew the political views of every voter as well as the voters knew themselves, the result could differ from the Legilimency-poll because of differential turnout (which can be affected by unpollable things like the weather on polling day) or late swing (some voters actually change their minds in the 3-4 days between the field work being done for the eve-of-poll polls and the actual election).

If the exit polls are correct, the Brexit referendum was decided by people who made their mind up day-of.

It was one way or another. With superior knowledge you could've called it in Trump's favour. Maybe only Bezos and various Lords of the Algorithms, French Gamblers and Masters of Unseen Powers knew or suspected - but there was knowledge to be had.

That knowledge wasn't available to the model. A French gambler paying a hundred grand for a private poll specifically does so in order to possess information that others do not.

"My model produces unhelpful outputs because it has bad inputs" is still only an excuse at the end of the day. Nate is a pretty influential guy, famous, respected by many. Why doesn't he have six figures to spend on his own poll and make his model better? Do none of his rich friends trust him enough to invest in him?

"My model produces unhelpful outputs because it has bad inputs" is still only an excuse at the end of the day.

It's not a matter of the model having bad inputs. The model had all the publicly available inputs.

Why doesn't he have six figures to spend on his own poll and make his model better? Do none of his rich friends trust him enough to invest in him?

Anyone who would pay for that would want to keep the results private in order to better leverage them in some fashion. Otherwise, why are they paying for better polling just to give it away to everyone? What return do they get out of investing in a better prediction? The intrinsic value of better public polling?

I'd also note that even Polymarket was at 50/50 for a while, and then 60/40 in the days before the election.

Otherwise, why are they paying for better polling just to give it away to everyone? What return do they get out of investing in a better prediction? The intrinsic value of better public polling?

While I basically agree that Nate Silver did as good a job as possible, this is a real problem. Garbage in, garbage out. He built a model that relied on free public information, and the quality of that information has degraded over time. I think it's entirely possible that his "business model" (or whatever you want to call it) is no longer viable. Once upon a time there wasn't an Internet, and then there wasn't enough data on the Internet, but eventually we entered the age of Big Data. Now maybe it's ending.

One of the reasons we used to have good polls is that we had well-funded mainstream media sources that were interested in accurately reporting the state of reality. But funding went down, the number of sources doing ground-level reporting shrank, they've become more cautious about taking risks, and most importantly, many of them have stopped caring about reporting reality, and are more interested in shaping reality toward their preferred political pole, or almost worse, they just say whatever the current party line is.

Do you have a substantive disagreement with his argument?

If Nate put his money where his mouth was, he'd have lost $100,000. He talks the talk (after it's decided) but doesn't walk the walk when it actually means anything.

https://x.com/NateSilver538/status/1842211340720504895

Did the other guy send the contract?

He says so and that Nate later refused to sign.

Did the other guy provide proof that he sent the contract?

A Twitter exchange is in fact a form of contract -- so regardless of whether the guy sent Nate a piece of paper saying "I will pay Nate Silver 100K if Florida goes less than R +8, otherwise he will pay me", I think the terms of the bet were pretty clear.

I certainly wouldn't require Nate to pay up based on the Twitter exchange, but that would definitely be the Honourable thing to do -- he can probably afford it based on what he's charging on Substack alone, and it would be great degenerate-gambler PR for him to do so.

A Twitter exchange is in fact a form of contract -- so regardless of whether the guy sent Nate a piece of paper saying "I will pay Nate Silver 100K if Florida goes less than R +8, otherwise he will pay me", I think the terms of the bet were pretty clear.

...?

If the Twitter exchange is in fact a form of contract, then so is the stipulation of said Twitter exchange for the requisite next step- which includes Nate's condition that the other person send a formal contract via lawyer. If the guy sends a piece of paper saying what you say, it would be failing to meet the conditions of the terms of the Twitter-contract.

If the Twitter exchange is in fact a form of contract, then so is the stipulation of said Twitter exchange for the requisite next step- which includes Nate's condition that the other person send a formal contract via lawyer.

Just so -- that's why I wouldn't fault Nate for not paying up. But the whole point of honour culture is that one feels the need to go above and beyond what's legally required, even when it's to one's own detriment. It's not like the bet was unclear or something -- the sporting thing to do would be to chuckle and write a cheque.


This is tangential to my main point. Nate's beliefs about the world, if expressed, would've cost him a lot of money. There are probably large numbers of people who trusted Nate's modelling and lost money thinking 'oh well if Nate says it's 50/50 then I can profit from Polymarket being 70/30'.

I think Nate is trying to claw back lost prestige. It reeks of cope to say 'my model exactly predicted the most likely election map' when his model gave that map a 6% chance of happening. He struggles even to say what he wanted to happen on election day from the perspective of 'what makes my model look good'. If you're not producing a useful, falsifiable prediction, then what is the point?

The important thing is getting it right. I want models that give alpha. If I'm going to pay for a Substack, I'd rather pay for someone who got it right - Dominic Cummings, who said confidently that Kamala would lose, based on his special polling methods. Back in 2023 he actually predicted a Trump victory over Kamala, 311-227. He foresaw that Biden would likely become obviously senile and that the Dems needed a better candidate.

https://x.com/Dominic2306/status/1854275006064476410

https://x.com/Dominic2306/status/1854505847440715938

Let's say he had bet $100,000 at 50-50 odds that he wouldn't roll a six on a die. Then he rolls a six. Does that prove something about his beliefs? It's only profitable in expectation. There is no guarantee of making money.

To take the election example, 50-50 means losing half the time. It's only profitable because, when you do win, you win more than you would have otherwise lost.
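Putting numbers on the die example (the stake is hypothetical):

```python
# Expected value of betting $100,000 at even odds against rolling a six.
stake = 100_000
p_win = 5 / 6                       # you win unless the die shows a six
ev = p_win * stake - (1 - p_win) * stake
print(f"${ev:,.0f}")                # $66,667 in expectation
# The bet is clearly +EV, yet you still lose the full stake 1 time in 6.
# One observed loss says almost nothing about whether the odds were good.
```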

If you're not producing a useful, falsifiable prediction, then what is the point?

That is just not possible to get from a single sample. You need to look at his track record over many elections.

This is tangential to my main point.

If that is so, I accept this correction in good faith, and I do believe this elaboration of the main argument is substantially stronger. I am not attempting to change your opinion on Nate Silver's accuracy.

I am still curious whether there was ever evidence that Silver's bet was accepted, both for its own sake in addressing the question and for updating priors. But an argument of 'he would have lost money if he had made the bet' is a substantially different argument from 'he refused to respect his own challenge', and if you did not mean the latter interpretation, I am grateful for the clarification.

There are probably large numbers of people who trusted Nate's modelling and lost money thinking 'oh well if Nate says it's 50/50 then I can profit from Polymarket being 70/30'.

how does it work?

To be fair, in the absence of Nate denying it, I don't think he necessarily needs to provide proof.

That would not be fair. In the absence of Nate confirming that he refused to sign a contract, a claim of having sent the contract is just a claim absent further evidence.

My curiosity / eyebrow is raised because Ranger is raising this bet as a character failure on the part of Nate Silver, but the proffered evidence is of the conditional offer of a bet, not that the bet was accepted as offered and that Silver then refused to sign it.

This leads to a couple of issues for which more information than has been provided is needed.

-Did the other person actually accept the bet, or are they just claiming so with post-election hindsight? (i.e. is he talking the talk after the election is decided?)

-Did the person try to modify the terms of the bet offered that would render the offer void? (i.e. did he refuse to walk the walk when it mattered?)

-Did the person fail to meet the conditions of the offer of bet? (i.e. did they not have their lawyer do it, but tried to make their own contract- thus invoking the payment risk issue raised?)

I've no particular strong feeling on Nate Silver one way or another, but if someone wants to make a character failure accusation with linked evidence I'd generally prefer the links to be evidence of a character failure.

That would not be fair. In the absence of Nate confirming that he refused to sign a contract, a claim of having sent the contract is just a claim absent further evidence.

I disagree. All Nate has to do is say "no you didn't, you fucking liar", and if Keith can't provide evidence of sending him the contract, he's the one who's going to suffer reputational damage. On the other hand, if Nate says that but Keith promptly provides evidence, this will look even worse for Nate. Since Nate knows for a fact whether or not he received the contract, his decision on how to react to the claim tells us something about the truth value of the claim that he was sent the contract.

There are also scenarios that would explain a lack of reaction. Maybe after the spat Nate blocked Keith, and has no knowledge that he's now going around claiming that he sent the contract. So while the lack of reaction doesn't outright prove the contract was sent, I maintain that the potential reputational damage that can result from the claim is a weak form of evidence in itself, and thus it is the demand to provide hard evidence that's unfair.


Nate wasn't heralding before the election that this 6% map was the modal outcome, so it wasn't really useful information.

I don't have links or citations, and most of his commentary was paywalled so I only saw the public-facing free posts, but as far as I remember, he very much made the point that the '50-50' odds didn't mean the final result would be a 50-50 split. His article on polls 'herding' pointed out that polls have a standard margin of error, and thanks to herding it was impossible to say whether they would fall one way (polls systematically undercount Kamala's support, and she sweeps the 7 swing states) or the other (polls undercount Trump and he sweeps the 7 swing states). However, by far the most likely outcome was one or the other. I don't think he specifically called out the modal outcome (Trump wins 312 EC votes) as such, but it was clear to me going in that the final result of the night would be a) very close in the popular vote and b) probably a blowout in the EC.

I was liveblogging the Election Night with my high school 'Government & Economics' class, and I sent them Silver's article on herding for the class to read beforehand, with this commentary:

There's a statistical concept called 'herding' that seems to be affecting many (most?) swing-state polls. Pollsters don't want to be wrong, or at least not more wrong than the rest of the field, so if their poll shows results that are different than the average, they stick 'em in a filing cabinet somewhere and don't publish them. The problem is, we don't know what those unpublished polls say, so the state of the race may be considerably different than the current forecasts -- either more in Kamala Harris' favor, or Trump's. It's very unlikely for this election to be a blowout in the popular vote (though a small swing in popular vote could result in a major electoral college win for one candidate) but be warned that the Presidential results may be quite a bit different than your current expectations.

I followed Silver's model closely, as well as Polymarket, and I was not surprised by the Election Night results. I understood that there was a lot of uncertainty, and that 'garbage in, garbage out' in terms of polls herding (and in terms of that Selzer poll), and I found myself highly impressed at Silver's analysis of the race.

And here was my commentary at the end of Election Night:

the polls were absolutely right about how close this election was. Trump's results tonight are very much within the 'expected error' for most polls -- he isn't winning by 5% or 10% nationwide. The polls indicated that Kamala was favored to win the popular vote by about 1%, but with 'error bars' of +/- 3% or so. Trump is currently expected to win the national popular vote by about 1%, which is a difference of 2%. That small amount is enough to push a bunch of swing states into his win column in the Electoral Vote count, but I want to emphasize that even though he's favored to win, and he almost certainly will win a huge majority in the Electoral College, this was still a nail-biter of an election.

this was still a nail-biter of an election.

It wasn't as close as 2020 in terms of the number of votes, but it was still a margin of ~300k in the key swing states between a Trump win and a Harris victory.

How can news sites call it so early if it's such a small margin at the end?

The number of votes you need to form a representative sample is smaller than a lot of people think. Once you have the first few thousand votes counted in any given county, you have a very good sense of how the rest of the vote in that county will be distributed, with a relatively small margin of error. Based on that, once enough counties start reporting results, you can often quickly reach a point in the more lopsided states where, regardless of the distribution of votes in the remaining counties, the result is already effectively decided. And in closer states like the swing states, once all the areas are reporting and you have a large enough sample of results, even what seem like relatively small margins (like 51% to 48%) can give you the confidence to call a final result, on the more-or-less ironclad assumption that the rest of the votes to be counted will have a very similar distribution.

It's really only the very, very close races that might take more than a day, or multiple days, to arrive at a result.
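A toy illustration (with hypothetical counts) of the catch-up arithmetic behind an early call:

```python
# Once enough ballots are counted, the trailing candidate needs an
# implausible share of the remaining votes just to tie.
def share_of_remaining_needed_to_tie(counted, leader_share, remaining):
    deficit = counted * leader_share - counted * (1 - leader_share)
    return 0.5 + deficit / (2 * remaining)

# 2,000,000 ballots counted at 51.5%-48.5%, with 500,000 still out:
print(f"{share_of_remaining_needed_to_tie(2_000_000, 0.515, 500_000):.1%}")
# -> 56.0%: if the outstanding areas have been voting like the counted
# ones, a ~7-point overperformance in the remainder is effectively ruled out.
```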

Pollsters don't want to be wrong, or at least not more wrong than the rest of the field, so if their poll shows results that are different than the average, they stick 'em in a filing cabinet somewhere and don't publish them.

We should require pre-registration of polls. Have some of the major news networks say they won't publish a poll unless it was registered in advance, with a clear notion of when it would be conducted and when the results would be reported.