
Culture War Roundup for the week of October 28, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Both Emerson and AtlasIntel are on watch as some of the worst herders this year. Emerson is especially bad. I'd trust Selzer over these guys just based on reputation beforehand, but especially after learning they're cooking the statistical books.

None of the polls he's including are unweighted. When pollsters get results, they weight them according to their predicted turnout, which is heavily biased towards the last similar election and which will necessarily reduce variance.

His assumptions are just wrong: no, the results won't follow a binomial distribution even in theory, and almost no one uses a random phone dialer to randomly select voters. Pollsters have models and try to find the voters who fit the proxies in those models through panels, surveys, mixed-mode contacts, etc.
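To put a number on the variance-reduction point, here's a toy simulation (the group shares and support rates are invented, and this is nowhere near any pollster's actual methodology) showing how weighting a sample to fixed turnout targets shrinks the spread of repeated poll estimates compared to the raw sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy electorate: two groups with different candidate preferences.
# Group shares and support rates are invented for illustration only.
turnout_share = {"group_a": 0.6, "group_b": 0.4}   # the pollster's turnout model
support       = {"group_a": 0.30, "group_b": 0.75} # P(respondent backs candidate X)

def run_poll(n=800, weight=True):
    # Draw respondents; the raw sample's group mix varies by chance.
    groups = rng.choice(list(turnout_share), size=n,
                        p=list(turnout_share.values()))
    votes = rng.random(n) < np.array([support[g] for g in groups])
    if not weight:
        return votes.mean()
    # Weight respondents so group totals match the turnout model exactly.
    return sum(target * votes[groups == g].mean()
               for g, target in turnout_share.items())

raw      = [run_poll(weight=False) for _ in range(20000)]
weighted = [run_poll(weight=True)  for _ in range(20000)]
print("std of raw estimates:     ", round(float(np.std(raw)), 4))
print("std of weighted estimates:", round(float(np.std(weighted)), 4))
```

The weighted estimates cluster noticeably tighter, because weighting strips out the run-to-run noise in sample composition. A poll average built on the assumption of unweighted random samples will therefore expect more spread than the data can produce.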

His article isn't about "herding" as he defines it with his chosen metrics; it's a complaint by an alleged model maker that the people whose work he's relying on (stealing) don't produce more variance, which would make his model better. The article is really about pollsters being cowards, which makes Silver's almost guaranteed final prediction of 50:50 all the more funny. Nate's a coward only because the people whose work he steals are cowards!

This isn't herding, which Silver implicitly admits when he refers to 538 penalizing late polls that move towards already-released polls and calls that herding. NYT/Siena, Washington Post, ABC, the "gold standards," etc., are herding when they release a poll in September showing Harris+5 and then just so happen to conveniently land on a near-identical final prediction to Emerson, which has been claiming a close race all along. Those pollsters aren't honest non-manipulators who just happened to get Harris+1 or a tie as their final result while all the dirty manipulators converged on the same numbers.

It's honestly perplexing to me why Silver continues to be held in such high esteem in these spaces, but it's easy money every election cycle.

cooking the statistical books

Pollsters "cook the statistical books" when they weight and predict turnout in the same sense that Nate Silver "cooks the statistical books" when he weights polls in his glorified poll average.

Selzer isn't a coward for releasing a Harris+3 for Iowa, but whatever the reason for it (and I have my speculations), she'll get a double-digit miss as a result.

Polls have long weighted their results, but there are ways to do it well and ways to do it poorly. The goal is to calibrate demographic metrics against likely-voter data to create a facsimile of a perfectly representative sample. Getting it right makes for more accurate polls. Getting it wrong in an innocent way can lead to mix-ups like 2016, where there was insufficient weighting by education, especially in swing states.

But with degrees of freedom comes the ability to misuse them: pollsters can coax their models to produce results they think are "better", or they can simply not release results that look strange to them. No matter what, poll results should still show something resembling a normal distribution. Pollsters should build their model first, then enter their results and see what pops out. The fact that they're not getting a normal distribution is evidence that they're looking at the results, then tweaking the model and rerunning it afterwards, effectively mangling the results into whatever they desire. The fact that this is very prominent at a few polling houses and not others should be an indication that something is wrong.
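To make the "results should still look roughly normal" point concrete, here's a rough sketch with made-up numbers (nothing here comes from Silver's article): under honest sampling error around a tied race, only about a third of polls should land within a point of a tie, and a pollster who quietly re-runs or shelves outliers produces a tell-tale pile-up instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose the "true" race is tied and each poll's margin carries roughly
# 2.5 points of standard error from sampling alone (about what an n~1500
# two-way poll would give). These are illustrative numbers, not Silver's.
sigma = 2.5
n_polls = 50

honest = rng.normal(loc=0.0, scale=sigma, size=n_polls)

# "Herded" pollster: quietly re-runs or shelves anything outside +/-1.5.
herded = np.clip(rng.normal(0.0, sigma, n_polls), -1.5, 1.5)

def share_within(margins, band=1.0):
    return np.mean(np.abs(margins) <= band)

print("honest polls within +/-1 pt:", share_within(honest))
print("herded polls within +/-1 pt:", share_within(herded))
# Under honest sampling only ~30% of margins should fall inside +/-1 point;
# a much higher observed share is the statistical tell for herding.
```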

Selzer isn't a coward for releasing a Harris+3 for Iowa, but whatever the reason for it (and I have my speculations), she'll get a double-digit miss as a result.

I'd gladly take an even-money bet that Selzer is off by less than 10 points.

The point I was trying to make is that because the polls are all weighted and because the underlying data are not random samples, you would not expect to see the variance Silver is using as his threshold. The claim of a binomial distribution around ±6 depends on the assumptions Silver is making, but those assumptions are wrong. Polls are done individually and tailored individually. When Silver writes articles like this, he comes off as someone who would be genuinely lost if he ever attempted to conduct a poll, yet he makes a bunch of statements about polling in general using assumptions that are just wrong for polling.
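For reference, the simple-random-sampling arithmetic Silver leans on goes roughly like this (n and p here are numbers I picked, not his), and a ±6-ish window only falls out of it if you ignore how weighting and design effects change the variance:

```python
import math

# Back-of-the-envelope margin math under simple random sampling.
# n and p are illustrative choices, not Silver's inputs.
n, p = 800, 0.5
se_share  = math.sqrt(p * (1 - p) / n)  # SE of one candidate's vote share
se_margin = 2 * se_share                # margin = share_A - share_B
print(f"SE of the margin:  {100 * se_margin:.1f} points")    # about 3.5
print(f"95% interval:    +/-{196 * se_margin:.1f} points")   # about +/-6.9

# Real polls aren't simple random samples: unequal weights inflate variance
# by a design effect (deff), while calibrating to informative turnout and
# demographic targets can pull the effective variance back down.
deff = 1.4  # hypothetical design effect, purely for illustration
print(f"with deff = {deff}: +/-{196 * se_margin * math.sqrt(deff):.1f} points")
```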

Polling with predictive capacity is extraordinarily difficult. The end prediction is what matters, not the particulars of the models pollsters use. The models are a tool; they're not the entire prediction. Of course pollsters adjust their models to better fit what they believe is the correct result, and they should. I have far more respect for individual pollsters than for aggregators, whom I consider to be just poll readers.

If I were Selzer, I would have gone back and tried again, because her poll is ridiculous for a variety of reasons visible in the released information about the demographics she reached and what they care about. I understand aggregators don't like this, and they don't like it when variance shrinks, but the pollsters are the ones making the predictions, more so than the aggregators stealing their work.

I'd gladly take an even-money bet that Selzer is off by less than 10 points.

Well, I didn't expect a response like this and didn't check back here. I am already over budget on political bets this season, but I likely would have taken this, depending on the amount and the hassle of setting it up. Bravo, though! I love to see people willing to put money down on their predictions.

Selzer is currently off by a whopping 16.5%. I should go back and look at my model for Iowa and why I underestimated Trump by 3 points. Selzer sold her credibility in an attempt to motivate downtrodden Democrats into thinking it was still possible. No serious person could have looked at some of the results from that poll (e.g., abortion as the most important topic, among others) and taken it at face value; anyone who did should be discounted, if not entirely ignored, going forward.

For what it's worth, I'll cop to the fact that I would have lost the bet. Selzer had a pretty stellar record before, but this was a massive, high-profile mistake that she'll likely never recover from, at least not fully.

That’s not true according to Silver. https://www.natesilver.net/p/theres-more-herding-in-swing-state

Yes, Emerson herds, but he shows AtlasIntel as one of the higher-quality ones. Also, they've been very accurate in the past.

The list in that article isn't a list of all pollsters; it's just the ones he's accusing of herding. AtlasIntel is borderline. It only looks OK relative to Emerson, where the evidence is essentially incontrovertible.

That’s not how Silver is framing it. He states:

By contrast, the most highly-rated polling firms like the Washington Post show much less evidence of herding. YouGov has actually had fewer close polls than you’d expect, although that’s partly because they’ve tended to be one of Harris’s best pollsters, so their surveys often gravitate toward numbers like Harris +3 rather than showing a tie.

Note that WaPo has the same odds of herding as AtlasIntel. So if Silver thinks WaPo isn't herding, then he thinks AtlasIntel isn't either.

Alright, yeah I've reread it and you're correct.

are on watch as some of the worst herders

I assume you're referring to the chart titled 'Which pollsters are the biggest herders?'. Unless I'm reading it wrong, AtlasIntel appears to be doing little or no herding, as their 'Actual' total of small-margin polls matches the 'Theory' total of small-margin polls. The smaller the fraction in the 'Odds against...' column, the more herding they're doing, right? By my reading, Redfield, Emerson and InsiderAdvantage are herding the most, while AtlasIntel, WaPo and Rasmussen are doing the least.
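If I'm guessing right about what that column represents, the calculation would be something along these lines (all the numbers here are placeholders, not figures from the actual table):

```python
import math

# A guess at the kind of calculation behind an "odds against" column:
# the chance of seeing at least `actual` close polls if each margin were an
# independent draw with the theoretical sampling error.
sigma_theory = 2.5   # theoretical SE of one poll's margin, in points
band = 1.0           # a "close" poll: margin within +/-1 point
n_polls, actual = 20, 12

# P(|margin| <= band) for a normal(0, sigma) margin.
p_close = math.erf(band / (sigma_theory * math.sqrt(2)))

# P(at least `actual` of n_polls land that close purely by chance).
p_at_least = sum(math.comb(n_polls, k) * p_close**k * (1 - p_close)**(n_polls - k)
                 for k in range(actual, n_polls + 1))

print(f"P(one poll is close) under theory: {p_close:.2f}")
print(f"Odds against that many close polls: about 1 in {1 / p_at_least:,.0f}")
```

On that reading, a long-odds entry means the pollster's pile of near-tied results is very unlikely to have happened by chance, which is the herding accusation.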

You are right and the other poster is wrong. Read the article and not what Ben Garrison stated.

Like I said to the other guy, that chart does not include all pollsters; it just includes the ones that show the worst signs of herding. AtlasIntel is borderline and only looks OK next to egregious examples like Redfield & Wilton.

And you seem to have missed the context where Silver says WaPo is one of the high-quality non-herders. Silver gives them the same odds as AtlasIntel. So Silver, who published the article, clearly disagrees with your assertion.