Culture War Roundup for the week of October 14, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I know very little about prediction markets, so can someone explain to me how likely it is that Trump's surge on, for example, Polymarket is more the result of speculative behavior than of people rationally trying to predict the winner of the election? I don't really see any reason to currently view the race as anything other than pretty close to 50/50. People might say, well, if I believe that, then why not try to make some money on it? And maybe that's fair. But that does not necessarily mean that the betting odds on Polymarket are actually an accurate guide to the likely election outcome.

The issue isn't about how the bet resolves; speculation is about scalping price movements at the margins.

If a true-believer wants to move the markets, they can, by buying a bunch of shares at a certain price (or prices). If one is certain Trump will win and wants to make it look like there's some support, buying contracts at $0.55 (or thereabouts) is pretty rational, because if he wins the investor more or less doubles their money.

As for me, a person who believes the actual race is 50/50 and that $0.50 is the right price for both contracts, I can take advantage of these swings to scalp a few bucks with limit orders. Over time I expect the prices to settle back toward the 'real price', which they have, so I just have to be patient and I can win a few bucks here and there from the volatility. The nice thing about markets (at least well-designed and functional ones) is that they will always drift back to the correct price even if they fluctuate in strange ways.
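To make the scalping idea concrete, here's a rough sketch in Python. The $0.55 fill, the $0.50 rebuy, and the 1,000-contract size are made-up numbers for illustration, not figures from any actual market:

```python
# Sketch of the scalping strategy described above: rest limit orders around
# the price you believe is fair, and let partisan-driven volatility fill them.

def scalp_profit(buy_price, sell_price, contracts):
    """Profit from one buy-low/sell-high round trip on a binary contract."""
    return (sell_price - buy_price) * contracts

# A surge of true-believer buying fills our sell order at $0.55; the price
# later drifts back toward the perceived fair value and we rebuy at $0.50.
profit = scalp_profit(buy_price=0.50, sell_price=0.55, contracts=1000)
print(round(profit, 2))  # 50.0 -- $50 per round trip
```

Each round trip is small, but the strategy repeats every time a swing moves the price away from what the scalper considers fair value.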

My experience with these markets is that you have people with positions (the true-believers) and you have speculators. The speculators love to see market moves because they can scalp profits. The true-believers are taking out a bet.

It's all speculation. Unless you have insider info or some way to arb it, there is nothing rational about it.

how likely it is that Trump's surge on for example Polymarket is the result more of speculative behavior than of people rationally trying to predict the winner of the election?

Why not both? We know that polls are at least frequently inaccurate with Trump on the ballot; it is rational behavior to speculate endlessly.

Polling inaccuracy is caused by declining response rates, which result in oversampling of hyper-engaged partisans and can't be controlled for, not by whether or not Trump is on the ballot.

Other countries have let people bet on politics for a long time and no, they’re far from always accurate. Right before Brexit, the betting market hugely favored remaining in the EU for example.

Prediction markets are well-calibrated when there is sufficient liquidity.

A 20% chance is not a 0% chance. It will happen 1 in 5 times. When it does happen, the prediction market is not "wrong".

This is what Nate Silver had to say over and over again in the wake of 2016, when he made a "wrong" prediction by giving Trump only a 30% chance of winning. Most people simply fail to understand how prediction works.

Yes, but for a non-repeatable event it’s also very easy for a pollster to say they were right. After all, even someone who predicts a 95% likelihood of A winning can say “well, the 5% likelihood of B winning happened to be the outcome in this scenario, my forecast was in fact entirely correct” and this is completely unfalsifiable.

They were all wrong, but Nate was less wrong, so that makes him the winner in this regard. His model was more accurate.

No, Nate isn't "less wrong" because 95% chance of winning and 70% chance of winning don't actually have a meaning in this context. How could you even make such a judgement? How do you know that if we had access to 100 different universes with the exact same 2016 race, Hillary doesn't win 95% of them?

It's absurd to claim that Nate Silver's model was more accurate because it gave marginally better odds of a Trump victory. And that's not even getting into the fact that absolutely no new polls were available to Nate showing a tightening race; this was purely Nate fudging the numbers because he knew something was off (something he used to do constantly with the sports prediction spreadsheets he made his bones on).

No, Nate isn't "less wrong" because 95% chance of winning and 70% chance of winning don't actually have a meaning in this context. How could you even make such a judgement? How do you know that if we had access to 100 different universes with the exact same 2016 race, Hillary doesn't win 95% of them?

I disagree. By this logic, no polling can be considered useful, and there is no such thing as skill in polling. Obviously one cannot redo the election hundreds of times or split off many universes.

Nate wasn't doing polling, he was placing odds on election outcomes. Why you think "that logic" has anything to do with the potential accuracy of any given poll is beyond me.

Yes, but for a non-repeatable event it’s also very easy for a pollster to say they were right.

I respect the thrust of this argument in general, but Nate Silver specifically came the closest to predicting Trump's victory out of the major forecasters. Most just look at the headline probabilities but fail to properly take conditional probabilities into account. They looked at poll after poll and did the fairly standard "average everything, find the standard deviation, there's your confidence interval, 99% Clinton victory." What made Nate Silver special is that his model accurately identified the sorts of universes in which Trump was likely to win by finding out the ways in which various poll results and errors were correlated. That allowed him to more accurately assess the possibility of a systematic underpolling based on purely statistical guesswork: he didn't need to understand why the polls could have been (and ultimately were) biased toward Clinton, he just had to set up his model so it spit out that possibility on its own.

Based on the reality we live in, it's probably true that even Nate's estimate was wrong: it wasn't rolling a 5+ on the six-sided election-day dice that gave Trump the win, but underlying factors that put Trump's win probability somewhere north of 50%. Given the data available, though, Nate was the most effective poll-aggregator available.

Prediction markets are like a super-Nate. Aside from each individual user having access to all the same tools Nate did (and the retrospective plus the incentive to use them), every vote on every market is a sort of poll, and all the people playing arbitrage force the markets to take conditional probabilities into account. They're still not going to be "right" all the time; they're not even necessarily going to outperform your average pundit. But as a casual observer without inside knowledge, following the markets is a dominant strategy over basing your worldview on any particular pundit or basket of pundits. It's like the "always buy SPY" investment advice. Rare people, in rare cases, can consistently outperform the betting markets. But without some very convincing reasons, you shouldn't assume you're one of them.

I don’t disagree with you, my objection is really primarily aesthetic and partially on principle. A fund manager makes a bet, he doesn’t just decide that there’s a 70% chance of Boeing stock doubling in the next year, he gambles that it will. This counts as ‘making a decision’.

Nate is actually worse than the bank research analyst who defends his buy rating on a stock that took a huge dive by saying that all he actually meant was that it would most likely do well, and then shrugs, because at least the latter has a position.

Nate should be confident enough, at least, to have a headline prediction that says “X is probably going to win”.

Sure, but this doesn't make sense in the context of prediction markets. Prediction markets host hundreds of predictions. We can look at the history of those predictions and see how well calibrated they are.

I don't believe the claim that prediction markets are "not accurate" would bear scrutiny.

Is probability even well-defined for a one-off event? It's not like we can random sample the multiverse on how the election actually went. At the same time, nothing is absolutely certain (supervolcano as October surprise!).

Maybe it makes sense from a Bayesian perspective: given the current knowledge of the system state (polls, voter registrations, demographics, maybe even volcanology reports) we can estimate the probability of a specific outcome. But a frequentist view seems nonsensical, even if a lot of predictions seem to present themselves that way.

One-off events are intractable. Kelly does not work on them.

I completely agree, the frequentist view is nonsensical. This is why forecasters need to be nailed down to a specific outcome (or ‘I don’t know / it’s too close to call’ but this has to be acknowledged as opting-out).

That's my main problem with Nate Silver's modelling.

There should be large error bars around the prediction that slowly close in as the predicted event approaches.

It shouldn't be "X% Trump, Y% Kamala," it should be "X% Trump, Y% Kamala, Z% irreducible uncertainty."

The logic is "if the election were held today then here's the probability." But... the election won't be held today. That's the whole point of a prediction for a future event, and I think it behooves them to acknowledge that uncertainty is inherent to the modelling process.

If they'd included that back when it was Trump vs. Biden, the conserved probability would have accounted for Biden suddenly dropping out and wouldn't have broken the model instantly. Also helps reflect the chance that one of the candidates dies... which also almost happened.

And if Nate trusts his model, there's a ton of money to be made in the prediction markets.

It shouldn't be "X% Trump, Y% Kamala," it should be "X% Trump, Y% Kamala, Z% irreducible uncertainty."

What would this irreducible uncertainty mean for an event with a binary outcome? I think Silver already accounts for increasing uncertainty as he propagates his current prediction into the future (what he calls forecast vs. nowcast).

Error bars would make sense around the expected vote percentage. Of course the probability distribution over vote percentages becomes broader as you look into the future, and perhaps he does show that to paying customers. But in the end you still have to integrate over that when the layman asks for the probabilities of who wins the election. And that still amounts to two numbers that sum to 100%.

Evaluating a predictor's performance seems straightforward to me via the usual log-likelihood score. Record the final outcome and take the log of the predictor's probability for that outcome. That score can then be summed over multiple different elections, if you like. (Not sure though if I'd call that scoring rule particularly frequentist.)
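A minimal sketch of that scoring rule, using 30% and 5% as illustrative stand-ins for two forecasters' Trump probabilities in a race Trump won:

```python
import math

def log_score(prob_of_actual_outcome):
    """Log-likelihood score: log of the probability the forecaster
    assigned to the outcome that actually occurred."""
    return math.log(prob_of_actual_outcome)

nate = log_score(0.30)       # gave the eventual winner a 30% chance
consensus = log_score(0.05)  # gave the eventual winner a 5% chance
print(nate > consensus)      # True: higher (less negative) is better
```

Summing these scores over many elections gives a cumulative measure of forecaster skill, even though any single election is a one-off.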

But the outcome ISN'T really binary, is it?

Biden dropped out, Trump could have been killed by that bullet, and then we'd have a whole new ball game. The "Trump vs. Biden" model almost certainly didn't include a variable for "the Candidate abruptly drops out" and I doubt assassination risk was plugged in either.

And the fact that it tries to 'call' an election months out but has to adjust radically to new info is why I call it 'gimmicky.'

Taleb had his own discussion of this a while back, and this is the best summary of it I've found.

https://towardsdatascience.com/why-you-should-care-about-the-nate-silver-vs-nassim-taleb-twitter-war-a581dce1f5fc

There's not a ton of money to be made if you believe the odds are 50/50. Prediction markets give Trump 60/40 odds, while Nate's model gives 50/50 odds. If your bankroll is $1M, then it's only rational to bet 167k, for an expected value of 40k. Not nothing, but not a ton of money either.
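The 167k figure is what the Kelly criterion gives under those numbers. A quick sketch, assuming you believe the race is a coin flip and buy the side the market prices at $0.40:

```python
def kelly_fraction(p_win, price):
    """Kelly-optimal bankroll fraction for a binary contract bought at
    `price`, given subjective win probability `p_win`."""
    b = (1 - price) / price          # net odds per dollar staked
    q = 1 - p_win
    return (b * p_win - q) / b

# Believe 50/50; the market sells the underdog side at $0.40.
f = kelly_fraction(p_win=0.5, price=0.40)
stake = 1_000_000 * f
ev = stake * (0.5 * (1 - 0.40) / 0.40 - 0.5)  # expected profit: (p*b - q) per dollar
print(round(stake), round(ev))  # 166667 41667
```

So a $1M bankroll stakes one-sixth of itself for an expected profit of roughly $42k, matching the comment's figures.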

That also ignores other costs, like counterparty risk. Nate also has to deal with reputational risk: people might value his published models less if they thought he was making bets on markets that were influenced by his models. Since that's his main source of actual income, a bet would be substantially negative EV for him.

There's reputational risk for having his model diverge too far from the prediction market's call, if the markets end up looking more accurate.

And I've seen him offer various bets before.

I like Nate generally, but I end up with the feeling that the Presidential Election model is a bit too gimmicky for my tastes. As stated, he should display some factor that accounts for the inherent uncertainty of a long-term prediction, rather than making confident-seeming prognostications which get aggressively revised as new information comes in.

He's not calling his shot well in advance, he's just adjusting to the same information everyone else gets as it comes in. Credit for the model being reasonable, but what new information is it giving us?

You have to look at their predictions in aggregate. If they predict 20 elections with a 95% chance for party A, and A wins 19 of those 20 elections, then yes they were accurate.

Even if that 1 election was a landslide for party B, the prediction method is accurate. People who say otherwise just aren’t accepting that it’s a percentage chance and not a poll.
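That aggregate check can be computed directly. Here the 20-election dataset is the hypothetical one from the comment, not real data:

```python
from collections import defaultdict

def calibration(preds, outcomes):
    """Group forecasts into 10%-wide buckets and compare each bucket's
    stated probability with the observed win frequency."""
    buckets = defaultdict(list)
    for p, won in zip(preds, outcomes):
        buckets[min(int(p * 10), 9) / 10].append(won)
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}

# 20 elections forecast at 95% for party A; A wins 19 of them.
print(calibration([0.95] * 20, [1] * 19 + [0]))  # {0.9: 0.95}
```

The observed frequency (95%) matches the stated probability, so the forecaster is well calibrated even though one "95% call" went the other way.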

The prediction markets, if anything, seem to be underselling Trump's chances right now.

I'd check out RealClearPolitics, which does a good job of aggregating all the polls. Trump is ahead in 6 of 7 swing states right now. Based on current polling averages, Trump wins 302 electoral votes. More importantly, polls are moving in his favor each day:

https://www.realclearpolling.com/elections/president/2024/battleground-states

Another data point. At this point in the campaign 8 years ago, Hillary was up by over 6.7 points nationally. Biden was up by 10 points. So we'd expect the polls to undersell Republican support on average. If the 2024 campaign follows the same trajectory as previous ones, Trump wins the popular vote by 3% and an electoral college landslide.
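The arithmetic behind that extrapolation looks roughly like this. The final poll leads are the figures quoted above, the actual popular-vote margins (~2.1 for Clinton in 2016, ~4.5 for Biden in 2020) are approximate, and the current Harris lead is a placeholder assumption rather than a measured polling number:

```python
# Back-of-the-envelope poll-error extrapolation.
poll_errors = [6.7 - 2.1,    # 2016: Clinton led polls by ~6.7, won by ~2.1
               10.0 - 4.5]   # 2020: Biden led polls by ~10, won by ~4.5
avg_error = sum(poll_errors) / len(poll_errors)  # ~5 pts toward Republicans

assumed_harris_lead = 2.0    # hypothetical current national polling lead
implied_trump_margin = avg_error - assumed_harris_lead
print(f"Implied Trump popular-vote margin: +{implied_trump_margin:.2f} pts")
```

With a hypothetical Harris +2 national lead and an average historical error of ~5 points, you recover roughly the "Trump by 3" figure in the comment.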

So, absent other information, I'd put Trump's odds at 70-80%. But I also know that I'm lacking information and fallible. I trust that the prediction markets are likely to be a truer reflection of the current state of the race than my opinions. There's actually a decent amount of liquidity in this particular market, with over $1 billion gambled, and a small bid ask spread of just 0.1%.

Swing states are so lumpy it's hard to call heads or tails on this.

While I fully agree with your general point and thrust of argument, particularly on overall polling differences compared to previous elections, the current leads in key states are still well within normal margins of error. In various cases we are talking about leads of 0.X% when a margin of error can be wide.

While I fully agree that based on historical patterns this would be a shoo-in, there is the caveat that this assumes no changes in how polls are conducted between election cycles to improve their accuracy. There are many interests (commercial, strategic, and political-competitor) with incentives to improve polling accuracy, so it's not safe to assume the same errors will keep being repeated in the same way.

New, equivalent errors may be introduced, and there are even conspiratorial takes on why polls may be wrong (such as presenting polls claiming a much closer race to support the effectiveness of future cheating, by reducing the amount of cheating needed to plausibly 'narrowly' win), but those arguments would have to actually be made, and I don't think you or most other people are making them.

I generally think there's significantly more irreducible uncertainty out there than we like to acknowledge.

Even "margins of error" are just estimates (statistically sound, but still possible they're wrong) and actual outcomes can exceed them, rarely.

Sure. No disagreement, even. Consider this an assent.

...I'm not sure how else to add 'that is a sound and valid addition' without coming off as sillier than I mean to.

The prediction markets, if anything, seem to be underselling Trump's chances right now.

As much as I think the "Trump campaign is in disarray! They were not prepared for Kamala! Coconut-couchfucker-joy!" offensive was fake, I'll keep repeating "it's not over until it's over". Someone else also pointed out back then that relying on pollsters' past bias might be risky, because you never know when they might decide to correct for it.

Oh yeah, I'm with you. And we also can't escape the fact that many swing states have serious flaws in their election security.

A lot can still happen. I would expect a maximally damaging and fake news story to drop against Trump in the next few days. (50% chance). But on the plus side, Biden seems to hate Kamala so some of the levers that the current administration can pull (like sabre-rattling with Iran) won't get pulled.

But on the plus side, Biden seems to hate Kamala

Source?

See the recent Harris/DeSantis/Biden exchanges.

It was revealed to me in a dream

Low-effort comments lead to low-effort responses. Knock it off, all of you.

Do you think you're on reddit or something?

Okay, @stuckinbathroom's "Source?" is obnoxious, but so is this.

You want, like, a scientific study?

I’d take it if you had one.

But I’d settle for an interview, or even rumors like we had for Obama and Biden.

Don’t underestimate Joe’s ability to fuck it up.

You make it sound as if pollster bias is just a simple matter of them deciding not to correct for it, rather than them trying repeatedly to correct for it but reality being surprising in various ways.

Hofstadter's Law of polling means there is always a shy-Tory effect, even when you correct for Hofstadter's Law.

Right, but that doesn’t mean the pollsters aren’t trying to correct for it all the same.

Yes, and you seem to be implying there's something strange about that?

If the bias is consistently in the same direction, I find it unlikely that they are actually trying to correct it. I'd have to look up the post I'm citing, but I think it was about Sweden, where the right-populist party was underestimated in one election, overestimated in the next, and finally estimated correctly in the one after that. That is what you'd expect to see if they were trying.

How do you explain the pollster debate over polling methodologies if they’re not trying to correct for biases? Perhaps sometimes the biases are hard to correct for https://archive.is/6tjvT

The same way I explain debates over methodology in academia, which result in a peer review process that can't outperform laymen simply looking at studies' titles.

That does not imply a peer review process that can’t outperform laymen, because laypeople are only acting on the outputs of the peer review process. Moreover, a prediction performance of 67% may be much higher than chance, but there’s clearly a lot of signal still that laypeople cannot discern. You’d expect something different if they’re not trying at all.

I’ll go one further. I don’t think any poll is actually trying to figure out who will win so much as to convince the electorate of whatever the polling centers want to be true. There’s really no reason to bother with them other than to see if anything is changing within the narrative.

Polls are destructive tests: once you conduct one and announce the results, the value changes.

Which is kind of the point. If the point were just to see who might win, why publish the results? If the polls say Trump wins, that's perhaps useful in business, where you might want to plan long-term for the economic policies Trump would bring, or to the various campaigns as a signal of where the weak points are. But I suspect those parties aren't relying on the polls generally available to the public, which are not about reporting the likely winner but about motivating or demotivating various factions of the electorate. CNN isn't trying to guess the outcome. They want to scare Democrats into voting and working harder for Kamala, and saying she might lose is motivation for people who are afraid of a Trump second term. If they're wrong, it's not like they get a black eye, even.

I think what makes more sense is to try to gauge enthusiasm and whether or not some factions of the base are not on board. Kamala has a big problem because of Israel/Palestine. There's a fairly large portion of the left that's jumping to either staying home or voting Green Party. If they're serious, I think that's a problem no matter what the polls say. I don't see the same divide on any issue for Trump. I see lots of people saying they can't wait to vote for Trump. Both things seem important as data points.

According to some polling at least, Israel/Palestine ranks rather low on voter priorities: https://substack-post-media.s3.amazonaws.com/public/images/eba2f5ad-57c0-4c7b-b546-296a1e273e06_1456x1241.png

If the point was just to see who might win, why publish the results?

Depending on the motivations of the pollster, I can imagine various reasons why they’d publicize accurate results (eg to advertise their polling outfit in case you want to hire them to poll on other issues of note). But I haven’t actually been able to find much about how public polls are funded and why. You?

Nate Silver has written about how the Red Wave that never manifested was in fact never well supported by the polling data and instead was a result of just such an overcorrection so there is at least some evidence in that direction.

Trump has had two Presidential elections so far. Even with no bias, you'd expect the error to be the same sign 50% of the time.

The dark and cynical take (though not quite CapitalRoom-level cynicism) is that the pollsters have to keep the polls showing the possibility of a Harris victory to give the Democrats cover when they "find" enough ballots to put her over the top.

the pollsters have to keep the polls showing the possibility of a Harris victory to give the Democrats cover when they "find" enough ballots to put her over the top

This has been the theory put forth by some commenters over at the Dreaded Jim's blog.

I do have to say it's concerning that this election runs through several of our nation's most corrupt cities: Detroit, Milwaukee, Philadelphia, and Atlanta.

I won't make specific claims but it's the height of naivete to think these cities can run an election correctly.

What city exists that Republicans actually trust?

Until we get annexation of metropolitan areas it's just going to be like this.

Until we get annexation of metropolitan areas it's just going to be like this.

If you're advocating for this policy on the basis of culture-war reasons, prepare to be disappointed. If you're advocating for this policy on the basis of being part of a non-urbanite interest group, prepare to be very disappointed. In the short run you'd probably stand to benefit, which is why I as an urbanite would oppose you. But in the long run I think I'd get the last laugh.

The ultimate redpill is that none of this culture war stuff actually matters. It's all just cynical economic interest groups. The Republicans are the rural party, the Democrats are the urban party, and that's been true since they were called "Federalists" and "Democratic-Republicans." And, historically, integrating provincial/national and metropolitan governments tends to benefit urbanites, not rurals. Consider the likely results of removing the electoral college as illustrative. Or look at Paris/France, Rome/Rome, Vienna/Austria, Moscow/Russia, etc.

Some states are disproportionately rural/suburban and have their power balanced between multiple cities. In those cases, it's actually feasible for a rural/suburban coalition to partially dominate the urban areas. See: the Missouri state government's control over Kansas City's police force. But that's ultimately a fragile equilibrium, given anticipated climate-change-driven migration from heavily urbanized coastal areas plus the new ideological YIMBY trend toward densification. Our future is destined to be more urban, not less; even actual degrowth would hollow out suburbs and rural areas first. (See: what's happening in Japan.) Any effective attempt to oppress urbanites will just motivate people to move to rural/suburban areas and mold them in their image.

Ironically a republican success on immigration would only boost this trend. More homogenous cultures accommodate denser living-- the reverse of what caused the original white flight/suburbanization. It doesn't actually matter what that culture ultimately ends up being. Democrats would adapt to serve it, and then turn around to put their boots on ruralite necks.

There is still a big difference between the corrupt but functional cities (Boston, Seattle, NYC), and the corrupt dysfunctional cities (Detroit, Milwaukee, Atlanta).

Even here in deep blue Washington state, the government mostly still works. I trust the elections are mostly fair (modulo some Antifa ballot harvesting in Seattle races). In a place like Detroit, nothing works. How can they run an election fairly?

Fort Worth, Salt Lake City, and OKC would probably be trusted by most Republicans.

There is also the possibility that the underlying cause of the bias has abated. Support for Trump may have normalised in poll-answering demographics, for instance.

I still find it likely that some underestimation is going on but I wouldn't be surprised if the poll aggregate is largely accurate or even overestimating Trump.