Culture War Roundup for the week of March 24, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


> If the dog was playing chess using some method that was not like how humans play chess, and which couldn't generalize to being able to play well, the joke wouldn't be very funny.

Humor is subjective and all that, but I don't understand this perspective. I'd find the joke exactly as funny no matter how the dog was playing chess, whether it was thinking through its moves the way a human theoretically does, or, I dunno, moving pieces by following scents that happened to cause its nose to push pieces in ways that followed the rules and were good enough to defeat a human player at some rate greater than chance. The humor in the joke to me comes from the player downplaying this completely absurd super-canine ability the dog has, and that ability remains the same regardless of how the dog accomplishes it, and regardless of whether it implies any sort of general ability for the dog to get better at chess. Simply moving the pieces in a way that follows the rules most of the time would already be mind-blowingly impressive for a dog, to the extent that the joke would still be funny.

> The humor in the joke to me comes from the player downplaying this completely absurd super-canine ability the dog has...

It's the same basic idea: we already know how hard it is to play chess and it's far more than a dog can normally do. And it's this knowledge which makes the joke a joke.

The joke isn't a scenario where the dog plays chess under such unusual circumstances that it doesn't mean the dog is smart.

And imagine that it's 1981 and someone is showing you their brand new ZX81. The exact same thing happens that happens with the dog, down to you saying that the chess program can be beaten nine times out of ten. Should you conclude that actually, ZX81s are really really smart because playing chess at all is impressive? Or should you conclude that even though humans use general intelligence to play chess, the ZX81 instead uses a very nonhuman specialized method, and the ZX81 isn't very smart despite how impressive playing chess is?

If a few years later the ZX81 was replaced with a Commodore 64, and you couldn't beat the Commodore 64 in chess, would you decide that the ZX81 is dumb, but the Commodore 64 is smart?
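For what it's worth, the "very nonhuman specialized method" those early home computers used is well documented: brute-force game-tree search (minimax), not anything resembling human pattern recognition. Here's a toy sketch of the technique; I've used the much smaller game of Nim instead of chess purely so the sketch stays short, and the game choice is my own illustration, not anything from the thread:

```python
# Minimax sketch: the "specialized, nonhuman method" behind early
# computer game-playing. This plays Nim (take 1-3 stones per turn;
# whoever takes the last stone wins) perfectly, with zero
# "understanding" of the game beyond exhaustive lookahead.

def minimax(stones, maximizing):
    """Best achievable score (+1 = win, -1 = loss, from the
    maximizing player's perspective) with `stones` remaining."""
    if stones == 0:
        # The previous player took the last stone and won,
        # so whoever is to move now has lost.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick the number of stones to take with the best lookahead score."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: minimax(stones - t, False))
```

A ZX81-era chess engine is the same idea applied to chess moves, with a depth cutoff and a crude material-counting evaluation: impressive-looking output from a method that understands nothing.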

> The joke isn't a scenario where the dog plays chess under such unusual circumstances that it doesn't mean the dog is smart.

I don't think it would make sense for a dog to be able to play chess at all without that also meaning the dog is "smart" in some real sense. Perhaps it doesn't understand the rules of chess or the very concept of a competitive board game, but if it can push the pieces around the board, of its own volition and without marionette strings or external commands or something, in a way that conforms to the game's rules and lets it defeat humans (who are presumably competent at chess and genuinely attempting to win) some non-trivial percentage of the time, I would characterize that dog as "smart." Perhaps the dog had an extra-smart trainer, but I doubt that even an ASI-level trainer could train the smartest real-life dog in the real world to that level.

> And imagine that it's 1981 and someone is showing you their brand new ZX81. The exact same thing happens that happens with the dog, down to you saying that the chess program can be beaten nine times out of ten. Should you conclude that actually, ZX81s are really really smart because playing chess at all is impressive? Or should you conclude that even though humans use general intelligence to play chess, the ZX81 instead uses a very nonhuman specialized method, and the ZX81 isn't very smart despite how impressive playing chess is?

This last sentence doesn't make sense to me either. Yes, I would conclude that the ZX81 uses a very nonhuman specialized method, and I'd characterize its "ability" (obviously unlike a dog, it has no agency) to play chess in this way as "smart" in some real, meaningful sense. Obviously it's not any sort of generalized "smartness" that can apply to any other situation. If we were living at a time when a computer that could play chess wasn't even a thing, and someone introduced me to a chess bot that he could defeat only 9 times out of 10, I would find it funny if he downplayed that, as in the dog joke.

> If a few years later the ZX81 was replaced with a Commodore 64, and you couldn't beat the Commodore 64 in chess, would you decide that the ZX81 is dumb, but the Commodore 64 is smart?

I'd conclude that the Commodore 64 is "smarter than" the ZX81 (I'm assuming we're using the computer names as shorthand for the chess software they run, here). Again, not in some sort of generalized sense, but certainly in a real, meaningful sense in the realm of chess playing.

When it comes to actual modern AI, we're, of course, talking primarily about LLMs, which generate text really, really well, so they could be considered "smart" in that one realm. I'm on the fence about, and mostly skeptical of, the idea that LLMs will or can be the basis for an AGI in the future. But I think it's a decent argument that strings of text can be translated to almost any form of applied intelligence, and so by becoming really, really good at putting together strings of text, LLMs could be used as that basis for AGI. I think modern LLMs are clearly nowhere near there, with Claude Plays Pokemon being the latest major example of their failures, from what I understand. We might have to get to a point where the gap between the latest LLM and GPT-4.5 is greater than the gap between GPT-4.5 and ELIZA before that happens, but I could see it happening.
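To make the ELIZA end of that gap concrete: ELIZA "generated text" by keyword spotting and canned response templates, with no statistics or learning at all. A minimal sketch of the technique (the rules below are my own illustrative ones, not Weizenbaum's originals):

```python
# ELIZA-style text "generation": scan the input for a keyword
# pattern, then fill a canned template with the captured fragment.
# No model of language, just pattern matching.
import re

RULES = [
    (r"\bI am (.*)", "Why do you say you are {0}?"),
    (r"\bI feel (.*)", "How long have you felt {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when no rule matches

def respond(utterance):
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

The whole program fits in a page, which is what makes the ELIZA-to-GPT-4.5 gap such a vivid yardstick.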

> I'd characterize its "ability" (obviously unlike a dog, it has no agency) to play chess in this way as "smart" in some real, meaningful sense. Obviously it's not any sort of generalized "smartness" that can apply to any other situation.

The point of the dog analogy is that the dog that plays chess poorly is impressive, because being able to play at all is the biggest and hardest step, and being able to play well is a relatively small step from that.

The LLM version would be that it's almost as impressive for an LLM to generate text poorly as it is for an LLM to generate text well.

I don't think that's true.

I didn't think that was the point of the dog analogy, but if it were, then indeed, you're right that it's a poor analogy for this.