
Culture War Roundup for the week of March 24, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Why hasn't it already?

In my opinion, it hasn't because (contrary to what AI hype proponents say) it can't. AI simply isn't very good at doing things yet. To use the specific example I know well and have actually interacted with, LLMs don't write good code. The code they produce has wildly inaccurate bits that you have to check up on, sometimes to the point that it isn't even syntactically valid. It actually slows you down in many cases to try to use LLMs for programming. A skilled programmer can use AI tools as a force multiplier in some situations, so they do have a (fairly narrow) use case. But the idea that you could replace programmers with LLMs is just plain laughable at this stage of the game.

I'm not an expert in every field. But given that AI is not actually very good at coding, one of the things its proponents claim it to be good at... I don't exactly have high hopes that AI is good at those other things either. Maybe it'll get there, but there's not sufficient reason to believe "yes, it will definitely happen" just yet. We have no way of knowing whether the rate of progress from the last few years will continue, or whether we are going to hit an unforeseen wall that blocks all progress. We'll just have to wait and see.

So, I think that is why the great AI replacement hasn't occurred: it simply can't happen successfully yet. At best, right now you would replace humans with AI that does the job extremely poorly, and then (in a few years, when the hype dies down) humans would get hired back to fix all the stuff the AI broke. That is a distinct possibility, as it's what happened a couple of decades ago with outsourcing jobs to India. But as painful as that would be, it's not "all these human jobs are over now".

It actually slows you down in many cases to try to use LLMs for programming.

For an example of this happening literally right now, see ThePrimeagen and other YouTubers spending a full week streaming themselves making a tower defense game through "vibe coding." Prime and the other streamers he's working with are all talented and knowledgeable devs, but what they're making is an absolute mess. They (or two or three decently competent devs at a weekend game jam) could make the same or a better game in a fraction of the time if they were coding directly instead of using an AI to do it. And the amount of work they have to do to fix the AI's messes is far more than what they'd need to do to just make the damn game themselves.

Was it on the motte that I saw this joke again recently? It feels appropriate though.

A guy is walking through the park when he comes across a chess table with a man seated on one side and a dog seated on the other. He stops to watch them and is astounded to see that the dog is actually playing! He professes his astonishment to the man: "Your dog is amazing, I can't believe he can play chess!" The man snorts, however, and turns to him with a sneer: "Amazing? Amazing, nothing. I still beat him nine times out of ten."

I think it's amazing that we can even consider getting a computer to write a game for us, having grown up in the era where you had to write a game before you could play it (unless you were wealthy enough to own a cassette player).

It was on the motte that I replied to this joke:

Beware fictional evidence.

The joke works because we have assumptions about what it means to be able to play chess, and we know that a dog playing chess with any significant chance of success implies a much greater jump in intelligence than the jump between playing poorly and playing well.

If the dog was playing chess using some method that was not like how humans play chess, and which couldn't generalize to being able to play well, the joke wouldn't be very funny. Of course there isn't such a method for chess-playing dogs. But we know that Claude doesn't play Pokemon like humans do, and this may very well not generalize to playing as well as a human.

(Notice that your assumptions are wrong for computers playing chess. My Gameboy can beat me in chess. It has no chance of taking over the world.)

If the dog was playing chess using some method that was not like how humans play chess, and which couldn't generalize to being able to play well, the joke wouldn't be very funny.

Humor is subjective and all that, but I don't understand this perspective. I'd find the joke exactly as funny no matter what way the dog was playing chess, whether it was thinking through its moves like a human theoretically does, or, I dunno, moving pieces by following scents that happened to cause its nose to push pieces in ways that followed the rules and were good enough to defeat a human player at some rate greater than chance. The humor in the joke to me comes from the player downplaying this completely absurd super-canine ability the dog has, and that ability remains the same no matter how the dog was accomplishing this, and no matter if it wouldn't imply any sort of general ability for the dog to become better at chess. Simply moving the pieces in a way that follows the rules most of the time would already be mind-blowingly impressive for a dog, to the extent that the joke would still be funny.

The humor in the joke to me comes from the player downplaying this completely absurd super-canine ability the dog has...

It's the same basic idea: we already know how hard it is to play chess and it's far more than a dog can normally do. And it's this knowledge which makes the joke a joke.

The joke isn't a scenario where the dog plays chess under such unusual circumstances that it doesn't mean the dog is smart.

And imagine that it's 1981 and someone is showing you their brand new ZX81. The exact same thing happens that happens with the dog, down to you saying that the chess program can be beaten nine times out of ten. Should you conclude that actually, ZX81s are really really smart because playing chess at all is impressive? Or should you conclude that even though humans use general intelligence to play chess, the ZX81 instead uses a very nonhuman specialized method, and the ZX81 isn't very smart despite how impressive playing chess is?

If a few years later the ZX81 was replaced with a Commodore 64, and you couldn't beat the Commodore 64 in chess, would you decide that the ZX81 is dumb, but the Commodore 64 is smart?
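
To make "very nonhuman specialized method" concrete, here is a minimal sketch of the kind of thing early chess computers did: brute-force enumeration of legal moves plus a crude material count, with no concepts or plans anywhere. (This is an illustrative toy, not the ZX81's actual program; it assumes the python-chess library purely for move generation and board bookkeeping.)

```python
# Illustrative toy only: a depth-limited brute-force chess "player".
# Assumes the python-chess library (pip install chess) for move
# generation; the "intelligence" here is nothing but enumeration
# plus counting material.
import chess

# Crude piece values; the king gets 0 because it can never be captured.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board: chess.Board) -> int:
    """Material balance from the side-to-move's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def negamax(board: chess.Board, depth: int) -> int:
    """Exhaustive lookahead: no plans, no concepts, just enumeration."""
    if depth == 0 or board.is_game_over():
        return material(board)
    best = -10_000
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

def pick_move(board: chess.Board, depth: int = 2) -> chess.Move:
    """Return the legal move with the best score after a shallow search."""
    best_move, best_score = None, -10_000
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)
        board.pop()
        if score > best_score:
            best_move, best_score = move, score
    return best_move

if __name__ == "__main__":
    board = chess.Board()
    print(pick_move(board))  # prints a legal move chosen by the search
```

A machine running something like this can win games while "knowing" nothing beyond arithmetic over piece counts, which is the sense in which beating it, or losing to it, says very little about general intelligence.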

The joke isn't a scenario where the dog plays chess under such unusual circumstances that it doesn't mean the dog is smart.

I don't think it would make sense for a dog to be able to play chess at all without that meaning the dog is "smart" in some real sense. Perhaps it doesn't understand the rules of chess or the very concept of a competitive board game, but if it's able to push around the pieces on the board in a way that conforms to the game's rules, in a manner that allows it to defeat humans (who are presumably competent at chess and genuinely attempting to win) some non-trivial percentage of the time, of its own volition, without some marionette strings or external commands or something, I would characterize that dog as "smart." Perhaps the dog had an extra-smart trainer, but I doubt that even an ASI-level smart trainer could train the smartest real-life dog in the real world to that level.

And imagine that it's 1981 and someone is showing you their brand new ZX81. The exact same thing happens that happens with the dog, down to you saying that the chess program can be beaten nine times out of ten. Should you conclude that actually, ZX81s are really really smart because playing chess at all is impressive? Or should you conclude that even though humans use general intelligence to play chess, the ZX81 instead uses a very nonhuman specialized method, and the ZX81 isn't very smart despite how impressive playing chess is?

This last sentence doesn't make sense to me either. Yes, I would conclude that the ZX81 uses a very nonhuman specialized method, and I'd characterize its "ability" (obviously unlike a dog, it has no agency) to play chess in this way as "smart" in some real, meaningful sense. Obviously it's not any sort of generalized "smartness" that can apply to any other situation. If we were living at a time when a computer that could play chess wasn't even a thing, and someone introduced me to a chess bot that he could defeat only nine times out of ten, I would find it funny if he downplayed that, as in the dog joke.

If a few years later the ZX81 was replaced with a Commodore 64, and you couldn't beat the Commodore 64 in chess, would you decide that the ZX81 is dumb, but the Commodore 64 is smart?

I'd conclude that the Commodore 64 is "smarter than" the ZX81 (I'm assuming we're using the computer names as shorthand for the software actually running on the hardware here). Again, not in some sort of generalized sense, but certainly in a real, meaningful sense in the realm of chess playing.

When it comes to actual modern AI, we're, of course, talking primarily about LLMs, which generate text really, really well, so they could be considered "smart" in that one realm. I'm on the fence about, and mostly skeptical of, the idea that LLMs will or can be the basis for an AGI in the future. But I think it's a decent argument that strings of text can be translated to almost any form of applied intelligence, and so by becoming really, really good at putting together strings of text, LLMs could be used as that basis for AGI. I think modern LLMs are clearly nowhere near there, with Claude Plays Pokemon being the latest really major example of their failures, from what I understand. We might have to get to a point where the gap between the latest LLM and ChatGPT4.5 is greater than the gap between ChatGPT4.5 and ELIZA before that happens, but I could see it happening.

I'd characterize its "ability" (obviously unlike a dog, it has no agency) to play chess in this way as "smart" in some real, meaningful sense. Obviously it's not any sort of generalized "smartness" that can apply to any other situation.

The point of the dog analogy is that the dog that plays chess poorly is impressive, because being able to play at all is the biggest and hardest step, and being able to play well is a relatively small step from that.

The LLM version would be that it's almost as impressive for an LLM to generate text poorly as it is for an LLM to generate text well.

I don't think that's true.

I didn't think that was the point of the dog analogy, but if it were, then indeed you're right that it's a poor analogy for this.

You're right, it is amazing that we can even consider that. I don't think anyone disagrees on that point. The disagreement here is that our resident AI hype posters keep going past that, and claim that AI will be able to outshine us in the very near future. It's possible, as I said in my other comment. But we simply are not there yet, and we (imo) don't yet have reason to believe we will be there real soon. That is the point of disagreement, and why people sound so skeptical about something which is nigh-objectively an amazing technical achievement. It's because they are responding to very overblown claims about what the achievement is capable of.

But why do you think it's so far off? I get that it isn't there yet, but that's not in any way an argument for it not coming soon. And that always seems to be the primary focus of the skeptic side, while the believers either wallow in existential crisis or evangelise about the future. And I know the believers' "it's coming, I promise" isn't any better from an evidence standpoint, but it is what I believe, so I've got to put forward my best case. And the continually accelerating path of modern technology over my lifetime is it.

ETA: for the record, my position is that AI will radically change civilisation within the next 15 years.

Because right now we're not even close to AI being able to equal humans, let alone exceed them. And because this is cutting edge research, we simply cannot know what roadblocks might occur between now and then. To me, the correct null hypothesis is "it won't happen soon" until such time as there is a new development which pushes things forward quite a bit.

Seems like you're just begging the question here. Why is that the correct null hypothesis?

I don't see how it's begging the question at all. Why shouldn't it be the null hypothesis, rather than the claim that we will see AI eclipse humans soon? Why is it begging the question when I do it, but not when someone else chooses a different theory? I'm willing to agree that the choice of "what is the appropriate null hypothesis" is not one which can be proven to be correct, which is why I said "to me" the correct null hypothesis is that we won't see that soon. But I'm not willing to agree that I'm committing some kind of epistemological sin here.

I was hoping you'd provide arguments or evidence about the likelihood of different outcomes. I'm not sure what calling something a null hypothesis means other than a bald assertion that it's likely.

I'll go first: continuous, rapid improvements in AI over the last 12 years and the massive R&D efforts now underway make it likely that substantial improvements will continue.


I would claim this as my joke, and it was probably my comment you recall, but it's been in circulation for probably longer than I've been alive. It's a good joke, stabs right at the gut.

Yeah, that's right, it was one of your other posts on AI. There's something in the zeitgeist demanding a resurgence of good old jokes at the moment; I've heard a lot of classics retold recently. It's nice, I missed the structured joke format as a cultural touchstone.

In my opinion, it hasn't because (contrary to what AI hype proponents say) it can't.

Yes, I lean towards thinking that AI is often overblown, but at least part of my point here is that probably a lot more automation was possible even prior to AI than has actually been embraced so far. Just because something is possible does not mean that it will be implemented, or implemented quickly.

A skilled programmer can use AI tools as a force multiplier in some situations, so they do have a (fairly narrow) use case.

I think this is pretty analogous to my experience with it (which doesn't involve programming). Force multiplier, yes, definitely. But so is Excel. And what happened with Excel wasn't that accountants went out of business, but rather that (from what I can tell, anyway) fairly sophisticated mathematical operations and financial monitoring became routine.