Culture War Roundup for the week of March 31, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

  1. LessWrong led the charge on even considering the possibility of AI going badly, and on treating that possibility as a concern worth taking seriously. It was the raison d'être for both OpenAI (initially founded as a non-profit to safely develop AGI) and especially Anthropic (founded by former OpenAI leaders explicitly concerned about the safety trajectory of large AI models). The idea that AGI is plausible, potentially near, and extremely dangerous was a core tenet in those circles.

  2. Anthropic in particular is basically Rats/EAs, the company. Dario himself, Chris Olah, a whole bunch of others.

  3. OAI's initial foundation as a non-profit used funds from Open Philanthropy, an EA/Rat charitable foundation. They received about $30 million, which meant something in the field of AI back in the ancient days of 2017. SBF, notorious as he is, was at the very least a self-proclaimed EA and invested a large sum in Anthropic. Dustin Moskovitz, the primary funder of Open Phil, led the initial investment into Anthropic. Anthropic President Daniela Amodei is married to former Open Philanthropy CEO Holden Karnofsky; Anthropic CEO Dario Amodei is her brother and was previously an advisor to Open Phil.

As for Open Phil itself, the best way to summarize is: Rationalist Community -> Influenced -> Effective Altruism Movement -> Directly Inspired/Created -> GiveWell & Good Ventures Partnership -> Became -> Open Philanthropy.

Note that I'm not claiming that Rationalists deserve all the credit for modern AI. Yet a claim that the link between them is as tenuous as that between ice cream and drowning is farcical. Any study of the aetiogenesis of the field that ignores Rat influence is fatally flawed.

I don't particularly see Less Wrong as having been important in popularising the idea that AI might be dangerous - come on, killer robot or killer AI stories have been prominent in popular culture for decades. Less Wrong launched in 2009. The film WarGames was from 1983, and it was hardly original at the time. The Terminator is from 1984. I Have No Mouth and I Must Scream is from 1967. 2001: A Space Odyssey is from 1968, based on stories from the 1950s. There are multiple Star Trek episodes about mad computers! It seems ridiculous to me to even suggest that Less Wrong led the charge on popularising the idea that AI could go badly. AI going badly is a cliché well over half a century old - it predates home computers!

Not that I think this even particularly matters, because as far as I can tell the AI safety movement has achieved very little, and perhaps more importantly, the goal of that movement is to slow down AI development, which seems like the opposite of what you gave the rationalists credit for.

More generally I am by no means surprised that lots of people in Silicon Valley are aware of rationalists, or even call themselves rationalists. What I'm questioning is whether there's a causal relationship between that and the development of AI or LLM technology. That may have been something that some of them believed, but so what? Perhaps being rationalist-inclined and developing AI are both downstream of some third factor (the summer, in the ice cream drowning example). They seem to me both plausibly downstream of being analytical computer-inclined nerds raised on a diet of science fiction, for instance. It's just all part of the same culture.

100%. I'd add that "AI going bad" arguably predates the computer as a trope, with Frankenstein unambiguously serving as a model for "humans create a cool modern scientific innovation that thinks for itself and turns on them". And I am pretty sure that Frankenstein isn't even the oldest example of that trope, just a particularly notable one.

I was struck, thinking about it for this, by just how diverse the genre is.

You have the classic 'killer robot' trope, where the machines are just plain evil and intentionally want to destroy humanity - thus Skynet or AM.

You have the machine that is faithfully executing the commands given to it in good faith and threatens to destroy everything out of ignorance - thus WOPR.

You have the machine that is attempting to fulfil its designed purpose in good faith but which suffers some kind of fatal error and goes crazy - thus HAL 9000.

You have the machines that genuinely want the best for humanity and try to achieve it even contrary to our explicitly stated preferences - think 'With Folded Hands' (1947); Asimov played around with this too. 'The Evitable Conflict' (1950) was about machines taking charge of the future with humanity's welfare in mind, and seems ambivalent about whether that's desirable.

It seems like these categories cover most plausible AI fears. The AI could be actively hostile to humans, the AI could be indifferent to or ignorant of human life, the AI could be schizophrenic or malfunctioning, and the AI could be benevolent in ways that we do not desire.

Obviously none of these stories map perfectly to contemporary worries, but there's enough, I think, that the concept of AI or robots or machines going wrong in a dangerous way was firmly stuck in the public consciousness long before an autodidact started a blog in 2009.

Absolutely. For fun I'd even add the AI in Alien (1979), which is programmed perfectly to serve its masters but by that very token is indifferent to its fellow humans and even its own survival in a way a rational human would not be.

Oh, and to that I should add works like Blade Runner or Do Androids Dream of Electric Sheep?, or even Frank Herbert's original concept for the Butlerian Jihad, where even perfectly well-behaved thinking machines might challenge what it means to be human metaphysically. Even that has been considered potentially existentially threatening. It's not literal destruction, but what if machines change our very concept of what it means to be alive, or to have a soul?

Less Wrong asked some of these questions in the 2010s, but then, so did Mass Effect. It's a genre staple.