Culture War Roundup for the week of March 31, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I don't follow the AI developments terribly closely, and I'm probably missing a few IQ points to be able to read all the latest papers on the subjects like Dase does, so I could be misremembering / misunderstanding something, but from what I heard capital 'R' Rationalism has had very little to do with it, beyond maybe inspiring some of the actual researchers and business leaders.

Yudkowsky himself? He's best described as an educator and popularizer. He hasn't done much in terms of practical applications, beyond founding MIRI, which is a bit player. But right now, leaders of AI labs use rationalist shibboleths, and some high-ranking researchers like Neel Nanda, Paul Christiano and Jan Leike (and Ryan Moulton too, he's got an account here to boot) are active users on LessWrong.

The gist of it is that the founders and early joiners of the big AI labs were strongly motivated by their belief in the feasibility of creating superhuman AGI, and by their concern that there would be a far worse outcome if someone else, someone not as keyed into concerns about misalignment, got there first.

As for building god, I think I've heard that story before, and I believe its proper ending involves striking the GPU cluster with a warhammer, followed by several strikes with a shortsword. Memes aside, it's a horrible idea, and if it succeeds it will inevitably be used to enslave us.

You'll find that members of the Rationalist community are more likely to share said beliefs than the average population.

Yud had a whole institute devoted to studying AI, and he came up with nothing practical. From what I heard, the way the current batch of AIs work has nothing to do with what he was predicting; he just went "ah yes, this is exactly what I've been talking about all these years" after the fact.

Yudkowsky is still more correct than 99.9999% of the global population. He did better than most computer scientists and the few ML researchers around then. He correctly pointed out that you couldn't just expect that a machine intelligence would come out following human values (he also said that it would understand them very well, it just wouldn't care, it's not a malicious or naive genie). Was he right about the specifics, such as neural networks and the Transformer architecture that blew this wide open? He didn't even consider them, but almost nobody really did, until they began to unexpectedly show promise.

I repeat: just predicting that AI would reach near-human intelligence (and they're already superintelligent in narrow domains) before modern ML existed is a big deal. He's also on track in being right that they won't stop there; human parity is not some impossible barrier to breach. Even things like recursive self-improvement are borne out by synthetic data and teacher-student distillation actually working well.

In any case, when I bring up rationalism's failure, I usually mean its broader promises of transcending tribalism, systematized winning, raising the sanity waterline, and making sense of the world. In all of these, it has failed utterly.

Anyone who does really well in a consistent manner is being rational in a way that matters. There are plenty of superforecasters and quant nerds who make bank on being smarter and more rational, given the available information, than the rest of us. They just don't write as many blog posts. They're still applying the same principles.

Making sense of the world? The world makes pretty good sense, all things considered.

It makes sense, because my feelings toward rationalism and transhumanism are quite similar. Irreconcilable value differences are irreconcilable, though funnily enough most transhumanists, yourself included, seem like decent blokes.

Goes both ways. I'm sure you're someone I can talk to over a beer, even if we vehemently disagree on values.

(The precise phrase "irreconcilable values difference" is a Rationalist one, it's in the very air we breathe, we've adopted their lingo)

Others already pointed out how none of the insights you credit Rationalists with are unique to them, nor were they the first ones, so I'll skip over that.

You'll find that members of the Rationalist community are more likely to share said beliefs than the average population.

This is only true to the extent that their primary goal is not letting anyone else have the AI-god. Their preferred outcome is still for AI to exist, they just want it to be 100% under control of people with Rationalist values. So while there exists a set of circumstances where I might end up allying with them, their actual goals are one of my nightmare scenarios, and I'm much more aligned with the average population on this issue.

Anyone who does really well in a consistent manner is being rational in a way that matters.

But they're not (necessarily) being Rationalist, or following Enlightenment principles.

The precise phrase "irreconcilable values difference" is a Rationalist one

I'm pretty sure that the first time I heard it, I was but a wee little lad playing with my toys in the living room, overhearing what my parents were watching on the TV, with some talking heads dropping the phrase in the context of divorce. I doubt they got it from Rationalists.

Others already pointed out how none of the insights you credit Rationalists with are unique to them, nor were they the first ones, so I'll skip over that.

They were directly responsible for promulgating and popularizing those concepts, first in the tech sphere, and then just about globally.

The man who caused a flash of light when he accidentally shorted a primitive battery isn't credited with the invention of the lightbulb, the person who made them commercially viable is.

This is only true to the extent that their primary goal is not letting anyone else have the AI-god. Their preferred outcome is still for AI to exist, they just want it to be 100% under control of people with Rationalist values. So while there exists a set of circumstances where I might end up allying with them, their actual goals are one of my nightmare scenarios, and I'm much more aligned with the average population on this issue.

Religious people seem to believe that a God exists (and the major strains hold that this entity is somehow omnipotent, omniscient and omnibenevolent). Even those who don't tend to think that something approaching those qualities would be a Good Thing.

The majority of Rats don't think an aligned ASI is strictly necessary for eudaimonia, but it sure as hell helps.

Besides, the only actual universal trait required to be a rationalist is to highly value the art of rationality and to seek to apply it. You don't have to be a Rat to be rational; anyone who has made a budget is trying to be rational.

But they're not (necessarily) being Rationalist, or following Enlightenment principles.

Which is fine. I'm not contesting that. As I said, you don't have to be a card-carrying rationalist to be rational. They just think it's a topic worth formal analysis.

I'm pretty sure that the first time I heard it, I was but wee little lad playing with my toys in the living room, overhearing what my parents were watching on the TV, and some talking heads dropping the phrase in the context of divorce. I doubt they got it from Rationalists.

"Irreconcilable differences" is a phrase that's been around for a while, with the most obvious application being in a legal context. The values bit is a rationalist shibboleth.

Yudkowsky himself? He's best described as an educator and popularizer. He hasn't done much in terms of practical applications, beyond founding MIRI, which is a bit player. But right now, leaders of AI labs use rationalist shibboleths, and some high-ranking researchers like Neel Nanda, Paul Christiano and Jan Leike (and Ryan Moulton too, he's got an account here to boot) are active users on LessWrong.

That the rationalist subculture is something some people in the tech industry are also into does not mean that rationalists can take credit for AI companies.

(Though frankly why you would want to is beyond me - "is responsible for AI" is something that lowers my estimation of someone, rather than raises it.)

You presented a genetic or causal relationship:

You believe that the Rationalist movement is an "utter failure", when it has spawned the corporations busy making God out of silicon.

But the fact that some people are both rationalists and work at AI companies does not show that rationalists are the reason those companies exist - "rationalists caused AI" is of the same order as "ice cream causes drowning".

  1. LessWrong led the charge on even considering the possibility of AI going badly, and on treating that as a concern to be taken seriously. This was the raison d'être for both OpenAI (initially founded as a non-profit to safely develop AGI) and especially Anthropic (founded by former OpenAI leaders explicitly concerned about the safety trajectory of large AI models). The idea that AGI is plausible, potentially near, and extremely dangerous was a core tenet in those circles.

  2. Anthropic in particular is basically Rats/EAs, the company. Dario himself, Chris Olah, a whole bunch of others.

  3. OAI's initial foundation as a non-profit was funded in part by Open Philanthropy, an EA/Rat charitable foundation. They received about $30 million, which meant something in the field of AI back in the ancient days of 2017. SBF, notorious as he is, was at the very least a self-proclaimed EA and invested a large sum in Anthropic. Dustin Moskovitz, the primary funder of Open Phil, led the initial investment into Anthropic. Anthropic President Daniela Amodei is married to former Open Philanthropy CEO Holden Karnofsky; Anthropic CEO Dario Amodei is her brother and was previously an advisor to Open Phil.

As for Open Phil itself, the best way to summarize is: Rationalist Community -> Influenced -> Effective Altruism Movement -> Directly Inspired/Created -> GiveWell & Good Ventures Partnership -> Became -> Open Philanthropy.

Note that I'm not claiming that Rationalists deserve all the credit for modern AI. But the claim that the link between them is as tenuous as that between ice cream and drowning is farcical. Any study of the aetiology of the field that ignores Rat influence is fatally flawed.

I don't particularly see Less Wrong as having been important in popularising the idea that AI might be dangerous - come on, killer robot or killer AI stories have been prominent in popular culture for decades. Less Wrong launched in 2009. The film WarGames was from 1983, and it was hardly original at the time. The Terminator is from 1984. I Have No Mouth and I Must Scream is from 1967. 2001: A Space Odyssey is from 1968, based on stories from the 1950s. There are multiple Star Trek episodes about mad computers! It seems ridiculous to me to even suggest that Less Wrong led the charge on popularising the idea that AI could go badly. AI going badly is a cliché well over half a century old - it predates home computers!

Not that I think this even particularly matters, because as far as I can tell the AI safety movement has achieved very little, and perhaps more importantly, the goal of that movement is to slow down AI development, which seems like the opposite of what you gave the rationalists credit for.

More generally I am by no means surprised that lots of people in Silicon Valley are aware of rationalists, or even call themselves rationalists. What I'm questioning is whether there's a causal relationship between that and the development of AI or LLM technology. That may have been something that some of them believed, but so what? Perhaps being rationalist-inclined and developing AI are both downstream of some third factor (the summer, in the ice cream drowning example). They seem to me both plausibly downstream of being analytical computer-inclined nerds raised on a diet of science fiction, for instance. It's just all part of the same culture.

100%. I'd add that "AI going bad" arguably predates the computer as a trope, with Frankenstein unambiguously serving as a model for "humans create cool modern scientific innovation that thinks for itself and turns on them" and I am pretty sure that Frankenstein isn't even the oldest example of that trope, just a particularly notable one.

I was struck, thinking about it for this, by just how diverse the genre is?

You have the classic 'killer robot' trope, where the machines are just plain evil and intentionally want to destroy humanity - thus Skynet or AM.

You have the machine that is faithfully executing the commands given to it in good faith and threatens to destroy everything out of ignorance - thus WOPR.

You have the machine that is attempting to fulfil its designed purpose in good faith but which suffers some kind of fatal error and goes crazy - thus HAL 9000.

You have the machines that genuinely want the best for humanity and try to achieve it even contrary to our explicitly stated preferences - think 'With Folded Hands' (1947), or the ways Asimov played with the idea: 'The Evitable Conflict' (1950) was about machines taking charge of the future with humanity's welfare in mind, and seems ambivalent about whether that's desirable.

It seems like these categories cover most plausible AI fears. The AI could be actively hostile to humans, the AI could be indifferent to or ignorant of human life, the AI could be schizophrenic or malfunctioning, and the AI could be benevolent in ways that we do not desire.

Obviously none of these stories map perfectly to contemporary worries, but there's enough, I think, that the concept of AI or robots or machines going wrong in a dangerous way was firmly stuck in the public consciousness long before an autodidact started a blog in 2009.

Absolutely. For fun I'd even add the AI in Alien (1979), which is programmed perfectly to serve its masters but by that very token is indifferent to its fellow humans and even its own survival in a way a rational human would not be.

Oh, and to that I should add works like Blade Runner or Do Androids Dream of Electric Sheep?, or even Frank Herbert's original concept for the Butlerian Jihad, where even perfectly well-behaved thinking machines might challenge what it means to be human metaphysically. Even that has been considered potentially existentially threatening. It's not literal destruction, but what if machines change our very concept of what it means to be alive, or to have a soul?

Less Wrong asked some of these questions in the 2010s, but then, so did Mass Effect. It's a genre staple.