
Culture War Roundup for the week of April 7, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


The future of AI will be dumber than we can imagine.

Yes. This is part of what I meant when I was talking about the utter failure of the Rationalist movement with @self_made_human recently. The Rats invested essentially 100% of their credibility in a single issue, trying to position themselves as experts in "safety", and not only do they come up with the most ridiculous scenarios for risk, they ignore the most obvious ones and even promote their acceleration!

Decentralization is a virtue here.

This is blasphemy to the Rationalist. It's not even a question of whether the AI will be safe when decentralized or not, for them the whole point of achieving AGI is achieving total control of humanity's minds and souls.

Have any examples of a rationalist expressing this opinion?

I'd need to reread the thing, but I believe Meditations on Moloch had a bit about elevating AI to godhood, so that it can cultivate """human""" values. And there's also Samsara, a "hee hee, just kidding" story about mindfucking the last guy on the planet that dares to have a different opinion.

I found the relevant section of Moloch, and my impression is that it's more of a hypothetical used as a rhetorical device to show the magnitude of the problem of "traps" than a seriously proposed solution:

So let me confess guilt to one of Hurlock’s accusations: I am a transhumanist and I really do want to rule the universe.

Not personally – I mean, I wouldn’t object if someone personally offered me the job, but I don’t expect anyone will. I would like humans, or something that respects humans, or at least gets along with humans – to have the job.

But the current rulers of the universe – call them what you want, Moloch, Gnon, whatever – want us dead, and with us everything we value. Art, science, love, philosophy, consciousness itself, the entire bundle. And since I’m not down with that plan, I think defeating them and taking their place is a pretty high priority.

The opposite of a trap is a garden. The only way to avoid having all human values gradually ground down by optimization-competition is to install a Gardener over the entire universe who optimizes for human values.

And the whole point of Bostrom’s Superintelligence is that this is within our reach. Once humans can design machines that are smarter than we are, by definition they’ll be able to design machines which are smarter than they are, which can design machines smarter than they are, and so on in a feedback loop so tiny that it will smash up against the physical limitations for intelligence in a comparatively lightning-short amount of time. If multiple competing entities were likely to do that at once, we would be super-doomed. But the sheer speed of the cycle makes it possible that we will end up with one entity light-years ahead of the rest of civilization, so much so that it can suppress any competition – including competition for its title of most powerful entity – permanently. In the very near future, we are going to lift something to Heaven. It might be Moloch. But it might be something on our side. If it’s on our side, it can kill Moloch dead.

And if that entity shares human values, it can allow human values to flourish unconstrained by natural law.

I realize that sounds like hubris – it certainly did to Hurlock – but I think it’s the opposite of hubris, or at least a hubris-minimizing position.

To expect God to care about you or your personal values or the values of your civilization, that’s hubris.

To expect God to bargain with you, to allow you to survive and prosper as long as you submit to Him, that’s hubris.

To expect to wall off a garden where God can’t get to you and hurt you, that’s hubris.

To expect to be able to remove God from the picture entirely…well, at least it’s an actionable strategy.

I am a transhumanist because I do not have enough hubris not to try to kill God.

Perhaps Scott genuinely believes human-aligned ASI is the least-bad solution to Moloch and that solving Moloch is sufficient motivation to risk a mis-aligned ASI, but if "the whole point of achieving AGI is achieving total control of humanity's minds and souls," the question of alignment wouldn't make much sense; the ASI could be assumed to be better aligned to transhumanist terminal goals than the transhumanists themselves, being definitionally superior.

"Samsara" is a terrific example of Scott's fiction, but I think it being a friendly joke from a comedic short fiction author who's fond of Buddhism is a much better interpretation than it revealing a latent desire to control the minds of those who disagree with him - if it were the latter, what latent desire would Current Affairs’ “Some Puzzles For Libertarians”, Treated As Writing Prompts For Short Stories reveal?

I think there is a genuine spiritual vision to 'Moloch' - it's the same one in 'The Goddess of Everything Else' and even to an extent in 'Wirehead Gods on Lotus Thrones'. It's a vision that sees nature as cruel, ruthless, and arbitrary, and which exults rather in its replacement by conscious organisation in the interests of consciousness. Or at least, in the interests of intelligence, since I think the rationalists have a very minimal (I would say impoverished) definition of consciousness as such. There was a tagline on an old rationalist blog - was it Ozy's? - that I felt summed up this religion well: "The gradual replacement of the natural with the good".

AI-god naturally fits very well into that vision. It is a constructed super-agent that, unlike the messy products of evolution, might be trusted to align with the vision itself. It is a technological avatar of rationalist values - there's a reason why 'alignment' is such a central word in rationalist AI discourse. It is an elevated means by which reality may be made to conform to that vision, obliterating any resistance or friction to it.

(This should be for another post, but I have thoughts about the importance of resistance or friction in a good life...)

'Samsara', on the other hand, is a one-off joke, though for me the deepest joke it tells is actually one on Scott. 'Samsara' reads as fairly typical of the rationalist understanding of Buddhism, which is intensely surface-level. I know that it's a joke, so I'm not going to jump on it for the world full of people in orange robes reciting clichéd koans, but it reminds me a lot of Daniel Ingram's book, and, in the same way, of why neither Scott nor Ingram has a clue about Buddhism. What I mean is that their approach to Buddhism is fundamentally subtractive - it's about stripping away millennia of tradition to try to crystallise a single fundamental insight. The premise of 'Samsara' is:

Twenty years ago, a group of San Francisco hippie/yuppie/techie seekers had pared down the ancient techniques to their bare essentials, then optimized hard. A combination of drugs, meditation, and ecstatic dance that could catapult you to enlightenment in the space of a weekend retreat, 100% success rate. Their cult/movement/startup, the Order Of The Golden Lotus, spread like wildfire through California – a state where wildfires spread even faster than usual – and then on to the rest of the world. Soon investment bankers and soccer moms were showing up to book clubs talking about how they had grasped the peace beyond understanding and vanquished their ego-self.

Again, not all the paraphernalia should be taken literally (obviously lotuses and robes and pagodas and things aren't hard-coded into enlightenment), but what it does express is the idea that Buddhism can be boiled down to a single essence, one which a sufficiently determined or intelligent person can master pretty quickly. See also: PNSE, and those articles Scott writes about jhanas.

But - the thing is, Buddhism is not in fact like that. You cannot reduce Buddhism to One Weird Trick. (Rakshasas HATE him!) You'd think there might be something to learn from the fact that actual Buddhists have been doing this for thousands of years and might have made some discoveries in all that time. Maybe not all the accretion is cruft. In fact for most practicing Buddhists, even very devout ones, enlightenment is understood to be a project that will take multiple lifetimes. And in fact what enlightenment is may have a bit more to it than they think.

Yes, meditation is something that Buddhists do, and it's important to them, but Buddhism is not just about meditating yourself into a weird insight or into an ecstatic state of mind. One of the insights of Zen is that people get those insights or ecstasies all the time, and by itself it doesn't mean much. Buddhism's substantive metaphysical doctrines go considerably beyond impermanence, its ethical doctrines are extremely rich, and its practices merit some attention as well.

Again, I realise that 'Samsara' is a joke, and as a joke I think it's funny. "What if it were possible to boil Buddhism down to a weekend? This is, of course, ridiculous, but wouldn't it be funny?" Yes, it is. But read in the context of Scott's other writings on Buddhism, I think there is a failure to encounter the tradition beyond the small handful of elements that he and writers like Ingram have picked out as 'core' and fixated on.

I think there is a genuine spiritual vision to 'Moloch' - it's the same one in 'The Goddess of Everything Else' and even to an extent in 'Wirehead Gods on Lotus Thrones'. It's a vision that sees nature as cruel, ruthless, and arbitrary, and which exults rather in its replacement by conscious organisation in the interests of consciousness. Or at least, in the interests of intelligence, since I think the rationalists have a very minimal (I would say impoverished) definition of consciousness as such. There was a tagline on an old rationalist blog - was it Ozy's? - that I felt summed up this religion well: "The gradual replacement of the natural with the good".

"Wirehead Gods on Lotus Thrones seems to come to the opposite conclusion:

I am pretty okay with this future. This okayness surprises me, because the lotus-god future seems a lot like the wirehead future. All you do is replace the dingy room with a lotus throne, and change your metaphor for their no-doubt indescribably intense feelings from “drug-addled pleasure” to “cosmic bliss”. It seems more like a change in decoration than a change in substance. Should I worry that the valence of a future shifts from “heavily dystopian” to “heavily utopian” with a simple change in decoration?

"The gradual replacement of the natural with the good" seems open to interpretation, out of context - I might guess that was a pretentious neo-Hobbesian appeal, which isn't outside rationalists' overton window.

Yes, meditation is something that Buddhists do, and it's important to them, but Buddhism is not just about meditating yourself into a weird insight or into an ecstatic state of mind. One of the insights of Zen is that people get those insights or ecstasies all the time, and by itself it doesn't mean much. Buddhism's substantive metaphysical doctrines go considerably beyond impermanence, its ethical doctrines are extremely rich, and its practices merit some attention as well.

Can you elaborate on this? Scott's writings on jhanas raise the question of why people who reach them don't try to spend more time in what is, at face value, a purely positive state, so this is interesting.

It's strange, from the outside - even going back to their beginnings in the early 2010s, AI nonsense, and speculative technology in general, always seemed like one of Less Wrong's weakest points. It was that community at its least plausible, its least credible, and its most moonbatty. Where people like Scott Alexander were most interesting and credible was in other fields - psychiatry in particular for him, as well as a lot of writing about society and politics.

So for that whole crowd to double down on their worst issue feels mostly just disappointing. Really, this is what you decided to invest in?

So for that whole crowd to double down on their worst issue feels mostly just disappointing

AI was the whole point and focus. The Sequences and the overall movement were just a method to teach people what they needed to know to be able to understand the AI argument. A la Ellul or Soviet literacy programs, you need to educate people to make them susceptible to propaganda.

Is there a community that has outperformed the rationalists in forecasting AI? Scott's own 2018 forecast of AI in 2023 was pretty good, wasn't it?

I have roughly two thoughts here:

Firstly, I don't think that's a very substantial forecast. Those are very safe predictions, largely amounting to "things in 2023 will be much the same as in 2018". The predictions he got correct were that a computer would beat a top player at Starcraft (AlphaStar did that in 2018), that MIRI would still exist in 2023 (not actually about AI), and about the 'subjective feelings' around AI risk (still not actually about AI). These are pretty weak tea. Would you rate him as correct or incorrect on self-driving cars? I believe there have been a couple of experimental schemes in very limited areas, but none that have been very successful. I would take his prediction to imply coverage of an entire city, and for the cars to be usable by ordinary people not specially interested in tech.

Secondly, I feel like predictions like that are a kind of motte and bailey? Predicting that language models will get better over the next few years is a pretty easy call. "Technology will continue to incrementally improve" is a safe bet. However, that's not really the controversial issue. AI risk or AI safety has been heavily singularitarian in its outlook - we're talking about MIRI, née the Singularity Institute, aren't we? AGI, superintelligence, the intelligence explosion, and so on. It's a big leap from the claim that existing technologies will get better to, as Arjin put it, AGI "achieving total control of humanity's minds and souls".

Being right about autonomous driving technology gradually improving or text predictors getting a bit faster doesn't seem like it translates to reliability in forecasting AI-god.