Culture War Roundup for the week of January 27, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I'm pretty sure that's not how it works

I think you're seriously underestimating rationalists' capacity to rationalize.

Timeless decision theory is (and always has been) an excuse to do what you were going to do anyway.

It's the old leftist fallacy of "society is to blame" writ at a metaphysical level: you can't blame me for the consequences of my actions, I was merely a pawn of universal forces.

Rationalist here. Timeless decision theory was never explicitly designed for humans to use; it was always about "if we want AIs to work properly, we'll need to somehow make them understand how to make decisions - which means we need to figure out what the mathematically correct way to make decisions actually is. Hm, all the existing theories have rather glaring flaws and counterexamples that nobody seems to talk about."

That's why all the associated research stuff is about things like tiling, where AIs create successor AIs.

Of course, nowadays we teach AIs how to make decisions by plain reinforcement learning and prosaic reasoning, so this has all become rather pointless.
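
For concreteness, here's a toy version of Newcomb's problem, the classic counterexample of the sort mentioned above: a near-perfect predictor puts $1,000,000 in an opaque box only if it predicts you'll take just that box, while a transparent box always holds $1,000. Causal decision theory reasons that the boxes are already filled, so taking both dominates; a theory that treats your choice as correlated with the prediction takes one. This is only a sketch - the payoff amounts are the conventional ones, and the 99% predictor accuracy is an illustrative assumption, not anything from the actual literature:

    # Toy Newcomb's problem. A predictor puts $1,000,000 in an opaque box
    # only if it predicts you will take just that box; a transparent box
    # always holds $1,000. Payoffs are the conventional ones; the 99%
    # predictor accuracy is an illustrative assumption.
    ACCURACY = 0.99  # chance the predictor guessed your actual choice

    def expected_payoff(one_box: bool) -> float:
        # The opaque box's contents correlate with your choice via the
        # prediction - the crux that causal decision theory ignores.
        p_million = ACCURACY if one_box else 1 - ACCURACY
        return p_million * 1_000_000 + (0 if one_box else 1_000)

    print(f"one-box: ${expected_payoff(True):,.0f}")   # $990,000
    print(f"two-box: ${expected_payoff(False):,.0f}")  # $11,000

Against an accurate predictor, the one-boxer walks away with about $990,000 and the two-boxer with about $11,000 - which is exactly the kind of result that sent people looking for a better theory.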

My understanding of timeless decision theory is that you are deciding for every entity sufficiently similar to you. So you're making decisions for yourself at different points in time, as well as for anyone else who happens to be sufficiently similar to you right now. Well, technically, this would make backwards causality kind of a thing you could think about, but it really doesn't seem all that relevant to how you would actually use the theory to make decisions. Instead, it adds weight to the decisions you're making, by spreading their consequences farther than you would normally expect them to go.
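
To make that concrete, consider the "twin" prisoner's dilemma: your twin runs the same decision procedure you do, so whatever you decide, it decides too. A minimal sketch - the payoff numbers are the standard ones, and the perfect-copy assumption is doing all the work:

    # Toy "twin" prisoner's dilemma. The twin runs the same decision
    # procedure, so (by the perfect-copy assumption) its move mirrors
    # yours. Payoffs follow the standard ordering: 5 > 3 > 1 > 0.
    PAYOFFS = {
        ("C", "C"): 3,  # mutual cooperation
        ("C", "D"): 0,  # you cooperate, twin defects
        ("D", "C"): 5,  # you defect, twin cooperates
        ("D", "D"): 1,  # mutual defection
    }

    def my_payoff(my_move: str, twin_is_copy: bool = True) -> int:
        # If you are "deciding for every entity sufficiently similar
        # to you", choosing a move also fixes the twin's move.
        twin_move = my_move if twin_is_copy else "D"
        return PAYOFFS[(my_move, twin_move)]

    print("cooperate:", my_payoff("C"))  # 3 - your choice settles both moves
    print("defect:   ", my_payoff("D"))  # 1

A purely causal reasoner treats the twin's move as already fixed and defects; if your decision also settles the twin's, cooperating straightforwardly wins.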

But that understanding is from over a decade ago. It's entirely possible that the theory has become a lot more insane since then.