Culture War Roundup for the week of January 16, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

It is precisely the ability to convert between mild experiences and extreme experiences at some ratio that allows everything to add up to something resembling common-sense morality. If you refuse that conversion, so that somewhere in the ranking of bad experiences from mildest to most severe one experience is considered infinitely worse than the one before it, then your decision-making will be dominated by whichever potential consequences pass that threshold, while everything below it is completely disregarded, regardless of how unlikely those extreme consequences are.
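To make the threshold dynamic concrete, here is a toy sketch (all probabilities and severity numbers are invented for illustration) of a decision rule where any above-threshold harm lexically dominates any amount of below-threshold harm:

```python
# Toy sketch of threshold-based (lexical) decision-making.
# All numbers are made up; "severity" is an arbitrary invented scale.

THRESHOLD = 100  # harms above this are treated as incomparably worse

def evaluate(option):
    """Rank an option by (probability of any above-threshold harm,
    expected below-threshold harm). Python's tuple ordering makes the
    first element lexically dominate the second."""
    p_extreme = sum(p for p, s in option if s > THRESHOLD)
    mild = sum(p * s for p, s in option if s <= THRESHOLD)
    return (p_extreme, mild)

# Each option is a list of (probability, severity) outcomes.
a = [(1e-9, 1e6), (0.999999999, 1)]   # tiny chance of extreme harm
b = [(1.0, 99)]                       # certain but sub-threshold harm

# Under the lexical ranking, the one-in-a-billion extreme outcome
# dominates, so a is judged worse than b:
assert evaluate(a) > evaluate(b)

# Under plain expected value, a would look far better than b:
assert sum(p * s for p, s in a) < sum(p * s for p, s in b)
```

The tuple comparison is the whole trick: no quantity of sub-threshold harm in the second slot can ever outweigh a nonzero probability in the first, which is exactly the "completely disregarding everything below that threshold" behavior described above.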

Yes, this is essentially how I think morality and decision-making should work. Going back to your word choice example, the actual word choice should matter not at all in a vacuum, but it has a chance of having other effects (such as better book sales, saving someone's life from suicide, etc.) which I think are much more likely than the chance that typing in the extra word causes chronic torturous pain.

In real life, small harms like stubbing a toe can lead to greater harms like missing an important opportunity due to the pain, breaking a bone, or perhaps snapping at someone important due to your bad mood. If we could ignore those side effects and focus on just the pain, I would absolutely agree that

your decision-making will be dominated by whichever potential consequences pass that threshold while completely disregarding everything below that threshold, regardless of how unlikely those extreme consequences are

With the appropriate caveats regarding computation time and other side effects of avoiding those extreme consequences.

I do not think this is a meaningful statement. We can decide which scenario is preferable and call that something like "net utility", but we can't literally "add up" multiple people's experiences within a single person.

See, this is kind of my point. I don't think we can just posit a "net utility" and directly compare small harms to great ones. I agree that it doesn't necessarily make much sense to just "add up" the suffering, though, so here's another example.

You're immortal. You can choose to be tortured for 100 years straight, or experience a stubbed toe once every billion years, forever. Neither option has any side effects.

I would always choose the stubbed-toe option even though it adds up to literally infinite suffering, so by extension I would force infinitely many people to stub their toes rather than force one person to be tortured for 100 years.
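As a toy illustration (the suffering units here are entirely invented), naive summation really does make the toe-stub option eventually "worse" than the torture, which is precisely the conclusion this choice rejects:

```python
# Toy arithmetic behind the thought experiment, in invented units of
# "suffering": 100 years of torture vs. one stubbed toe per billion
# years, forever.

TORTURE_PER_YEAR = 1_000_000  # assumed suffering per year of torture
TOE_STUB = 1                  # assumed suffering per stubbed toe

torture_total = 100 * TORTURE_PER_YEAR  # finite: 100,000,000 units

def toe_total(years):
    """Cumulative toe-stub suffering after `years` years."""
    return (years // 1_000_000_000) * TOE_STUB

# The toe-stub series grows without bound, so under naive summation it
# eventually exceeds any finite torture total...
assert toe_total(10**18) > torture_total

# ...yet at any remotely human-scale horizon it is negligible:
assert toe_total(10**9) == 1
```

The divergent-but-preferred series is why the comparison can't be settled by summing: the sums say one thing at every finite horizon and the opposite in the limit, while the intuitive preference stays fixed.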

edit: One more thing: it's not that I think there's some bright line above which things matter and below which they don't. My point is mainly that these things simply aren't quantifiable at all.