Culture War Roundup for the week of August 14, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Is it still a good idea to risk loads of people to save just one?

Many religious people, moral extremists of many types, the very elderly, and others will all choose blue to save one, yes. So now we iterate once: is it a good idea for the somewhat less moral people to choose blue to save the more moral people? I'd say so, and I'd say those two groups account for at least half already.

Now you're the one assuming your conclusion.

I don't believe those people to be more moral. I think the opposite actually.

But let's follow this thought. Okay, you may be fine if you iterate once. What if you iterate forever? How long until the high-trust society eventually collapses because people figure out they can avoid the risk entirely by shirking the norm? And once they do, will you still be able to argue that the house of cards was moral?

Now you're the one assuming your conclusion.

How so? My only assumption is that some people will choose blue to try to save a single life. This is obviously a safe assumption.

I don't believe those people to be more moral. I think the opposite actually.

OK, just substitute "@Meriadoc's idea of moral" for "moral" in my comment and it remains just as valid, so long as you care about human life at all. My point is not to argue that such people are actually moral. I believe they are, but that's not what this thought experiment is about anyway. The point is that even if the premise says only one person will definitely choose blue, I know for a fact that more will.

What if you iterate forever? How long until the high trust society eventually collapses because people figure out they can avoid the risk entirely by shirking the norm?

This isn't iteration at all; this is just "when people think more about the question, they'll come around to my point of view." I disagree.

And once they do will you still be able to argue that the house of cards was moral?

As I've said before, my answer would change if I thought blue wasn't attainable.

This isn't iteration at all; this is just "when people think more about the question, they'll come around to my point of view." I disagree.

This game only has two equilibria: everyone takes blue or everyone takes red.

Are you really going to argue that the iterated dynamics make it tend towards blue? I just don't see how, when it's just like the prisoner's dilemma except with no personal upside to cooperation.

You're essentially asking for people to do something that is in the group interest but against their personal interest. I don't give this social experiment more than 70 years.
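To make that comparison concrete, here is a minimal sketch of the two payoff structures in Python. The survival-only payoff, the at-least-50% threshold convention, and the prisoner's dilemma numbers are all illustrative assumptions, not anything spelled out in the original poll.

```python
# Minimal sketch of the pill game's payoff, with payoff = own survival only.
# Assumes blue-choosers live when blue gets at least half the votes.

def pill_payoff(choice: str, blue_share: float) -> int:
    if choice == "red":
        return 1                                # red always survives
    return 1 if blue_share >= 0.5 else 0        # blue needs the threshold

# Red weakly dominates: never worse than blue, strictly better below 50%.
for share in (0.9, 0.5, 0.3):
    print(share, "blue:", pill_payoff("blue", share), "red:", pill_payoff("red", share))

# Contrast with a standard prisoner's dilemma (illustrative numbers), where
# defection strictly dominates but mutual cooperation pays each player more
# than mutual defection (3 > 1). In the pill game, all-blue and all-red give
# the same personal payoff of 1, which is the "no personal upside" point.
PD = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
print(PD[("C", "C")], PD[("D", "D")])
```

Under these assumptions both all-blue and all-red are equilibria, which is where the disagreement about stability below starts.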

I just don't see how, when it's just like the prisoner's dilemma except with no personal upside to cooperation

There's no personal upside to defection either, assuming enough people cooperate. So yes, red is probably more stable, but blue is plenty stable so long as that's where things start.

This game only has two equilibria: everyone takes blue or everyone takes red.

You mean >50% of people take blue or everyone takes red.

You're essentially asking for people to do something that is in the group interest but against their personal interest.

I wouldn't say it's necessarily against personal interest, unless by "personal" you mean "interest in their own life." Iteration means you can figure out that your grandma or disabled cousin will pick blue. Maybe it's in your personal interest to keep them alive even if you risk your own life to do so.

You mean >50% of people take blue or everyone takes red.

No.

Remember, in the iterated version people make decisions based on what they think the result will be, given previous results and their impression of other people's strategies.

This is made clear at the margins: let's say that it's exactly 50/50 and it only takes one person flipping next time to get the blue team killed. How many of the people previously in blue will still want to take blue? For this to be stable, it will have to be the exact same number or more. I don't think this is likely.

Everyone taking the same pill is stable because there is no reward to be gained by changing your choice, so everyone expects everyone else to keep picking that same pill.

60 blue 40 red doesn't have this property. Blue people can start minimizing personal risk without endangering the group if they think they'll be in small numbers doing that. This isn't stable.
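A toy simulation makes this instability argument concrete. Every number here (population size, the distribution of safety margins, the seed of committed blue-choosers) is invented for illustration: each round, agents keep blue only if the previous round's blue share cleared the threshold by their personal safety margin.

```python
import random

# Toy dynamic for the "margins" argument. Committed altruists always pick
# blue; everyone else picks blue only if last round's blue share beat the
# 50% threshold by a personal safety margin. All numbers are illustrative.
random.seed(0)
N = 1000
COMMITTED = 100                                   # unconditional blue-choosers
margins = [random.uniform(0.0, 0.2) for _ in range(N - COMMITTED)]

blue_share = 0.5                                  # start at the knife's edge
for round_no in range(5):
    blue = COMMITTED + sum(1 for m in margins if blue_share >= 0.5 + m)
    blue_share = blue / N
    print(round_no, blue_share)
# The share collapses from 0.5 to 0.1 in one round: at exactly 50/50, anyone
# who wants any cushion at all defects, and only the committed core remains.
```

Starting the same loop at, say, blue_share = 0.8 keeps every agent on blue, which is the framing point the next reply makes.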

It all depends on the framing. If you've declined steadily from 100% blue to 50%, yeah, staying blue might be risky. If you've gone from 100% red to 50%, there are now probably plenty of blue people who will feel comfortable joining in.

As far as the actual, Official Game Theory goes, blue and red are both stable, and not just at 100%.

60 blue 40 red doesn't have this property. Blue people can start minimizing personal risk without endangering the group if they think they'll be in small numbers doing that. This isn't stable.

60 red 40 blue has that same property: blue people can keep taking the risk of pushing blue past 50%, with greater confidence that their decision matters. Both colors are emboldened as the ratio gets closer to 100% either way, so both are stable.
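The "both stable" claim can be checked by brute force for a small population. Here is a sketch, assuming payoff is own survival only and that blue wins at 50% or more; both are assumptions, since the poll's exact rules aren't restated in this thread.

```python
from itertools import product

# Brute-force (weak) Nash check for a 5-player version: a profile is stable
# if no single player can strictly improve their own survival by switching.
N = 5

def payoff(choice, blue_count):
    if choice == "red":
        return 1
    return 1 if blue_count / N >= 0.5 else 0     # assumed threshold

def is_stable(profile):
    blue_count = profile.count("blue")
    for choice in profile:
        other = "red" if choice == "blue" else "blue"
        new_blue = blue_count + (1 if other == "blue" else -1)
        if payoff(other, new_blue) > payoff(choice, blue_count):
            return False
    return True

stable_counts = {p.count("blue") for p in product(("blue", "red"), repeat=N)
                 if is_stable(p)}
print(sorted(stable_counts))    # [0, 3, 4, 5]: all-red, or any blue majority
```

So under those assumptions, the stable states are exactly all-red and every blue-majority split, matching the earlier ">50% of people take blue or everyone takes red" correction.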

Both colors are emboldened as the ratio gets closer to 100% either way

I think this entirely depends on how we write the reward function, actually. Which I think we might be disagreeing about.

I think this entirely depends on how we write the reward function, actually.

Well, I think it depends on how each person writes the reward function. The original poll skewed blue, so I don't think I'm typical-minding too much by thinking many people are genuine altruists, but it's impossible to tell without a real-life exercise.
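One way to write a per-person reward function, as a sketch: give each agent an altruism weight, and have them pick blue when the expected altruistic payoff outweighs the personal risk. The utility form and every number below are assumptions for illustration, not anything from the poll.

```python
# Per-person reward function sketch: u(blue) trades personal survival
# against altruistic value from possibly being the pivotal blue vote.
# The functional form and all numbers are illustrative assumptions.

def prefers_blue(altruism: float, p_blue_wins: float, p_pivotal: float) -> bool:
    u_red = 1.0                                   # red: certain survival
    u_blue = p_blue_wins + altruism * p_pivotal   # survival + lives saved
    return u_blue > u_red

print(prefers_blue(0.0, 0.4, 0.1))   # False: a pure egoist never picks blue
print(prefers_blue(8.0, 0.4, 0.1))   # True: 0.4 + 0.8 > 1.0
```

With altruism weights spread across a population, whether the dynamics are "emboldened toward 100%" depends entirely on that distribution, which seems to be the actual disagreement here.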

You mean continue pondering, i.e., after having realized that the dumb might vote blue, you realize that morally extreme, save-everyone nice grandmas will also realize this and vote blue, which means all those who want to save nice grandmas will also vote blue. So maybe you don't think nice naive grandmas are worth saving, but even so, perhaps people who want to save nice grandmas are worth saving. This is your idea of iteration.

What I mean is, take a simple set of rules and iterate over them a few times. For example:

  1. Some number of people are willing to risk death to save the people they think will choose blue.

  2. They choose blue. The number of people who choose blue according to this thought process grows.

  3. Repeat.

One iteration would be running through that process once.
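A minimal sketch of that loop, with invented numbers: seed the population with unconditional blue-choosers, let anyone whose personal threshold of expected blue votes is met join, and repeat until the count stops growing.

```python
# Minimal sketch of steps 1-3. Agent i joins blue once they expect at least
# thresholds[i] blue votes. The population and thresholds are invented.
thresholds = [0] * 50 + list(range(1, 951))   # 50 unconditional blue-choosers

expected_blue = 0
while True:
    blue = sum(1 for t in thresholds if t <= expected_blue)   # steps 1-2
    if blue == expected_blue:
        break                                  # fixed point: cascade stops
    expected_blue = blue                       # step 3: repeat with new count
print(expected_blue)                           # here the cascade reaches 1000
```

With these particular thresholds the cascade runs all the way to everyone; a gap in the thresholds (say, nobody willing to join between 100 and 600 expected votes) would stall it at the gap instead.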

The other person meant iteration as in "iterated prisoner's dilemma", i.e., repeat this poll again and again.

The iterated prisoner's dilemma is all part of one game. Repeating the poll again and again would not be the iterated prisoner's dilemma because the result of one poll would not affect the next at all--it would be part of a different game. If your overall survival depended on the cumulative result of all the polls, then that would be closer to actual iteration.

Besides that, I don't think they actually meant "repeat this poll again and again." What they said was:

But let's follow this thought. Okay, you may be fine if you iterate once. What if you iterate forever?

This clearly implies that they're using it the same way I am, and I obviously didn't mean "repeat this poll", so I don't think that's how they meant it either.