
Culture War Roundup for the week of January 27, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

It's an odd choice of example because quite a few people are killed annually trying to rescue children from bodies of water. It's not risk-free.

The original hypothetical from Peter Singer is a child drowning in a shallow pond, where you could just walk over and pull them out. It is designed to be a zero-risk situation.

I say the kid (or his parents) owes the rescuer a new suit, which short-circuits the whole thing.

And at the end of the day, this is the problem -- I haven't spent enough time reading the literature responding to it, so hopefully this critique is already well documented -- this is an un-trolley problem. It's designed so that there's absolutely no opportunity cost, but it's then used to imply that, therefore, the opportunity costs of other scenarios are handwavable.

If I'm walking by a pond where there's a drowning child, then in all likelihood rescuing that child is the most valuable thing I can do in that moment, and the ruin of a $1k suit that I'm already wearing is a sunk cost.

But this doesn't extend to prove that, for some future sum of fungible time and money, there's a single best thing to do, and thus a moral imperative to have it done.

As soon as we add any actual opportunity cost to saving that child or ruining the suit, the parsimony of the Aesop falls apart. Suppose I'm risking being late and waterlogged for a very demanding interview, nearly guaranteeing I won't get the job -- a job which will save many lives if done well, and which I am especially well qualified to do right.

At that moment, it just becomes a regular trolley problem, with a little bit of forecasting mixed in, and there's nothing really to glean from it.

If, alternatively, we take the most superficial lesson from the problem -- we should help others when we are able, at a cost to ourselves, even when we aren't physically near them -- then sure! It's a great reminder. And it has just about nothing to say about government spending on foreign aid.

Suppose I'm risking being late and waterlogged for a very demanding interview, nearly guaranteeing I won't get the job -- a job which will save many lives if done well, and which I am especially well qualified to do right.

You've added in the factor of saving multiple lives instead of one life (at the cost of a nice suit), which is saying something different from the original. The original means to point out the moral obviousness of saving the child at very little real cost.

To me, Singer's hypothetical points out that there are people you or I can easily help/save at a financial cost that we normally don't blink an eye at (the cost of a phone, a suit, a plane ride...), at no real danger to ourselves. This isn't a philosophical imperative, it is an observation. The observation becomes obvious when the person is next to you, but it still exists when the person in trouble is on another continent. Of course, most people are viscerally affected by someone in danger next to them, and generally have no reason to think of anyone on another continent. Singer's hypothetical attempts to address that disconnect.

At that moment, it just becomes a regular trolley problem

I used to have a lot of beef with the trolley problem because it is almost tautological in its obviousness. But reading some variations on it by the original author, and knowing she was a virtue ethicist, the major point I took away was that real-world moral decision-making is hard! The trolley problem is easy, but recognizing when you are faced with a "trolley problem" in real life, and figuring out which track is which, is difficult. Humans are concerned with ethics, but we have to practice to be discerning and virtuous. Ethics are not (just) a math problem.

nothing to say about government spending on foreign aid.

I don't know if that follows from what you said. I can see why foreign aid isn't a consideration in a vacuum. But I would think that if helping people who are far away at a low cost to ourselves is considered a good thing, it is a consideration that a society and its government can make on a grand scale.

Suppose I'm risking being late and waterlogged for a very demanding interview, nearly guaranteeing I won't get the job -- a job which will save many lives if done well, and which I am especially well qualified to do right.

You've added in the factor of saving multiple lives instead of one life (at the cost of a nice suit), which is saying something different from the original. The original means to point out the moral obviousness of saving the child at very little real cost.

Yes, I know; my point was in agreement with yours. That's why I said the original is an un-trolley problem. My point in describing some additional opportunity cost was exactly to illustrate that opportunity cost ruins the thought experiment.

And that's why it has very little to say about foreign aid or most other real-world charitable activities that are abstracted from time and place: outside of immediate and present opportunities (like saving a drowning child right in front of you), opportunity cost does have to be considered.

And as you've agreed, it becomes different from the thought experiment, and thus the thought experiment is no longer relevant.