This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Unrelated to the central topic, but:
Scott just recently wrote a post about this.
While I recognize that you can construct hypothetical examples where a utilitarian is forced to agree to something unpleasant, or imagine a lazy utilitarian who makes up half-baked arguments for why whatever they want to do is utilitarian-optimal and that's why they get to violate strong heuristics/taboos/norms, I think those are thought experiments a lot more than they are descriptions of reality.
In reality, followers of other moral systems (or of no coherently named moral system) seem to me to make up lazy rationalizations for why to do whatever they want to do a lot more often than utilitarians, and are a lot easier to force into distasteful hypotheticals to boot.
The fact that actions have long-term consequences like 'all trust and honor across society breaks down' is not separate from utilitarianism, it's a part of the calculation, and that's why most utilitarians I talk to think about that stuff a lot more than most other people I know, and end up sticking to broad heuristics in most real-world cases.
We have noticed the skulls, as it were, and I think other moral systems that don't require you to think carefully, make explicit calculations, and use your own best judgement under uncertainty fail to teach their adherents the same carefulness. In practice, I think utilitarians end up doing better on average - obviously not perfect, but better than average.
I think something like utilitarianism seems to do a better job of being an all-encompassing theory of moral action.
Do you think in practice, it might be easier to rationalize?
A straight-up moral prohibition of adultery leads to less adultery than "well, do whatever maximizes pleasure," I would think, even if the latter should come to the same conclusion once you consider second-order effects. It's just too easy to do motivated reasoning.
Edit: I could have sworn I came across something at some point about the skulls quote. But while looking a little, I came across these comments from @DaseindustriesLtd.
So I am honestly making the maybe-crazy prediction that no, the average utilitarian will actually commit less adultery than the average person who follows a religion that says 'thou shalt not commit adultery' or the average person with some type of deontology/virtue ethics which strongly says 'cheating is bad'. I'm not insanely confident about this or anything, could easily be wrong, but I'd bet $50 on it (if I were talking to someone at a bar, I mean; I'm not going to go to the hassle of setting up an anonymous online exchange for that amount).
Now, caveats.
First, what do we mean by 'adultery'? I do think that utilitarians are more likely to negotiate open relationships/polyamory, which I don't consider adulterous. I really mean cheating, in the sense of violating explicit or very obviously implied agreements about the nature of the relationship. If utilitarians gain an advantage from more permissive relationships, I consider that a fairly won victory.
Second, 'average person'. I'm counting everyone who would say that they are Christian (or of other religions with similar prohibitions) regardless of how devout or observant they are. I'm counting everyone who would say 'yeah adultery/cheating is obviously bad/wrong/evil' but doesn't give an explicitly utilitarian accounting of why that is. I do think this means that the average person in that group will be less interested in moral quandaries, less thoughtful about moral issues, and less concerned with matching their morals to their actions than the average utilitarian. I again consider that a fairly won victory, because utilitarianism involves learning to make those judgements for yourself instead of relying on handed-down maxims or simplistic rules, so I think that higher level of average observance is part of its strength. But you could argue that it's popular among academic weirdos who are a better starting stock, and therefore not a fair comparison group, if you wanted to.
It’s not a question of “agree to something unpleasant.” The problem is that, because there are no lines that may not be crossed, almost any act can become thinkable given the right set of circumstances. Me killing you to save others is thinkable provided that the others are either more valuable or there are more of them.
Ok, but again, I don't actually think that non-utilitarians are better about avoiding 'unthinkable trade-offs'.
Like, some number of Christians or deontologists or virtue ethicists or whatever will in practice, in real life, trade some lives for others, either implicitly through policy or explicitly when faced with the rare real-world situations where that decision comes up.
Like, they don't actually just halt, stop, and catch fire in those situations when they encounter something their morality says is 'unthinkable'; they just sort of make a decision, like everyone does, like normal.
And in those types of situations, I would expect utilitarians to mostly make better decisions and better trades, because they're allowed to think about and consider and make plans for those situations before encountering them, and just generally because of the habit of thinking about when and how to make moral tradeoffs.
I don't know if you have a more concrete real-world example you'd like to frame this under; I'm kind of at a loss for thinking of real-world instances besides things like 'risk your platoon to save one wounded soldier', which a. I don't know if that ever actually happens outside movies, b. I don't know what normal people actually do in that situation statistically, and c. I expect utilitarians to have no trouble applying heuristics like 'having faith in your comrades every day is more valuable than protecting the platoon the once-every-20-years this actually comes up' or w/e.
I’m not saying it never comes up, but if I’m a deontologist, and I subscribe to the ideas in the Declaration of Independence (all men are created equal, they have inalienable rights to Life, Liberty, and the Pursuit of Happiness), then there are a lot of things that at a minimum it would be very, very hard to get me to do. Summary detention of a bunch of people isn’t something that should, in my view, be on the table. There might be some extreme cases where you have little choice, but getting there isn’t going to be easy, and it would only happen when there are no other options.
The problem with utilitarian thinking is that those very bright lines aren’t there as a check on behavior. I can do anything I want, with the only caveat being that in my calculations the results are better than whatever I assume would happen if I didn’t do that, and that calculation depends on what things I put more weight on, which parts of society I judge to be more important, or whom I judge more important. There’s no reason why I couldn’t discount the welfare of the poor, or of minorities, or women, or gingers. There’s also no reason I can’t choose the welfare of the elites, the majority ethnic group, men, or bald guys as more important than everyone else.
I think a better way to understand the fundamental conflict is to think less in terms of "unthinkable trade-offs" and more in terms of "necessary evils." More pointedly, utilitarianism as it is typically advocated for in rationalist spaces does not seem to handle such scenarios gracefully. Instead of being able to acknowledge that [action] is bad but [action] was also necessary/understandable given the circumstances, it seems to default to the position that [action] was necessary/understandable given the circumstances, ergo [action] cannot be bad and must have actually been good or at least neutral.
I see Scott's defense of Fauci in this post here and his earlier posts on Kolmogorov Complicity and the Virtue of Silence as classic examples of the problem: sure, sometimes betraying the public trust is the rational choice, but by betraying the public trust you have demonstrated yourself to be untrustworthy and can no longer honestly claim to be "the sort of person who cooperates in prisoners' dilemmas", because you aren't; you're a defector.
That's just a semantics question over what "bad" means. You can say "hurting someone in self-defense is always bad, but sometimes it is the best option" or you can say "hurting someone in self-defense is not bad" and you're really saying the same thing.
Yes, and at the same time it also illustrates the fundamental problem with utilitarianism, namely that it is the ethical framework that makes it easiest to excuse one's own negative behavior.