This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I agree with pretty much all of this, though I’d add the autobiographical aside that my views on the death penalty have gone from strongly opposed on principle a decade or so ago to weakly opposed on procedure today. Extrapolating my direction of travel, I can see myself overcoming my procedural scruples in time.
That said, it’s quite puzzling to me, from a rationality and decision-theory standpoint, how to incorporate these kinds of predicted value shifts into one’s views. For example, imagine I anticipate becoming significantly wealthier next year, and I observe that previously when I’ve become wealthier my views on tax policy have become more libertarian. What’s the rational move here? Should I try to fight against this anticipated value shift? Should I begin incorporating it now? Should I say what will be will be, and just wait for it to happen? Should I actively try to avoid becoming wealthier because that will predictably compromise my values?
Related to some AI discussions around final vs instrumental goals, and under what circumstances it can be rational to consent to a policy that will shift one’s terminal values.
Isn't this the problem that Rawls' Veil of Ignorance is designed to solve?
Granted, it is normally offered to justify a socialist solution to problems. The Veil of Ignorance is offered to the rich man to say: imagine you were poor, wouldn't you prefer a socialist system?
But there's nothing in the mechanics of the Veil of Ignorance that prevents it from being used the opposite way: imagine you were rich, would you dislike any of the redistributive policies you currently advocate for?
Granted, it suffers from the flaw of many philosophical tools, in that it relies on "then think really hard about it" as the final step. But it's the clear solution to these value shifts: try to imagine a system of values that would appeal to you regardless of your position.
I don't at all agree with Rawls, but I think the point is that there are far fewer rich than poor.
There is: the mechanic is "would you hate being poor in a dog-eat-dog world more than you'd hate being taxed a lot as a rich man?".
@anon_
Sure, but that's just a matter of percentages, easily disposed of. Rawls would tell you that some degree of redistribution is optimal, but the veil can still justify capitalism on a "more goods produced" logic, with the level of redistribution set to maximize everyone's expected happiness. That's a logic that holds from behind a veil of ignorance. What one shouldn't do within Rawls' paradigm is undertake policies that are not overall utility-maximizing.
Nor is the mere ratio of poor to rich enough to make anything justifiable. Even if I'm neutral between being one or the other, I can still judge that some level of "fans harassing a famous person" isn't morally acceptable, for example.
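To make the "set the level of redistribution to maximize everyone's happiness" step concrete, here is a minimal sketch under purely invented assumptions (a 90/10 chance of being poor vs. rich, log utility, and a quadratic deadweight loss from taxation). It illustrates the veil-of-ignorance calculation in general; none of the numbers come from Rawls or the comment above.

```python
# Hypothetical toy model (all numbers invented for illustration): pick the
# redistribution rate that maximizes expected utility from behind the veil,
# assuming log utility (diminishing returns to income) and a deadweight loss
# that grows with the tax rate, so heavier redistribution shrinks total output.
import math

P_POOR, P_RICH = 0.9, 0.1           # chance of being born poor vs. rich
BASE_POOR, BASE_RICH = 10.0, 200.0  # pre-tax incomes, arbitrary units

def expected_utility(t: float) -> float:
    """Expected log-utility behind the veil at redistribution rate t (0 <= t < 1)."""
    tax_paid = t * BASE_RICH              # each rich person pays this much
    deadweight = 0.5 * t * t * BASE_RICH  # quadratic efficiency loss of taxation
    rich_income = BASE_RICH - tax_paid
    # Net revenue per capita is spread over the poor majority.
    poor_income = BASE_POOR + (tax_paid - deadweight) * (P_RICH / P_POOR)
    return P_POOR * math.log(poor_income) + P_RICH * math.log(rich_income)

# Grid search over redistribution rates.
best_t = max((t / 100 for t in range(100)), key=expected_utility)
print(f"utility-maximizing redistribution rate ~ {best_t:.2f}")
```

With these made-up numbers the grid search lands on a partial rate rather than 0 or ~1: redistribution pays off until the efficiency loss and the hit to the (unlikely) rich branch outweigh the gain to the much more likely poor branch, which is the "some but not total redistribution" logic above.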
The kind of values shift I have in mind is one that is indifferent to one's position, i.e., not just filling in the variable according to one's position within the system. For example, imagine you have a choice of three college courses you can take: one on libertarianism, one on Marxism, and one on library research. The first two are probably going to be more interesting, but you're also aware that they're taught by brilliant scholars of the relevant political persuasion, and you'll be exposed to rationally persuasive evidence in support of that position. Consequently, you know that if you take the libertarianism course you'll come away more libertarian; if you take the Marxism course you'll come away more Marxist; and if you take the library research course you'll come away knowing more about libraries. Assuming the first two courses would indeed involve a values transition, under what circumstances might it be rational to undergo it?
If you really knew in advance that the courses contain rationally persuasive evidence for X, you should believe X immediately, even without taking the courses, based on your knowledge that the rationally persuasive evidence exists.
I doubt that you know that the courses contain rationally persuasive evidence for X. What you do know is that after taking such courses, you feel that you have been rationally persuaded. But being irrationally persuaded feels like being rationally persuaded.
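A hedged way to formalize the point above, assuming we model "taking the course" as receiving evidence E bearing on a claim X, with probabilistically coherent credences (the symbols X and E are mine, not the commenter's):

$$P(X) \;=\; \sum_{e} P(E = e)\, P(X \mid E = e),$$

i.e., your current credence already equals the expected value of your post-course credence (conservation of expected evidence), so you cannot coherently expect the course to raise P(X) on average. A confidently predicted one-directional shift is therefore better explained as non-evidential persuasion, which is the second paragraph's point.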
On the off-chance you aren't aware of this already, a similar thought experiment is discussed in Parfit's "Reasons and Persons" and Korsgaard's "Self-Constitution".
I'm not sure it's ever rational to choose which values you will be inculcated with and then forget all about the choice. That is, if you take the course on Marxism, you should later remember that fact and keep it in mind when making value judgments.
Nor am I sure such a thing is entirely possible. I know I spent years of my life trying to shop for a religion that would inculcate values that I liked, only to realize that it was impossible to really believe in a religion learned under those circumstances.
I think it's the sign of a particularly self-aware mind. The spectrum would go like this:
Personally I have always had a hard time pinning down my actual beliefs. I have the habit of playing devil's advocate in the extreme, defending positions whenever I see a hint that they might actually be defensible. So I would probably incorporate the anticipated value shift, even if it leaves me on shakier ground for now.
I'd go one step further. What he expressed is closer to
I first saw this taxonomy on the perhaps slightly unfortunately named Hoe Math YouTube channel. Is that where you caught it too, or is it a more established framework?
No, that's just how I schematize it personally. Thanks for the link, though, that's a very interesting video!