
Culture War Roundup for the week of December 4, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


One thing that's always bugged me about progressivism, and especially EA, is that despite all their claims of being empathetic and humanistic, they completely ignore the human. They are, ironically, the paperclip maximizers of philanthropy.

Once again, for those who might just be joining us: Utilitarianism is an inhuman (and dare I say it, Evil) ideology that is fundamentally incompatible with human flourishing. Utilitarians deciding to ignore the human cost of a policy in order to maximize some abstract value, be it "utility" or "paperclips," is not ironic, unfortunate, or unintentional. It is by design.

"Effective altruism" has never been about altruism.

I will admit I consider myself a 'skeptical utilitarian' (I made this term up, or, if I didn't, I am unfamiliar with the other usage) in that I have utilitarian leanings in terms of how to reason about morality but reject unpalatable extreme extrapolations thereof, on 'eulering' and 'epistemic learned helplessness' grounds. Still, I have always found casual swipes at utilitarianism of the form 'see, it actually leads to bad things' to be weak. Clearly the goal is to lead to good things, broadly, and if it seems to lead to a bad thing, then that probably means you should try again and fully consider the externalities, etc. I don't see a good reason why 'utility' can't be a proxy measure for human flourishing, and I would personally prefer a form of utilitarianism organized in just such a way.

Clearly the goal is to lead to good things, broadly, and if it seems to lead to a bad thing, then that probably means you should try again and fully consider the externalities, etc.

I can declare that the "goal" of a live grenade is to be delicious candy for children, but that won't make it so. The argument against Utilitarianism is 1) that it can't actually do what it aims to do, because "utilitarian calculus" is about as valid as "turnip plus potato equals squash", and 2) that when it inevitably fails, it tends to fail very, very badly.

"Fully considering the externalities" is straightforwardly impossible, the output it generates is unfalsifiable, and it is tailor-made to justify one's own biases.

I don't see a good reason why 'utility' can't be a proxy measure for human flourishing

Because "utility" can't be rigorously measured, quantified, or verified in any way, even theoretically, and the whole system is built on the premise that it can be.

I should have known better than to comment on this topic here; I am not very rigorous or deep in my metaphysical beliefs.

Let me try and clarify my internal view, and if you have the time, you can explain what I am doing wrong.

So, I view my own morality and the morality of my society through a largely consequentialist lens, understanding that my ability to fully understand consequences decays rapidly with time and is never perfect. I view morality as a changing thing that adapts and morphs with new technology, both social and physical. I find the 'concept' of 'utilitarianism' a useful jumping-off point for thinking about morality. Obviously this interacts with my own biases; honestly, I am not really sure what it would even mean for a person to think about something and not have that problem. I do not view 'utilitarianism' as a solved, or even solvable, problem, but rather as a never-ending corrective process.

For example, I am not currently vegan or vegetarian, but I also do not like animal suffering, and I think a lot about this disconnect. Ideally I would like a world that allows me to enjoy all the perks of animal husbandry while reducing as much animal suffering as possible. I think the effort to reduce the amount of suffering in factory farming reflects a 'utilitarian' impulse, but that does not mean I would agree with any particular reality those intuitions suggest. If, for example, reducing animal suffering made it impossible for a lot of people to afford meat or eggs, then that also seems bad, and is another part of the problem to keep working on or striving to solve.

My biases manifest in a number of ways: I lean towards observational data in terms of what a better or worse world would look like, so if a particular religion espoused the idea that animals enjoy animal husbandry and/or can only go to heaven if eaten by humans, I would not factor that into my considerations. I also tend to think suffering is bad and happiness and fulfilment/satisfaction are good, etc.

I guess I view 'morality' as a system or framework that I use to try and evaluate my own actions and the actions of others. I am reliant on the persuasiveness of my arguments in favor of my preferred outcomes to drive other people (and sometimes myself) to respect or adopt a 'morality' similar to my ideals.

Well said.

For what it's worth, I largely agree. To be more blunt than you: I'm both a moral relativist and a moral chauvinist. I make no claim that my sense of morality is objective, and go so far as to say that there's no such thing, and not a single good reason to imagine there could be: morality cannot be disentangled from the structure of an individual observer and forced to fit all others. The closest you can get is the evolutionarily/game-theoretically conserved core, such as a preference for fairness and so on, which can be seen in just about any animal smart enough to hold those concepts. That's still not "objective". That doesn't stop me from thinking that mine is superior and ought to be promulgated. It's sometimes tempting to claim otherwise, but I largely refrain from doing so. I don't deny the rights of others to make such a claim about theirs, to the extent that I approve of free speech.

Of course, I personally find that I can decompose my value judgements and derive simpler underlying laws/heuristics that explain them, ones which often cover new and complicated situations; I'm lucky enough to have yet to find a case I can't resolve in that manner. I can tell that I have principles instead of a lookup table because following them often involves grudgingly accepting things I dislike, since to do otherwise would conflict with more fundamental principles I prefer to hold over mere dislike. That's why I'm OK with people I despise speaking, after all, leaving aside that I have no way to stop them.

As for animal welfare, I simply do not care. It's a fundamental values difference. I don't get anything out of torturing or killing subhuman animals, but I also have nothing against those who do, except to the extent that cultural pressures imply that those who shirk them have other things wrong with them, like psychopathy. As discussed in an older comment, at one point in time most people enjoyed watching dog fights or throwing rocks at cats; there was little or nothing in the act itself that was inherently psychopathic in terms of harming others.

To illustrate, imagine a society that declares shaving one's head to be a clear sign of Nazi affiliation. There are plenty of normal people who have some level of desire to do so, be it for stylistic preferences or because they're balding. But since such an urge is overpowered by a desire not to be mistakenly labeled as a Nazi, they refrain, while actual Nazis don't.

Congratulations, you managed to establish that shaving one's head is 99% sensitive and specific for National Socialist tendencies.

You can see this kind of social dynamic and purity spiraling all over the place, and I think animal welfare is one such case; so is not calling people fags or retarded.

I do not value the elimination of factory farming for its own sake, or for the sake of the animals, but I will happily accept something like vegetarian meat or, better yet, lab-grown meat over it, if, and only if, it's superior to factory-farmed or slaughtered meat in terms of taste or price, ideally both. That's what it means to be truly neutral between them.

Only if you assume "utility" is decoupled from human flourishing. Which it shouldn't be.

"Effective altruism" has never been about altruism.

Oh? What's it about, then? Bonus points if your criticism applies specifically to EA, and not just to any action that might vaguely leave room for self-interest.

Only if you assume "utility" is decoupled from human flourishing. Which it shouldn't be.

And yet, it very obviously is.

Oh? What's it about, then?

Grift. Silicon Valley sociopaths trying to rebrand slacktivism (that is, "earning to give" and "raising awareness") as a public good rather than what it actually is. Funneling funds into their own pet projects. *cough* AI Research *cough*

As I've discussed before, back in the late 2013/early 2014 timeframe I approached a few of the more prominent EA types and offered my services. I had contacts in the DoD, MSF, and multiple eastern African governments. I could have actually helped with the nitty-gritty of getting bed-nets and water-filters distributed to people. The response I got was that they weren't really interested in logistics so much as they were interested in raising money.

This might not be a failing unique to Effective Altruism, but I do think it's enough to condemn them.

The whole vegan menu fiasco later that year only confirmed it.