
Culture War Roundup for the week of December 4, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Scott Alexander has recently argued in favor of Effective Altruism following the latest scandal, in which effective altruists tried to oust Sam Altman from OpenAI.

His argument starts by focusing on how different factions attack EA from contradictory perspectives: those on the right call them woke, while those on the left call them fascists and white supremacists. The point seems to be that they will be attacked by all sides no matter what, so we shouldn't take such criticisms seriously. He then mostly focuses on an estimated 200,000 lives saved in the developing world.

My problem with this is that it obscures something that isn't a mystery: EA's politics align much more closely with the Democratic establishment than with the right, and there isn't any substantial confrontation of what that means.

According to my brief research, and to claims I found in a 2022 Effective Altruism forum discussion in which he participated, the biggest donor to Effective Altruism is Asana CEO Dustin Moskovitz.

His company, Asana, contributed $45 million in the 2020 election, and he also contributed millions to the Future Forward PAC:

https://www.opensecrets.org/2020-presidential-race/joe-biden/contributors?id=N00001669
https://www.opensecrets.org/news/2020/10/pro-biden-super-pac-darkmon/
https://www.cnbc.com/2020/11/02/tech-billionaire-2020-election-donations-final-tally.html
https://bluetent.us/articles/campaigns-elections/dustin-moskovitz-cari-tuna-democratic-donor-2020/

If one looks at Open Philanthropy or the EA forum and searches for controversial cultural issues, there is occasionally some small dissent, but for the most part they follow the liberal party line.

Let's look at Open Philanthropy, an EA organization and Dustin Moskovitz's organization. Scott certainly wants to give EA and Open Philanthropy credit for promoting YIMBY policies.

However, this organization has also funded decriminalization and pro-migration policies:

https://www.openphilanthropy.org/focus/criminal-justice-reform/
https://www.openphilanthropy.org/focus/immigration-policy/

I wonder whether the well-funded caravans of migrants we see in some areas of the world have, to some extent, to do with EA-related funding.

Recently there was a mini EA scandal in which one individual had expressed HBD views in the past; this was made into a thing, and he was condemned by many in the movement, though not entirely unanimously. https://forum.effectivealtruism.org/posts/8zLwD862MRGZTzs8k/a-personal-response-to-nick-bostrom-s-apology-for-an-old

https://forum.effectivealtruism.org/posts/kuqgJDPF6nfscSZsZ/thread-for-discussing-bostrom-s-email-and-apology

Basically, this individual wrote an email 26 years ago that used naughty language to make the point that you should use less offensive language when arguing for race realism.

Then he apologized due to pressure and argued:

What are my actual views? I do think that provocative communication styles have a place—but not like this! I also think that it is deeply unfair that unequal access to education, nutrients, and basic healthcare leads to inequality in social outcomes, including sometimes disparities in skills and cognitive capacity. This is a huge moral travesty that we should not paper over or downplay. Much of my personal charitable giving over the years has gone to fighting exactly this problem: I’ve given many thousands of pounds to organizations including to the SCI Foundation, GiveDirectly, the Black Health Alliance, the Iodine Global Network, BasicNeeds, and the Christian Blind Mission.

Then there is OpenAI and ChatGPT, and effective altruists have been influential at OpenAI. ChatGPT has a liberal bias. https://www.foxnews.com/media/chatgpt-faces-mounting-accusations-woke-liberal-bias

Another thing to observe is the demographics of effective altruists.

Only 0.9% are right wing and 2.5% center right, with the majority being on the left: 40% identify as center left and 32% as left. But that is self-identification: Biden could be identified by some as center left and by others, including myself, as far left. They are also 46% vegan, and 85.9% are atheists.

https://rethinkpriorities.org/publications/eas2019-community-demographics-characteristics

I have never encountered a group with such a small representation of right-wingers that is actually fair, when promoting a political agenda, toward either the right wing or groups seen as associated with the right. Yet if you search their forum, effective altruists are much more concerned about the lack of racial and ethnic diversity than about the lack of ideological diversity.

Climate change and veganism are two issues that could well lead to hardcore authoritarian policies and restrictions. Considering the demographics of EA, and the fact that Peter Singer is an important figure in it and helped coin the term, I do wonder whether EA influence on those issues would run toward imposing policies on us. In the moral framing of animal liberation activists like Singer we see a moral urgency. As with all identity movements, elevating one group, such as animals, ends up reducing the position of another group, such as humans, or those who aren't vegans.

The issue is that the networks reinforced through EA might already have the promotion of their political agenda as part of their mission. And these networks, which developed in part through EA and put like-minded ideologues together to organize, can expand even further to promote that political agenda outside the EA banner.

It does seem that at least a few of the people involved with effective altruism think that it fell victim to its coastal college demographics. https://www.fromthenew.world/p/what-the-hell-happened-to-effective

My other conclusion, related to the OpenAI incident as well, is that these people's conviction that they are the ones who will put humanity first will lead them to oust others and attempt to grab more power in the future too. And when they do, will they ever relinquish it?

Scott Alexander himself argued that putting humanity first is the priority, and he had some faith in them thinking rationally when they tried to oust Sam Altman, even though Altman had invited them in. He might not necessarily agree with their action, but he sympathizes with the motive. https://twitter.com/slatestarcodex/status/1726132072031641853#m

That this action was dishonorable matters because, as with Sam Bankman-Fried, it continues the pattern of important ethical issues being pushed aside under the idea that effective altruists know best.

This means that Sam Altman won't be the last. It also means we have a movement very susceptible to the same problems as authoritarian far-left movements in general: extreme self-confidence in their own vision, and a will to power. This, in addition to the whole issue of the road to hell being paved with good intentions, inevitably attracts the power-hungry as well.

There does seem to be an important side of it concerned with donating in more unobjectionable ways, but in general effective altruism isn't separate from a political agenda that fits a particular political tribe. That agenda should be judged on its own merits, without the 200,000 lives saved in the developing world being accepted as an adequate answer for policies that affect the developed world. The short version of all this: if you have a problem with leftist/far-left NGOs, you should consider the effective altruism movement and some of its key players to be contributing in the same direction.

this individual

"this individual"-ing Nick Bostrom is a hilarious way of wiping away his work in promoting effective altruism and longtermism.

One thing that's always bugged me about progressivism, and especially EA, is that despite all their claims of being empathetic and humanistic, they completely ignore the human. They are, ironically, the paperclip maximizers of philanthropy.

The argument is that despite some of the questionable things EA has been caught up in lately, they've saved 200 thousand lives! But did they save good lives? What have they saved, really? More mouths to feed? Doctors and lawyers? Someone who cares about humanity would want to ask these questions. A paperclip maximizer that discounts a person's humanity entirely, and just sees each life as some widget to maximize the number of, would not.

The purpose of empathy is to be able to put yourself in someone else's shoes, to understand their feelings. Except, to do that you have to have some level of understanding of how they function, some mental model of their mind; otherwise you are simply projecting. It's easy to just imagine what you'd feel like if you were in Palestine or Israel, etc. Except that isn't empathy. Even just listening to what a person says isn't truly empathy. If I were an alcoholic and I said I wanted a drink, giving me one might seem a nice thing to do to someone who has no knowledge of me, but clearly it would not be. I'm not sure what it even means to have empathy for someone you don't know. I'm not sure it's possible. What is it really that you are feeling? Do you believe people are all the same, with the same wants? Same needs? Same values? It's such a dim view of people and of the world.

I suppose some people do. "We're all human" is something you'll hear espoused by this ideology, but that is literally the least you can have in common with another person. Trying to apply it to any other human interaction is instantly ridiculous. You wouldn't apply that logic anywhere else in life: you don't hire someone just because they're human, and you don't befriend someone, care about someone, or hate someone for it either. It's basically an open admission that you have nothing convincing to say. Even someone forced to compliment their worst enemy would manage to ad-lib something more convincing than "he's human."

Anyone who has had relationships with other humans, so basically everyone, knows how complicated it is to actually know someone. You can have spent years living with a partner and still be completely caught off guard when your mental model goes awry and your attempt at empathy falls completely flat. The idea that some ideological group is more moral or more caring because of the sheer number of lives they've saved completely discredits and belittles one of the pillars of being human: getting to know each other, socializing, learning friend from foe. It discounts their humanity itself, denying that it's even necessary to get to know or understand someone before you can help them. Your wants and needs don't matter; you are a widget, you need x calories and y oxygen to continue existing, and I will supply these needs. Such altruism, wow.

Looking around at social media and world events, I can't help but wonder if this is some major glitch in human psychology in the digital age. Too many strangers, too much opportunity for "selflessness." So many people caught up in an empty and self-serving empathy that has no imagination for others. Meanwhile, people with normal empathy are dismissed because they aren't as "selfless" as the newer movements: spending time with and focusing on people who share your values isn't altruistic, because if they share your values then you are less selfless than the progressive who cares about the stranger. (Not to mention the Bay Area tech bro who managed to save 0.0345 persons per dollar spent, blowing away the nearest tech bro competitor, who only saved 0.0321.)

This logic seems mad, though. Taken to its extreme, the most altruistic move would be to help someone who shares none of your values, and since altruism is a core value, you should be exclusively helping the least altruistic of people, as that is the most selfless thing you could do. Of course this is obviously ridiculous and self-defeating (like the LGBT groups supporting Hamas).

More cynically, I think this sort of caring is just a way to whitewash your past wrongs. It's PR-maximizing: spend x dollars and get the biggest number you can put next to your shady Bay Area tech movement, which is increasingly under society's microscope given the immense power things like social networks and AI give your group. If you really want to help others, you need to understand them, and that means spending time with people, not with concepts. If you're lucky, you might eventually find a few people you understand well enough that, more often than not, your actions are positive and beneficial to them. Congratulations, you have now invented the family and the traditional community.

This logic seems mad, though. Taken to its extreme, the most altruistic move would be to help someone who shares none of your values, and since altruism is a core value, you should be exclusively helping the least altruistic of people, as that is the most selfless thing you could do. Of course this is obviously ridiculous and self-defeating (like the LGBT groups supporting Hamas).

That's a misunderstanding. You're implicitly applying a virtue/signaling framing to a consequentialist policy. You should be supporting the least altruistic people iff you want to signal the depth of your commitment to altruism to your peergroup. EA isn't trying to "maximize the depth of the virtue of altruism", it's trying to "maximize the rating produced by the altruism principle." Adherence is "capped" at one - when you already do the maximum good for the greatest number, you cannot adhere even harder by diverging from this concept to avoid also benefitting non-altruist principles. That is, EA does not at all penalize you for your actions also having auxiliary benefits to yourself or your peergroup, if that happens to be the optimal path. Also, utilitarianism is in fact allowed to recognize second-order consequences. That's why "earning to give" and 80,000 Hours exist - help some already pretty privileged people today, and they can probably help a lot of others tomorrow.

What makes EA EA as opposed to traditional A is exactly that it's supposed to care more about outcome rating than virtuous appearance!
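(A toy sketch of the two decision rules being contrasted here, with invented numbers - nothing from the EA literature, just an illustration: the consequentialist rule ignores auxiliary self-benefit entirely, while the signaling rule penalizes it, which is exactly what drags the signaler toward helping the least altruistic people.)

```python
# Invented numbers contrasting two decision rules.
actions = {
    # action: (good_done_for_others, benefit_to_self_and_peers)
    "fund proven malaria nets":         (100, 5),
    "fund a friend's promising lab":    (90, 40),
    "help the least altruistic people": (55, 0),
}

# Consequentialist rule: only the outcome counts; auxiliary benefits
# to yourself or your peer group are neither rewarded nor penalized.
best_outcome = max(actions, key=lambda a: actions[a][0])

# Signaling rule: self-benefit counts against you, so the optimum
# drifts toward conspicuous self-sacrifice.
best_signal = max(actions, key=lambda a: actions[a][0] - 10 * actions[a][1])

print(best_outcome)  # fund proven malaria nets
print(best_signal)   # help the least altruistic people
```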

I think this is a valid criticism. EA has set itself up as a bit of an inherent no-true-Scotsman, though: you could really call it 'True Altruism' if you wanted, and it would have a similar connotation, even if it isn't exactly the same (unless you're a consequentialist). There is always this "well, that's not real EA, because it's not actually effective, and the title says it's effective" baked in. I don't see how it's possible to demonstrate that what you are doing is effective without very abstract numbers that are too confounded, and even then still very short-term focused. Add to that that my real-world experience is generally more like some of the other replies downthread: instant claims of moral superiority and righteousness, with holier-than-thou anger that anyone could question whether it's right to save a life, and it's not much of a stretch to think that virtue signalling is often involved. I tend to prefer openly self-interested ideologies for this reason; they're just more trustworthy.

I mean, sure, and you'd say "well, all altruism is effective, everyone is genuinely trying to help out as well as they can"; I just don't think that's the case at all. EA as a name is an implicit insult to non-E A, and the insult is... kinda deserved. Rationality, or rational fiction, has the same issue. As Max0r said in his DOOM Eternal review, regarding the tightly focused combat system:

"But Max0r," I hear you thinking. "That's every game ever!" Yes! Every good game ever.

A tight focus on effectiveness can assume a quality of its own - that sort of behavior can be surprisingly rare, especially if everyone finds it too awkward to consider or admit that quality differences, possibly massive differences, exist.

help some already pretty privileged people today, and they can probably help a lot of others tomorrow.

The "probably" is doing a lot of work there. It was great when they were promoting mosquito nets. But now they're buying manor houses and getting knotted up about paperclip maximisers, it's fair to ask "so what all lives are being saved, here, exactly?"

As a person who gets knotted up about paperclip maximizers, let me just note here for future reference that we were always EA. You can find "effective charities for AI" all the way back in the early GiveWell recommendations. Mosquito nets is what we recommend to those strange people who for some reason don't see the pending apocalypse coming.

And of course, since you're giving me such a perfect setup:

so what, all lives are being saved here exactly?

Exactly. :P

I'm not an Effective Altruist. I do not particularly care to extend my circle of moral concern to most people, let alone pigs and shrimp.

Even then, this argument is nonsensical if you care to look at the behavior and policies advocated by EAs.

If all they cared about was the number of human lives saved or extended, they'd be trying to ban birth control and to push as many people as possible across the tiny threshold separating a life worth living from one that isn't, to the extent that just slicing it at neutral isn't an option. If you think they don't care about quality of life, they do; they've got QALY and DALY figures to prove it.

My preferences for looking exclusively after the welfare of those I personally care about or align with are much the same as yours, but I respect EAs for living up to their goals in as robustly empirical a manner as they can.

What have they saved, really? More mouths to feed? Doctors and lawyers?

Every altruistic act of significance saves more "mouths to feed". Certainly, while I'm not averse to the idea, most doctors pay lip service to the notion that they must treat all equally, to the extent that they'd give CPR to Hitler if he showed up before them. Me? I'd shoot him, but equal treatment is the nominal aim of the profession.

I won't give them any money. I won't identify with them. But I for one am glad they exist, and I wish more people would give a shit about making sure the interventions they're trying even work, let alone work best out of the available options.

Your post puts my hesitation about supporting EA into better words than I possibly can write. I've always found it... cheating, kinda?... that the entire premise of EA seems to just be brute-forcing morality and ethics by shoving as many zeroes into a number as possible. And that's how we get what you have described here, where it's easy to say that you've saved 200,000 or however many lives, but then people don't interrogate that result beyond that. People just see the number and go "wow that's a lot of zeroes so it must be good".

I guess what I'm wondering is whether much focus is put into long-term solutions (and I don't mean "longtermism," like figuring out how to get humans to colonize the stars or whatever to maximize the number of future lives) rather than just whatever saves the most lives in the short term. For example, I was always under the impression that you can't just brute-force solving world hunger by confiscating all the world's billionaires' wealth (ignoring the fact that much of it isn't liquid and actually kinda doesn't exist, and if you confiscated it then most of the wealth would just go away) and funneling it into programs to distribute food to starving populaces (ignoring the fact that this would outcompete and devastate local markets, etc.). Sooner or later, their governments would stop you, because it turns out that the reason they're starving in the first place is that their government wants them to, and there are plenty of things the government could do to get the country in a position to feed them, but it doesn't, for various reasons. So there's a good short-term solution in just distributing as much as you can, but an actual long-term solution requires some change to the government, and a lot of focus seems to be put on the short-term brute-force way of doing things.

Downthread, @FirmWeird posits a similar scenario where the population is way beyond carrying capacity. What do you do? Feeding them makes the line on a graph go up, if you ignore that this means you'll need even more in the future (induced demand). Not feeding them makes the line go down, but it results in a more stable equilibrium. "Just shove in as many zeroes as you can" ignores plenty of side effects that may or may not be desirable, almost like a paperclip maximizer.

People just see the number and go "wow that's a lot of zeroes so it must be good".

In theory you can fix this by counting QALYs instead of lives saved.

Of course, counting QALYs meaningfully isn't easy, but it is easy to come up with bad ways to count them and hard to prove them bad.
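(A minimal sketch of the difference, with made-up quality weights - the hard part in practice is estimating the weights, not the arithmetic:)

```python
# QALYs = years of life gained, weighted by quality of life
# (0.0 = dead, 1.0 = full health). Weights here are invented.

def qalys(years_gained, quality_weight):
    return years_gained * quality_weight

# "Lives saved" says intervention A wins 1000 to 12.
a = 1000 * qalys(10, 0.05)  # 1000 people kept alive 10 years at weight 0.05
b = 12 * qalys(40, 1.0)     # 12 people restored to 40 years of full health

print(a, b)  # 500.0 vs 480.0 - on QALYs the two are nearly tied
```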

Your post puts my hesitation about supporting EA into better words than I possibly can write. I've always found it... cheating, kinda?... that the entire premise of EA seems to just be brute-forcing morality and ethics by shoving as many zeroes into a number as possible. And that's how we get what you have described here, where it's easy to say that you've saved 200,000 or however many lives, but then people don't interrogate that result beyond that. People just see the number and go "wow that's a lot of zeroes so it must be good".

I have the exact opposite intuition about "cheating." To me, regular charities seem like cheating, since all most people do is give money, get warm and fuzzy feelings, decide that this means they did good, and then pointedly refuse to actually verify how much good they did. It's essentially stolen valor - getting all the personal benefits of appearing altruistic without doing any of the difficult work of helping anyone out that is normally implied by words like "charity" and "altruism." Which looks a lot like cheating. EA at least seems to make a gesture at having some referee system in place to detect cheaters.

The thing is, as we see all the time in international sports like FIFA or the Olympics, the best way to cheat is to corrupt the entire system of judgment in your favor, so one could probably make a good argument that EA is performing this "meta-cheating" by claiming to set up objective standards for effectiveness while actually setting up corrupted standards that lead to [charity I like] being [effective]. The tough thing for EA is that every single person involved in the movement, down to the individual, could be perfectly transparent and honest, with perfectly good intentions, and overall EA could still be engaging in this "meta-cheating" due to the biases that all people are susceptible to; they therefore have some responsibility to set up the structure in such a way as to counter and negate those biases. I think they may be failing to do this properly.

But in terms of their attempt at brute-forcing morality, this seems to me the correct way to counter the massive "cheating" that's happening in basically all realms of altruism in our lives, even down to individual relationships, where most people don't bother checking and just get the beneficial warm and fuzzy feelings through "cheating" without doing any of the work that is supposed to make someone actually deserving of those warm and fuzzy feelings.

I fail to see how supplying basic needs is worse than leaving someone to die.

Is that not the definition of completely ignoring the human?

I fail to see how supplying basic needs is worse than leaving someone to die.

Because it means that a decade down the line, some Saudi policeman is going to be tempted to drink because his job involves shooting at 'wretched refuse' to prevent it from crossing the border and milling around in Saudi Arabia and asking for handouts. Don't even ask what Europe is going to end up like.

It’s all part of Big Alcohol’s plan to break into the Muslim market.

People are social; people interact. Helping a person live might increase the happiness of those around them or end up causing the suffering of those around them; they will probably do both simultaneously in varying amounts. The problem is that you are trying to apply a value to an unknown quantity. Sometimes it feels like progressives, and by extension EA, are so universalist in their beliefs that they can't even imagine a person having negative values: all people are inherently good, but they are influenced by evil outside forces. Trump voters are misled by Trump, religious people are misled by religion, etc. People are capable of everything within the human experience, from great altruism to great malice. Just saving a life without taking into account what you are saving is ignoring the human; it's ignoring what makes people human, i.e. the content of their character, beliefs, and culture. In a way I think it's even rejecting your own humanity, as participating in the war of culture, having groups you favor over others, is part of being human. EAs come off as trying to stand apart from all that, like zookeepers looking after the health of foolish animals.

Are you suggesting that empathy or humanity requires assessing whether the recipient is…worthy? Part of those groups you favor? Because that doesn’t square with any reasonable definition of empathy. Not Christian charity, not secular humanism, whatever.

Another way to put it. Let’s say your neighbor falls ill and you have the money to save him. He’s someone you know well, so you have a good sense of his beliefs and his relationships, even if they don’t always agree with yours. Is it more or less altruistic to save his life?

It requires assessing something. It's up to you whether or not you support people you find worthy, but to empathize there has to be something there to empathize with; otherwise you are just creating something fictional.

Saving the neighbor is traditional altruism. You know them, you've interacted with them, so you can empathize with them.

Saving x from y shouldn't count as altruism: you don't know anything about them, and you can't empathize with them without projecting - and not just some minimally necessary projection; you're basically inventing them whole cloth.

As to whether it's more or less altruistic, it seems it would be more altruistic to save the neighbor who shared none of your values than to save a neighbor who shared your values and thereby helped to further your/your group's interests. This seems nonsensical to me, though, and basically just pointless virtue signalling.

edit: Another poster argued basically exactly this: that the definition I'm using reduces to virtue maximizing, and that actual EA would donate to people who shared their cause (the neighbor they liked) because it is about maximizing positive outcomes. I do feel like that stretches the definition of altruism, though. Say some extreme narcissist who took an IQ test as a kid and got into Mensa or something felt they were likely to be more capable than anyone else, and therefore had the potential to benefit the world more than anyone else. Would they create an organization that aimed to funnel all resources to themselves and call it effective altruism? Maybe EA people believe this? It seems like the economy and wealth agree with this. Are there EA groups funneling all their resources into AI to create god? Idk, maybe!

Imagine a country where the people have overexploited their environment, and there are now more people living there than the country's ecological base can support. Once they see all the starving children, famous westerners come in and throw big rock concerts to raise funds to ease their pain, and they succeed - the food gets distributed and the people stop starving. They then have children, and the population becomes even more unsustainable, which creates an even more dramatic famine 10 years later.

Is feeding those starving people the correct thing to do when enabling their population to increase more is just going to make the problem worse in the future? You have a choice between a bunch of people starving to death, or twice that many people starving to death a generation from now. Which choice is more humane?
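(For concreteness, a toy model of the dynamic this scenario asserts, with invented parameters - a sketch of the argument as stated, not real demography:)

```python
# Toy overshoot model: aid covers the food shortfall, the population
# keeps compounding past local carrying capacity, and when the aid
# stops the famine is larger than the one originally averted.
# All numbers are invented for illustration.

GROWTH = 1.03    # annual population growth while everyone is fed
CAPACITY = 100   # people the local environment can support

def starved_without_aid(pop):
    # Famine hits immediately: everyone above capacity starves.
    return max(0, pop - CAPACITY)

def starved_after_aid(pop, years_of_aid):
    # Aid covers the gap while the population keeps growing;
    # then it ends, and the now-larger gap starves.
    pop *= GROWTH ** years_of_aid
    return max(0, pop - CAPACITY)

print(starved_without_aid(120))           # 20 starve now
print(round(starved_after_aid(120, 10)))  # ~61 starve ten years later
```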

This is a legitimate argument, though my personal beliefs lean closer to @self_made_human’s.

It’s also completely absent from the OP and from his response. He is very clear that acting on an “unknown quantity” is despicable not because one might cause more total suffering, but because it’s “discounting their humanity.” Somehow, providing basic needs for people you don’t know implies less empathy than deciding they don’t deserve your attention, or worse, that they have “values that are negative.” This is fucking incoherent.

I will consider that thought experiment to be isomorphic to this one:

Imagine you're a doctor who has a patient about to die young before having kids.

Why save them? After all, they're going to die anyway, and more importantly, they'll have kids, who are also going to die, young or old.

So it's a choice between having one person die now, versus two people die in the future.

What I can only hope is obvious is that most people value life, especially a quality life, and consider it worth extending, even if the terminal prognosis for everyone is fatal, and even if they're only going to reproduce and have more people with bounded lifespans. Let's leave aside that I expect lifespans to become unbounded shortly; it's not relevant when we haven't solved the Heat Death.

Presumably, by revealed preferences, these people you discuss consider their lives worth living, and the reason they're about to die is that they have no choice in the matter. Further, so too will their offspring.

More importantly, it buys time for more durable solutions.

Since this line of thinking would have consigned all previously starving populations in history to a shared grave with Malthus, I'm not paying it any heed.

I will consider that thought experiment to be isomorphic to this one:

Imagine you're a doctor who has a patient about to die young before having kids.

Why save them? After all, they're going to die anyway, and more importantly, they'll have kids, who are also going to die, young or old.

HOLD IT! You've committed a rhetorical sleight of hand here - it isn't the fact that people die at all that's the problem. We're talking about starving to death, which is a humiliating, painful and degrading way to die. "Death" and "Death by starvation" are different things and not really equivalent. But that's just a minor problem - you have fundamentally misunderstood the nature of the argument being made here, because your isomorphism is false.

Have you ever heard of the tragedy of the commons? The salient quality of this "thought experiment" (if you're paying attention, this isn't a hypothetical but real-world history) is that the ability of the environment to support life is part of the equation. You have a choice between supercharging a given population, taking them beyond the carrying capacity of their environment, or letting some portion of the population starve/die. When you pick the first option, the commons gets destroyed, and the ecosystems that could support a larger community get damaged (the natural equivalent of the seed corn being eaten). When you actually take the specifics of the scenario into account, you're advocating for the destruction of the environment and mass starvation, as opposed to letting a population return to a level that's sustainable in the long term. I don't think that's actually a position you'd support - though I may be wrong.

I'm well aware of the tragedy of the commons, and of Malthusian population limits.

Neither applies here.

For one, we're not Malthusian, given that there is food to feed them with. If every locale was restricted to having to feed itself, goodbye Singapore I guess?

Secondly, the behavior they're engaging in, namely having more kids or mouths than they can feed, such that they end up being naturally culled, is one that just about every population in history has been guilty of.

When I think "population sustainable in the long term", I'm contemplating Dyson Swarms and the Heat Death of the Universe. It has little relevance to the denizens of Sub-Saharan Africa, no matter how dysfunctional it might be right now.

And I don't even like them; I happen to think that the problems they suffer from can be fixed, be it immediate caloric concerns or the poor quality of human capital, whether by genetic engineering or otherwise. Hence why I'd rather they'd not starve to death, at least not when it's random philanthropic movements footing the bill for feeding them.

it isn't the fact that people die at all that's the problem. We're talking about starving to death, which is a humiliating, painful and degrading way to die

What exactly do you envision when I propose a doctor "who has a patient about to die young before having kids"?

Do you think the people who die at that age are choosing a particularly dignified way to go? Severe appendicitis? A road traffic accident? Bullet to the gut?

I'll tell you that I'd certainly find shitting my guts out in front of a hundred strangers to be "humiliating" if nothing else.

So I think my isomorphism works just fine, since we're talking about a cause of death that can be relatively cheaply mitigated, ensuring a longer life and time to churn out the next generation.

Neither applies here.

Yes, they very explicitly do! I'm the person who came up with this "hypothetical" and I can very flatly state that it is not taking place in a science-fiction universe with AGI and dyson spheres. Instead, it takes place in the real world - human beings need to eat, and that food has to come from somewhere. You don't get to have infinite growth on a finite planet. Overfarming can damage environments, and if you overfish a lake to the point that the fish can't recover, you've permanently reduced the population your environment can support.

For one, we're not Malthusian, given that there is food to feed them with. If every locale was restricted to having to feed itself, goodbye Singapore I guess?

The world has a certain amount of overhead - but not an infinite amount. You're proposing that we spend the entirety of the world's excess on feeding more and more Ethiopians, without any care for the consequences of doing so. Why is it worth making sure that Ethiopia has more Ethiopians than it can comfortably support? Remember that we're pushing their population above the carrying capacity of their local environment - they are going to become a permanent drain on global resources and food, and the problem is going to immediately become much worse (and the total number of Ethiopians lower) the moment that access gets cut off. What happens when there's a crop failure somewhere else in the world, or a different plague/famine/war that leaves other nations reliant on charity as well?

When I think "population sustainable in the long term", I'm contemplating Dyson Swarms and the Heat Death of the Universe. It has little relevance to the denizens of Sub-Saharan Africa, no matter how dysfunctional it might be right now.

Do you walk into conversations about cars and talk about how discussing fuel economy is useless because we're going to have spaceships soon? If you want to talk about cool sci-fi novels, that's great! I mean, I like talking about them too - but a conversation about the hard ecological limits to human existence in the present isn't the place.

Hence why I'd rather they'd not starve to death, at least not when it's random philanthropic movements footing the bill for feeding them.

Either they starve to death now, or you have an even larger famine in the future with even more people starving to death, and causing further damage to the environment to boot. "We feed them now, and then when this problem returns in the future I'll just plug in my Mr Fusion and 3d print an infinite supply of burgers for all the starving people" isn't an option that's on the table! You're advocating for more suffering and a lower total population over time due to ecological destruction.

What exactly do you envision when I propose a doctor "who has a patient about to die young before having kids"?

The last time something like this happened in my social circle, it was cancer. If I was going to die at a young age, I would greatly prefer the last moments that they went through as opposed to starving to death with the rest of my family in Africa as I watch them eat the seed corn that could have helped a smaller family survive and thrive.

Yes, they very explicitly do! I'm the person who came up with this "hypothetical" and I can very flatly state that it is not taking place in a science-fiction universe with AGI and dyson spheres

Well, you can see that I don't find myself beholden to your strict interpretation of the hypothetical.

And leaving aside futuristic things like Dyson Spheres, we're thankfully living in the !science fiction setting where we had the Green Revolution and have industrialized agriculture. There is no shortage of cheap calories on a global level.

You don't get to have infinite growth on a finite planet. Overfarming can damage environments, and if you overfish a lake to the point that the fish can't recover, you've permanently reduced the population your environment can support.

Sure. All well and good. But there are plenty of billions on the table yet, even trillions or quadrillions just on Earth if it was truly optimized as an ecumenopolis. Fuck the environment as far as I'm concerned. If it has to give so we can have more humans around, all the worse for it.

Once again, I stress that humans have existed in Malthusian conditions for most of history, and only recently broken out of it, even if that is "temporary" compared to the limits of exponential population growth. I see no reason to think that hypothetical carrying capacity will be breached before it keeps getting raised, as has been the case for about a century or so, and for a good while to go.

Does maybe a few hundred million extra Africans in a century change much? A billion or two? Not really, and I don't expect the conditions that keep them from being self-sufficient in the manner of most other nations to cease before they balloon and outnumber all of us.

You're proposing that we spend the entirety of the world's excess on feeding more and more Ethiopians, without any care for the consequences of doing so.

What on Earth gave you that impression?

I never advocated for the largesse of the globe heading to them. At most, to the extent that Effective Utilitarians are choosing to help feed them, I don't object to them using their funds in that manner.

What happens when there's a crop failure somewhere else in the world, or a different plague/famine/war that leaves other nations reliant on charity as well?

They get shafted, and I don't care. At that point the EAs may well decide that they're not the cheapest population to prioritize, and everyone else gets a handout. Or more likely, the EAs don't have money to spare at all.

Do you walk into conversations about cars and talk about how discussing fuel economy is useless because we're going to have spaceships soon? If you want to talk about cool sci-fi novels, that's great! I mean, I like talking about them too - but a conversation about the hard ecological limits to human existence in the present isn't the place.

Fuel economy concerns matter a great deal less when we can reasonably expect energy to get much cheaper. I support fuel economy to the extent that it pays for itself, pricing in externalities.

As for the "ecological limits", they're likely in the tens of billions with minimal change to the condition of the average human and not much in the way of major change in terms of agricultural technologies. Given that I think those are inevitable, trillions or billions.

We can worry about it when we get there, or improvements stall before population growth does.

"We feed them now, and then when this problem returns in the future I'll just plug in my Mr Fusion and 3d print an infinite supply of burgers for all the starving people" isn't an option that's on the table! You're advocating for more suffering and a lower total population over time due to ecological destruction.

Why not? I invite you to show me we're near nominal capacity with even current agriculture. We are clearly not optimizing for calories over all else, as we would if we had reason to.

The last time something like this happened in my social circle, it was cancer. If I was going to die at a young age, I would greatly prefer the last moments that they went through as opposed to starving to death with the rest of my family in Africa as I watch them eat the seed corn that could have helped a smaller family survive and thrive.

I have seen a great many more youthful deaths from cancer than you have. As would be expected, I work in an Oncology ward.

Let me tell you that the modal passage is not something I'd call dignified.

In other words, you're arguing with a position I don't hold, and I think you utterly underestimate the nominal carrying capacity of this globe without even going into non-existent technologies.


One thing that's always bugged me about progressivism, and especially EA, is that despite all their claims of being empathetic and humanistic, they completely ignore the human. They are, ironically, the paperclip maximizers of philanthropy.

Once again, for those who might just be joining us: Utilitarianism is an inhuman (and dare I say it, Evil) ideology that is fundamentally incompatible with human flourishing. Utilitarians deciding to ignore the human cost of a policy to maximize some abstract value, be it "utility" or "paperclips", is not ironic, unfortunate, or unintentional. It is by design.

"Effective altruism" has never been about altruism.

I will admit I consider myself a 'skeptical utilitarian' (I made this term up, or, if I didn't, I am unfamiliar with the other usage) in that I have utilitarian leanings in terms of how to reason about morality, but reject unpalatable extreme extrapolations thereof on 'eulering' and 'epistemic learned helplessness' grounds. Still, I have always found casual swipes at utilitarianism of the form 'see, it actually leads to bad things' to be weak. Clearly the goal is to lead to good things, broadly, and if it seems to lead to a bad thing then that probably means you should try again and fully consider the externalities, etc. I don't see a good reason why 'utility' can't be a proxy measure for human flourishing, and I would personally prefer a form of utilitarianism organized in just such a way.

Clearly the goal is to lead to good things, broadly, and if it seems to lead to a bad thing then that probably means you should try again and fully consider the externalities, etc.

I can declare that the "goal" of a live grenade is to be delicious candy for children, but that won't make it so. The argument against Utilitarianism is 1) that it can't actually do what it aims to do, because "utilitarian calculus" is about as valid as "turnip plus potato equals squash", and 2) when it inevitably fails, it tends to fail very, very badly.

"Fully considering the externalities" is straightforwardly impossible, the output it generates is unfalsifiable, and it is tailor-made to justify one's own biases.

I don't see a good reason why 'utility' can't be a proxy measure for human flourishing

Because "utility" can't be rigorously measured, quantified, or verified in any way, even theoretically, and the whole system is built on the premise that it can be.

I should have known better than to comment on this topic here; I am not very rigorous or deep in my metaphysical beliefs.

Let me try and clarify my internal view, and if you have the time, you can explain what I am doing wrong.

So, I view my own morality and the morality of my society through a largely consequentialist lens, understanding that my ability to fully understand consequences decays rapidly with time and is never perfect. I view morality as a changing thing that adapts and morphs with new technology, both social and physical. I find the 'concept' of 'utilitarianism' a useful jumping-off point for thinking about morality. Obviously this interacts with my own biases; I am not really sure what it would even mean for a person to think about something and not have that problem, honestly. I do not view 'utilitarianism' as a solved, or even solvable, problem, but rather as a never-ending corrective process.

For example, I am not currently vegan or vegetarian, but I also do not like animal suffering, and I think a lot about this disconnect. Ideally I would like a world that allows me to enjoy all the perks of animal husbandry while reducing as much animal suffering as possible. I think the effort to reduce the amount of suffering in factory farming reflects a 'utilitarian' effort, but that does not mean I would agree with any particular reality those intuitions suggest. If, for example, reducing animal suffering made it impossible for a lot of people to afford meat or eggs, then that also seems bad, and is another part of the problem to keep working on and striving for solutions to.

My biases manifest in a number of ways. For example, I lean towards observational data in terms of what a better or worse world would look like, so if a particular religion espoused the idea that animals enjoy animal husbandry, and/or that they can only go to heaven if eaten by humans, I would not factor that into my considerations. I also tend to think suffering is bad and happiness and fulfilment/satisfaction are good, etc.

I guess I view 'morality' as a system or framework that I use to try and evaluate my own actions and the actions of others. I am reliant on the persuasiveness of my arguments in favor of my preferred outcomes to drive other people (and sometimes myself) to respect or adopt a 'morality' similar to my ideals.

Well said.

For what it's worth, I largely agree. To be more blunt than you, I'm both a moral relativist and a moral chauvinist. I make no claims that my sense of morality is objective, and go so far as to say that there's no such thing, and not a single good reason to imagine there can be: morality cannot be disentangled from the structure of an individual observer and forced to fit all others. The closest you can get is the evolutionarily/game-theoretically conserved core, such as a preference for fairness and so on, which can be seen in just about any animal smart enough to hold those concepts. That's still not "objective". None of that stops me from thinking that mine is superior and ought to be promulgated. It's sometimes tempting to claim otherwise, but I largely refrain from doing so. I don't deny the right of others to make such a claim about theirs, to the extent that I approve of free speech.

Of course, I personally find that I can decompose my value judgements and derive simpler underlying laws/heuristics that explain them, ones which often extend to new and complicated situations. I'm lucky enough to have yet to find a judgement I can't resolve in that manner, and I can tell that I have principles rather than a lookup table, because applying them often involves grudgingly accepting things I dislike, since to do otherwise would conflict with more fundamental principles I prefer to hold over mere dislike. That's why I'm OK with people I despise speaking, after all, leaving aside that I have no way to stop them.

As for animal welfare, I simply do not care. It's a fundamental values difference. I don't get anything out of torturing or killing subhuman animals, but I also have nothing against those who do, beyond the extent to which cultural pressures imply that those who shirk them have other things wrong with them, like psychopathy. As discussed in an older comment, at one point in time most people enjoyed watching dog fights or throwing rocks at cats; there was little or nothing in the act itself that was inherently psychopathic in terms of harming others.

To illustrate, imagine a society that declares shaving one's head to be a clear sign of Nazi affiliation. There are plenty of normal people who have some level of desire to do so, be it for stylistic preferences or because they're balding. But since such an urge is overpowered by a desire not to be mistakenly labeled as a Nazi, they refrain, while actual Nazis don't.

Congratulations, you managed to establish that shaving one's head is 99% sensitive and specific for National Socialist tendencies.

You can see this kind of social dynamic and purity spiraling all over the place, and I think animal welfare is one of them; so is not calling people fags or retarded.

I do not value the elimination of factory farming for its own sake, or for the sake of the animals, but I will happily accept something like vegetarian meat or, better yet, lab-grown meat over it - if, and only if, it's superior to factory-farmed or slaughtered meat in terms of taste or price, ideally both. That's what it means to be truly neutral between them.

Only if you assume "utility" is decoupled from human flourishing. Which it shouldn't be.

"Effective altruism" has never been about altruism.

Oh? What's it about, then? Bonus points if your criticism applies specifically to EA, and not just to any action that might vaguely leave room for self-interest.

Only if you assume "utility" is decoupled from human flourishing. Which it shouldn't be.

And yet, it very obviously is.

Oh? What's it about, then?

Grift. Silicon Valley sociopaths trying to rebrand slacktivism (that is, "earning to give" and "raising awareness") as a public good rather than what it actually is. Funneling funds into their own pet projects. *cough* AI Research *cough*

As I've discussed before, back in the late 2013/early 2014 timeframe I approached a few of the more prominent EA types and offered my services. I had contacts in the DoD, MSF, and multiple East African governments. I could have actually helped with the nitty-gritty of getting bed-nets and water-filters distributed to people. The response I got was that they weren't really interested in logistics so much as they were interested in raising money.

This might not be a failing unique to Effective altruism, but I do think it's enough to condemn them.

The whole vegan menu fiasco later that year only confirmed it.

The argument is that despite some of the questionable things EA has been caught up in lately, they've saved 200 thousand lives! But did they save good lives? What have they saved, really? More mouths to feed?

Yep. Some of those "mouths to feed" might end up becoming doctors and lawyers, but that's not why we saved them, and they would still be worth saving even if they all ended up living ordinary lives as farmers and fishermen and similar.

If you don't think that the lives of ordinary people are worth anything, that needless suffering and death are fine as long as they don't affect you and yours, and that you would not expect any help if the positions were flipped since they would have no moral obligation to help you... well, that's your prerogative. You can have your local community with close internal ties, and that's fine.

More cynically, I think this sort of caring is just a way to whitewash your past wrongs. It's PR-maximizing: spend x dollars and get the biggest number you can put next to your shady Bay Area tech movement, which is increasingly under society's microscope given the immense power things like social networks and AI give your group.

I don't think effective altruism is particularly effective PR. Effective PR techniques are pretty well known, and they don't particularly look like "spend your PR budget on a few particular cause areas that aren't even agreed upon to be important and don't substantially help anyone with power or influence".

The funny thing is that PR maximizing would probably make effective altruism more effective than it currently is, but people in the EA community (myself included) are put off by things that look like advertising and don't actually do it.

Yep. Some of those "mouths to feed" might end up becoming doctors and lawyers, but that's not why we saved them, and they would still be worth saving even if they all ended up living ordinary lives as farmers and fishermen and similar.

If you don't think that the lives of ordinary people are worth anything, that needless suffering and death are fine as long as they don't affect you and yours, and that you would not expect any help if the positions were flipped since they would have no moral obligation to help you... well, that's your prerogative. You can have your local community with close internal ties, and that's fine.

and some of them will become rapists and murderers. Maybe they already are. Have you stopped to check? Are they worth saving as well, despite the harm they have done / will do?

Of course I wouldn't expect a stranger to help me; I'm arguing that it's not possible, after all. In retrospect, even people who do know and care about me have had some pretty spectacular failures on that front, though I don't blame them, as long as they forgive me my own.

Death is necessary. We live in a world with physical limits; without death, the resources eventually run out. Most of life, from the realm of the microscopic to the complex workings of human society, is just the process of determining what is worthy of those limited resources. When the determination is subjective we call it morality or justice; when it's objective we call it nature.

It seems trivial to me that human lives aren't worth saving per se. It's the content of those lives that matters, and if you don't know the content then you can't prove that you've done anything of value, let alone something "effective." I mean, if you had the choice between saving 1,000 people in a persistent vegetative state or a dozen people you know to be good and functioning, you'd choose the functioning people, right? It's not the lives that matter, it's the person, the content. If you could have more people living by putting everyone in a low-energy state in some kind of feeding pod, where they undergo minimal activity to reduce calorie expenditure and receive just enough calories to keep them alive, is that good because more people are living? It seems cartoonishly evil.

And those are just overly simple demonstrations; in reality the world is more complex than that. Value is a human thing, and though nature occasionally forces our hand, the more advanced we get the more leeway we have to be subjective. There really isn't even a way to maximize value, because people have different values and therefore competing interests.

That's the problem I have with EA. The whole, "we're saving more people than anyone" thing. Stopping needless suffering. Why is their suffering needless? Suffering can be important, it teaches us things. It leads to improvement. When you are saving them what are you saving? Do you know any of them? It's so surface level and such a philosophically empty paperclip maximizing type ethos.

I do agree that it hasn't been very effective PR for the tech bros so far. I think it worked better for progressives (though people are growing resistant to it), and EA seems to be a Silicon Valley version that has made the whole process too efficient and made its contradictions too apparent. It feels too inhuman for most.

> And some of them will become rapists and murderers. Maybe they already are. Have you stopped to check? Are they worth saving as well, despite the harm they have done or will do?

This is a retarded standard that nobody who has to work with more than a handful of people at a time holds. Do you think doctors look up new arrivals to the ER to ascertain whether they're accidentally treating murderers and rapists?

It's the net impact that matters, and unless you're exclusively attempting to save the denizens of a prison, or maybe Hamas, you will find almost no population where they predominate, such that by saving the entire lot you've done something worse.

> I mean, if you had the choice between saving 1000 lives of people in a persistent vegetative state or a dozen lives of people you know to be good and functioning people, you'd choose the functioning people, right? It's not the lives that matter, it's the person, the content. If you could have more people living by putting everyone in a low-energy state in some kind of feeding pod, where they undergo minimal activity to reduce calorie expenditure and just enough calories are provided to keep them alive, is that good because more people are living? It seems cartoonishly evil.

Great. An accusation that EAs, of all the people in the world, don't know the concepts of disability-adjusted life years (DALYs) and quality-adjusted life years (QALYs).
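
For anyone who hasn't run into those terms, here's a minimal sketch of the QALY arithmetic applied to the vegetative-state hypothetical above. Every weight and lifespan here is invented purely for illustration:

```python
# A minimal sketch of the QALY framework (all weights and lifespans invented).
# One QALY = one year of life weighted by a quality factor from 0 (dead) to 1 (full health).

def qalys(people: int, years: float, quality_weight: float) -> float:
    """Total quality-adjusted life years across a group."""
    return people * years * quality_weight

vegetative = qalys(people=1000, years=10, quality_weight=0.02)  # persistent vegetative state
healthy    = qalys(people=12,   years=40, quality_weight=0.95)  # good, functioning lives

print(f"1000 vegetative patients: {vegetative:.0f} QALYs")  # 200
print(f"12 healthy people:        {healthy:.0f} QALYs")     # 456

# With any plausible weights, the dozen functioning people come out ahead.
# The framework already encodes "it's the content of the lives that matters";
# it just forces you to state the weights instead of treating every "life
# saved" as interchangeable.
```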

> That's the problem I have with EA. The whole, "we're saving more people than anyone" thing. Stopping needless suffering. Why is their suffering needless? Suffering can be important, it teaches us things. It leads to improvement. When you are saving them what are you saving? Do you know any of them? It's so surface level and such a philosophically empty paperclip maximizing type ethos.

I will go with the "good things are good, and bad things are bad, actually" over this galaxy-brained advocacy for letting people starve to death or die of malaria.

I'm sure those are all laudable, character-building exercises.

I'm not an EA; I just think that, of all the people I strongly disagree with, they're doing what they believe to be the right thing with the right amount of rigor, as opposed to nothing but vibes.

> I will go with the "good things are good, and bad things are bad, actually" over this galaxy-brained advocacy for letting people starve to death or die of malaria.

Which is a lazy, dismissive assumption. You have faith that lives are good, or that they are in aggregate good and therefore maximizing them is positive; you don't know that. As far as I can tell you can't know that.

I'm not arguing against helping people, just that helping people you actually know is better, especially en masse (what if everyone logged off social media and did that?), than industrial philanthropy or w/e.

You are welcome to demonstrate your conviction that lives are terrible and worth terminating on average, as they must be if the aggregate is, but I suspect you can't, for the same odd reason most antinatalists or misanthropes don't start with themselves.

> You have faith that lives are good, or that they are in aggregate good and therefore maximizing them is positive; you don't know that. As far as I can tell you can't know that.

Faith? Why? I can clearly see that most people have lives worth living and extending, at least if it comes to the expenditure of funds I can't repurpose for things I personally care about more. To the extent that governments and charities spend their money on that, I'd prefer they save as many lives as cheaply and effectively as possible, and EAs do that. Would be even better if they handed all the cash to me, but since there's no advocacy group for the same, I'll take it.

Go ahead and help whoever you like, if you care to. By the same process where you don't care about most people, I don't particularly care about you and yours, and thus EA beats you in terms of net people I minimally consider worth existing saved. Sure, sucks that a large number of them are Sub-Saharan Africans with low IQs I suppose, but that's hardly all of them, there is a non-zero tradeoff for the same with Westerners or any other kind of human really.

Ah yes, "KYS." Nice to see the Motte's standard of petty insults in as many words as possible is still around.

I mean it's more that it's quite obvious that "kys" is bad advice for you, so maybe you should examine the reasons why it's bad advice for you and see whether they're also true of a random farmer's kid in Mali.

> And some of them will become rapists and murderers. Maybe they already are. Have you stopped to check? Are they worth saving as well, despite the harm they have done or will do?

Yes. Is this supposed to be a trick question? "Some people in a group might become rapists, or might even be rapists, and thus most of the people in that group should get malaria and maybe die of it" is the sort of position a children's cartoon villain would hold. If that's your sincere considered position based on the things you have seen online, I suggest touching grass.

> I think it worked better for progressives

Most EAs are sympathetic to progressives, but most progressives are vehemently opposed to EA ideas like "you can put a dollar value on life" and "first world injustice doesn't matter much compared to [third world disease / global extinction risk / animal suffering, depending on exactly which EA you ask]".

> It feels too inhuman for most.

I am aware of that. I think most EAs are aware of that. The question is: is the marginal discomfort of a few people feeling more inhuman than they otherwise would worse than a few kids in Mali dying of malaria when they could have lived?

Still fits with my theory. EAs like the progressive model but are a bit robotic and misunderstand it. Progressives recognize that EA is pulling a lot from their PR scheme but doing it poorly and spoiling the effect.

> I am aware of that. I think most EAs are aware of that. The question is: is the marginal discomfort of a few people feeling more inhuman than they otherwise would worse than a few kids in Mali dying of malaria when they could have lived?

There's more of a trade-off than that, though. That money and effort could be spent elsewhere, making family or people you know happier. I mean, if they don't have anyone like that, they could at least look towards their local communities? From what I've seen of the Bay Area, it could use it.

Scott argued against cash bail (the Soros DAs' position), argued for war with Syria over chemical attacks (probably false flags), and argued that overthrowing the Libyan government was likely effective altruism. He's a great writer but has terrible instincts at the end of the day.

Huh? Are you saying the dreaded Soros DA argued against cash bail, or for it?

More importantly, why is "(Soros DAs position)" supposed to convince me that something is or isn't a terrible instinct? I agree with stupid, uninformed, or contrarian people all the time.

For what it's worth, I never found the Syria false flag argument convincing, either.

Haven't you noticed a crime wave happening because of soft-on-crime policies in places like California?

In Syria, the US was backing jihadist terrorists against a secular government. Probably the only reason there are any Syrian Christians left is that Assad won the civil war.

Libya was a complete disaster.

In conclusion, if you want global destabilisation and rampant crime, listen to Scott Alexander.

You didn’t answer any of my questions…

I actually haven’t personally noticed the crime wave, as I live in a more sane city. I believe that there is one, but I also remember that correlation isn’t causation.

The religious character of Syrians has nothing to do with whether or not the attacks were a false flag.

Of course law enforcement policy influences crime levels, it's the whole point.

My point was that the whole Syrian civil war was a huge debacle in general; I wouldn't be in favor of further US involvement even if the chemical attacks were by Assad's forces. They served no strategic purpose other than possibly justifying Western escalation. The question, as always, is: who benefits?

Only for unhealthy minds, I think? Whether freeing slaves "reduced" the position of non-slaves is a question without an objective answer - only psychological interpretations. For instance, many Indians never eat meat and would tell you they don't feel "reduced" by this.

I'm sorry, are you saying that everyone who isn't Indian is an "unhealthy mind," or are you saying that everyone who eats meat is? This entire bit is confused as hell - quality of life is often psychological, and having meat taken out of my diet for the benefit of animals sure feels like an objective reduction in my quality of life to improve theirs. Status, too, is objectively changed - if I am not allowed to eat animals, then by necessity this indicates an increase in the position of animals and a reduction in my status - from dominion over the beasts of the land to a sad sack of shit who gets less respect than a pig.

And I see no charitable justification for inserting that analogy to slaves, only a cheap appeal to emotions - it didn't improve the clarity of your point, if anything it obfuscated it, since you immediately went straight back to talking about animals.

I think there’s some confusion here over “reducing the position of…humans.”

@KnotGodel was arguing that vegetarianism isn’t inherently low-status, as evidenced by the hundreds of millions who choose it even when meat is available. Therefore advocating for it does not require reducing the status of meat-eaters.

You correctly observed that forcing meat-eaters to stop is obviously reducing their material position and status. Animal rights advocates might well be expected to implement such a reduction.

Frankly, I think the blame here lies with the guy who insisted animal rights was an “identity movement.” Putting it in the same category as woke politics, white nationalism, or civil rights feels like a poor choice.

I'm saying psychologically healthy people don't see status as zero-sum.

Are you suggesting that subsisting on an exclusive diet of pseudo-Marxist po-mo bullshit might not be psychologically healthy? ;-)

Any feeling, full stop, really. Any cognition at all, in fact. I'm actually only capable of engaging with reality using my brain; I didn't realise that made me psychologically unhealthy.

Actually, I think you need to define "psychologically healthy," because you don't seem to be describing it in my eyes. You also don't have to feel like you are losing status if I fuck your wife in front of you, or force you to blow me, but I would suggest that not doing so demonstrates a lack of self-respect (or a fetish, if they can be separated), not good psychological health.

Is that a dodge, or are you actually saying that you wouldn't feel like you lost status if I banged your wife in front of you? Because I wouldn't consider the status loss the biggest problem in either of those scenarios, but I would still consider it a problem.

I get the impression that you have a warped understanding of psychological strength. Status very often - if not always - is zero-sum. To be the most popular or most hated requires that someone else is not occupying that spot - if they are, you have to take it from them (otherwise you are not the most popular/hated). Being psychologically healthy is not ignoring attacks, or being apathetic to them, or writing your pain off as an artifact of your brain; it is (assuming fighting back isn't an option) enduring the suffering without being broken by it. That doesn't mean it doesn't affect you or hurt you. I don't know what the psychologically healthy way to respond to either of those scenarios would be, but I'm sure it's not a thumbs up or a yawn or intense rationalisation. Those strike me as closer to denial than anything else.

Status is a person's placement in a social hierarchy. Most people don't think in terms of status; they simply feel shame or embarrassment when it is taken from them, or pride and confidence when they take it. You don't need to think about becoming more popular by taking it from others - simply by being more popular you do so inevitably. Just because it isn't a conscious effort doesn't mean you don't care about status.

Re: banging your wife, we can add your peer group to the dynamic - do you think their opinion of you would change at all if I banged your wife in front of them? It might not affect their opinion of your competence, but I bet it affects their respect for you. But status is an element even between the three of us original parties - you, me, and your wife. If you walked in on that, what would you think my opinion of you was? Would it be different from before you entered the room? What about your wife - if you saw that, would you immediately assume she loved you as much as she did on your wedding day? If not, you do care about status.


I wonder if your wondering is done in good faith 🤔

What does that mean? That you don't think it's true? That you think it's true but it's inconvenient for someone to point it out? Please be specific.

Do you have evidence EAs suffer from "extreme self confidence"?

Have you heard of a guy called "Sam Bankman-Fried"? He was in the news a little bit lately.

It's not just "a single guy in finance"; it's a whole mess of people falling into the exact failure mode that critics of their approach predicted they would, and then trying to argue that said failure shouldn't be taken as evidence that their critics may have had a point.

The term as a whole is stupid because almost every single person who operates a charity or is a large scale philanthropist sincerely believes they are engaged in “effective altruism”. Whether it’s Hobby Lobby types giving it all to whatever Evangelical church they belong to or Alex Soros funding justice reform think tanks and progressive DA candidates, they’re all believers in ‘effective altruism’. If they believed their altruism was ineffective they wouldn’t do it, and if they believed their motivations weren’t altruistic they’d presumably keep the money for themselves and those they cared about, or simply pursue naked political lobbying (which I’m not saying the above don’t do, to be clear, but I genuinely think they also think they’re helping ‘the world’ along whatever course they believe is best).

> If they believed their altruism was ineffective they wouldn’t do it, and if they believed their motivations weren’t altruistic they’d presumably keep the money for themselves and those they cared about, or simply pursue naked political lobbying

I'm on net pretty neutral with respect to EA, but I don't think this line of criticism makes sense. To some extent, it's true that everyone who engages in charity does so out of a belief that they're effective and that they're altruistic. But believing that you are those things doesn't tell us anything about whether you/your charity actually are those things. And what I think EA at least makes gestures at doing (and they might do nothing more than those gestures, let's be clear) is checking if they really are effective (they do seem to have a big blind spot in checking if they really are altruistic - believing that you're altruistic is, at best, a neutral signal and most likely a negative signal of one's altruism, and I don't think I've seen EA engage with this).

I think there's a strong argument to be made that, in their attempts to check if their (self-perceived) altruism is effective, all they're doing is adding on more epicycles to come to the conclusion that [charity they like] also happens to be [the most effective]. I honestly don't know enough about the logistics of what EA does, but certainly that should be the default presumption, sans clear indication that they're doing the hard work needed to check all that, such as giving oppositional people full access to all the tools to make the strongest argument possible against whatever charities they like (or for charities they dislike). And the more popular/decentralized EA is/becomes, the more that EA people will follow this default pattern of convincing themselves that [charity I like] is [the most effective] because memes like this always get implemented in the laziest, most intellectually dishonest way when spread out among a wide/decentralized populace.

I would also say, given that we know this pattern about the populace, EA has, in some real sense, the responsibility to craft their memes such that, if they get out to the wider populace and actually become popular, the people who lazily implement these memes in dishonest ways don't fall into this extremely common trap of matching [thing I like] with [good] while building up a whole facade of pseudoscience/pseudomath in order to justify it. I'm not sure EA is very concerned with this at all, and I'll admit that the defensiveness I see from EA when they're criticized, both about their core mission and about their more superficial PR aspects, doesn't make me optimistic.

> The term as a whole is stupid because almost every single person who operates a charity or is a large scale philanthropist sincerely believes they are engaged in “effective altruism”.

I don't see how anyone can closely look at real-world charities and believe this. The charity world is full of organizations that transparently don't think about effectiveness at all. The Make-a-Wish foundation doesn't run the numbers and decide it's better to grant a wish for X dying first-world children than to save Y first-world children or Z third-world children from dying, they don't consider the question in the first place. Yes if you dilute "effectiveness" to "think they're doing good" they do think that, but they don't actually try to calculate effectiveness or even think about charity in those terms. And that's by many metrics one of the "good" charities! The bad ones are like the infamous Susan Komen Foundation or (to pick a minor charity I once researched) the anti-depression charity iFred. iFred spends the majority of donations on paying its own salaries and then spends the rest on "raising awareness of depression" by doing stuff like planting flowers and producing curriculum that nobody reads and that wouldn't do any good if they did. Before EA the best charity evaluation available was stuff like Charity Navigator that focuses on minimizing overhead instead of on effectiveness. That approach condemns iFred for spending too much money on overhead instead of flower-planting, but doesn't judge whether the flower-planting is effective, let alone considering questions like the relative effectiveness of malaria treatment vs. bednets vs. vaccines.
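
To make that distinction concrete, here's a toy sketch of why "low overhead" and "effective" are different axes. The charities and all the numbers below are invented for the example (the ~$5k-per-life figure is a GiveWell-style ballpark, not a quoted estimate):

```python
# Toy illustration: overhead ratio and cost-per-outcome are different axes.
# All names and numbers below are invented for the example.

charities = {
    # name: (overhead_fraction, cost_per_life_saved_usd or None if unmeasured)
    "FlowerPlanters": (0.05, None),   # 95% goes "to the cause," but the cause has no measured outcome
    "BednetsInc":     (0.20, 5_000),  # higher overhead, ~$5k per life saved (GiveWell-style ballpark)
}

for name, (overhead, cost_per_life) in charities.items():
    if cost_per_life is None:
        outcome = "cost per life saved: undefined (no measurable outcome)"
    else:
        outcome = f"cost per life saved: ${cost_per_life:,}"
    print(f"{name}: overhead {overhead:.0%}, {outcome}")

# An overhead-based evaluator ranks FlowerPlanters first; an outcome-based
# evaluator can't rank it at all - which is itself the damning verdict.
```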

Even within the realm of political activism like you're focusing on, such activism is often justified as trying to help people rather than just pursuing the narrow political goal as effectively as possible, opening up comparisons to entirely different causes. As EA discovered, spending money trying to keep criminals out of prison is less efficient at helping people than health aid to third-worlders even if you assume there is zero cost to having criminals running free and that being in prison is as bad as being dead. You can criticize the political bias that led them to spend money on such things, but at least they realized it was stupid and stopped. Meanwhile BLM is a massive well-funded movement despite the fact that only a couple dozen unarmed black people are shot by police per year (and those cases are mostly still stuff like the criminal fighting for the officer's gun or trying to run him over in a car). Most liberals and a significant fraction of conservatives think that number is in the thousands, presumably including most BLM activists. It would be a massive waste even if it hadn't also reduced proactive policing and caused thousands of additional murders and traffic fatalities per year. That sure sounds like a situation that could benefit from public discourse having more interest in running the numbers! Similarly, controversial causes like the NGOs trying to import as many refugees as possible aren't just based on false ideological assumptions, but are less effective on their own terms than just helping people in their own countries where it's cheaper. The state of both the charity and activist world is really bad, so there's a lot of low-hanging fruit for those that actually try and any comparison should involve looking at specifics rather than vaguely assuming people must be acting reasonably.
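
A rough sketch of that prison-vs-health-aid arithmetic, granting the deliberately generous assumptions mentioned above. The ~$5k-per-life figure is GiveWell's oft-cited ballpark for malaria interventions; the advocacy cost is a made-up round number:

```python
# Back-of-the-envelope version of the prison-reform vs. health-aid comparison.
# Illustrative numbers only.

COST_PER_LIFE_SAVED_NETS = 5_000       # USD, approximate GiveWell-style estimate
YEARS_PER_LIFE_SAVED = 50              # assume a saved child gains ~50 life-years

COST_PER_PRISON_YEAR_AVERTED = 25_000  # USD, hypothetical advocacy cost

# Grant the deliberately generous assumption that a year in prison
# is exactly as bad as a year of being dead:
nets_cost_per_year = COST_PER_LIFE_SAVED_NETS / YEARS_PER_LIFE_SAVED  # $100/year
prison_cost_per_year = COST_PER_PRISON_YEAR_AVERTED                   # $25,000/year

print(f"Malaria nets:  ~${nets_cost_per_year:,.0f} per life-year")
print(f"Prison reform: ~${prison_cost_per_year:,.0f} per 'life-year'")

# Even under assumptions maximally favorable to the reform side, the health
# intervention wins by a couple orders of magnitude.
```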

> The term as a whole is stupid because almost every single person who operates a charity or is a large scale philanthropist sincerely believes they are engaged in “effective altruism”

Not really; many have not even thought to consider effectiveness. Or they optimize for things like tax avoidance or PR (many charities are run by companies). Or for rent-seeking.

> If they believed their altruism was ineffective they wouldn’t do it

Is that realistic? There are social, financial, and knee-jerk reasons to be involved in charities. Many of these are pretty decoupled from the Effective Altruist™ sense of efficiency. You end up with organizations that send most of their money to administration and don’t bother to quantify their actual impact, because people will donate to them anyway.

It’s not like EA is immune to that sort of exploit. But caring (or pretending to care) about it is a pretty good distinguishing feature!

> most effective

By whose standards?

By theirs, because they are the ones doing it.

Yes, subjectivity exists in all human actions. No, it is not insightful to point that out.

Of course, but that's why it's such a ridiculous concept in the first place. Objective measures like overhead and the percentage given to a cause are good to track, but whether or not it's a "good" cause is completely subjective, so what they are really doing is just funneling money to causes they support, which is more or less what everyone else does.

Are you perhaps thinking of CharityNavigator (which tracks things like percentage of donations that actually go to the ostensible cause) instead of GiveWell (which tracks things like expected impact of the donation in terms of the metric the organization is supposed to be helping with)?

The latter - that impact is more important than intention or purity or self-sacrifice - is the place where EA distinguishes itself from normal charitable people. Normal people are pretty altruistic, but they're not necessarily strategic about it, because most people are not strategic about most of the things they do most of the time, and in particular are not strategic about things that don't significantly affect them and where they will probably never get feedback about whether their approach worked.

The most effective for their specific goal, which is some form of Peter Singer human-centric utilitarianism in which projected saved human lives (or projected bonus human life-years) are maximized. And likewise, every other charity is just optimizing effectiveness for a specific goal; some Christian charity dedicated to banning abortion is usually happy to switch methods to boost efficiency.

> The term as a whole is stupid because almost every single person who operates a charity or is a large scale philanthropist sincerely believes they are engaged in “effective altruism”

I honestly don't think this is true. A lot of people who start charities choose a cause that has impacted them personally with little thought to whether this is a cause where dollars go the furthest. EA means more than just not actively trying to waste your donation. It means giving rigorous thought to the tradeoffs involved.

The only rigor required is whatever bullshit statistical model EAs design to ‘prove’ that their approach technically saves 2.07% more lives than something else.

Consider that EAs spend a lot of money on AI doom research: how do they calculate that this is more effective at saving lives than malaria nets? I’m sure some LessWrong autist has done ‘the math’, but it essentially amounts to a sincere belief that the chance of Yudkowsky saving the human race by coming up with thought experiments outweighs the lives saved by putting the money into nets. There’s nothing empirical to that tradeoff; the Christians likewise believe they’re saving x lives from damnation, Soros might well believe he’s saving x lives from police brutality. What do EAs do differently?

> Consider that EAs spend a lot of money on AI doom research: how do they calculate that this is more effective at saving lives than malaria nets?

I think that realistically speaking, they don't. Mathematical arguments that attempt to make predictions about AI risk cannot avoid running into the "garbage in, garbage out" principle. The data is simply not there to make any good predictions on this topic. And there is no way to obtain the data without clairvoyance.

I do believe that effective altruists are probably, however, much more rational when they address more well-understood topics such as the characteristics of various diseases when compared to one another.

In general all non-trivial long-term utilitarian arguments about human society are nonsense. For example, you might save 10000 people from dying in a flood next year, but then one of them turns out to be the next Hitler and kills 10000000 people.
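
A toy sketch of that garbage-in, garbage-out problem: the expected-value answer is dominated entirely by an input nobody can measure. All numbers below are invented:

```python
# Sketch of why "expected lives saved" math for AI risk is garbage-in,
# garbage-out: the answer is dominated by an unmeasurable input.

FUTURE_LIVES_AT_STAKE = 8e9  # just today's population; longtermists use far larger numbers
DONATION_USD = 1_000_000

def ev_lives_saved(p_doom_reduced_per_dollar: float) -> float:
    """Expected lives saved if each dollar shaves p off the extinction probability."""
    return DONATION_USD * p_doom_reduced_per_dollar * FUTURE_LIVES_AT_STAKE

# Nobody has data to distinguish these guesses, yet they span six orders of magnitude:
for p in (1e-15, 1e-12, 1e-9):
    print(f"p = {p:.0e}: {ev_lives_saved(p):,.0f} expected lives per $1M")

# Output runs from ~8 lives to ~8,000,000 lives per $1M donated, i.e. anywhere
# from far worse than malaria nets (~200 lives per $1M at $5k/life) to far
# better, depending entirely on a number no one can measure.
```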

I couldn't agree more. Bay Area Rationalists remind me of this meme when it comes to naming things: https://img.ifunny.co/images/e5402c3312546aa012fd661f686af8fd58cb8158d29db92ac5f0f1e3617bfa12_1.webp

The whole name Rationalists rubs me the wrong way for a variety of reasons. It comes off to me like they think they have a monopoly on Rationality, and reading LessWrong hasn't changed my mind. Almost everyone thinks they are being rational.

I remember some LessWrongers, back in the day, who preferred “aspiring rationalists.” Unfortunately, “aspies” was taken.

Either way, I can’t agree that picking a name equates to claiming a monopoly. The Stoics didn’t think that only they could be stoic, but that they were going to make important decisions according to a certain set of principles.

Obviously everything we know about effective altruism - that it’s a moral crusade by California-based atheists - tells us it’s basically a left-wing project and probably doesn’t have the capability to keep its own left fringe in check. And everything we know about its ideology points to it wanting drastic changes. But I think you’re overstating the threat. A bunch of Silicon Valley nerds might be able to do some things in California or Oregon which turn out to be bad ideas. They will not wind up having a dictatorship to enforce their goals. Far-left revolutions require the proles to throw in with them, and proles on the left in the USA have their own political machines representing them, which are consistently moderate. In the event of a revolution-inviting legitimacy crisis the US might splinter, but it’s not going to have rationalistsheviks forming a government.

Effective altruism is full of utopians, radicals, and iconoclasts. One of its founding assumptions is that the traditional model of charity—such as that practiced by churches—is awfully inefficient. It is a movement which likes to look for twenty-dollar bills lying on the sidewalk. Also, it’s centered on California. Of course it’s full of liberals!

So why is this a bad thing?

Compared to the average charity, I expect one favored by EAs is probably more transparent and less likely to waste your donation on something you didn’t want. Look at GiveWell, which specifically avoids recommending efficient charities if it can’t be sure that they’re transparent. Donate to one of their top picks, and you can have high confidence that your money is actually going to preventing whatever horrible disease or deficiency it claims to target.

Or consider Scott’s kidney donation train. More people donated kidneys which wouldn’t otherwise have been donated. I don’t see how encouraging that can possibly be perceived as advancing liberal interests.

I guess I’m left asking: what does “substantial confrontation” look like to you? What should someone do differently, once armed with the knowledge that some liberals also like efficient charity?