This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes:
Disclaimer: I have only started reading MacAskill. So far he seems worse than reviews like this indicate, but predictable from them.
The utilitarianism-longtermism-EA cluster is filled with smart and conscientious people. It's unfairly maligned and strawmanned, attacked with hot takes that are in fact addressed in their texts; what happens here is worse – the case for partiality is not even made. Obviously longtermists, wokes and trads compete for resources in the present, so they have to do politics, and politics means pandering to the masses with emotional language, so their really quite different moral systems find not-so-different expressions. Duh. And nerd-shaming is just a more primitive political angle.
The lack of charity can be defended by arguing, as I do, that refined and defensible longtermist positions can be expected to collapse into grotesque totalitarianism under the double whammy of the minimize-risks and ends-justify-means drives. We know how this happens to utopian projects in practice. It's not enough to claim that you've noticed the skulls, either. Maybe you've noticed the wrong pile.
But there's a more direct critique. Simply put, it's that we are entitled to be represented in the future – personally, or by proxy of our values and heirs – and EA-utilitarian-longtermism does not serve this purpose well.
There are two main arguments for that.
First, conventional morality depends on some form of reciprocity. Yet vanilla longtermism does not imply acausal trade, a timeless Universe, or other weird Lesswrongian belief systems. The present and the future are not ontologically equal: we have power over the hypothetical them, and even if future people matter at all, saving a child drowning today is generally agreed to be more important than saving a child who might come to exist tomorrow (if you have to choose). The past and the future, as far as we know, do not exist: causality only happens in the moment, and absent our engineering of persistent causal chains, there will be no reason for inhabitants of the future to reciprocate our goodwill by, say, continuing to care about things we have sentimental attachment to (or even about our own frozen bodies waiting to be awakened – indeed, MacAskill is hostile to cryonics, on the grounds of preventing «value lock-in»). We, too, already display indifference and often contempt towards our ancestors. In all the history of European thought, only Chesterton spoke in passing of engaging with them as equals («tradition as the democracy of the dead»), and the founder of Russian Cosmism, Nikolai Fyodorov, alone called for their literal rescue. No matter what MacAskill owes the Future, we have little reason to expect that the Future will believe it owes anything to us. This moral issue is not negligible.
Second, there's the meme justifying the consumption of meat with the example of a Nazi chicken. Or, less opaquely: moralists often want only the deserving to get utility, and value utility received by the undeserving negatively.
Who is deserving? Maybe children, okay. Children are presumed to be without sin. Nonsense, of course – they can be terrifying bastards (as anyone who's seen animal abuse by shittier kids can attest) – but even granting this convention, children grow up into adults. And for a longtermist, there is no reason, rhetoric aside, to prioritize the childish phase over the longer adult one in a given future sentient. Suppose, MacAskill says, a child cuts himself (Levin in Scott's review deviously writes «herself») on the shards of a glass bottle you've dropped. What if that's a future evil dude, though? I'd feel way less bad about his suffering. Now, what if it's the father-to-be of a guy who'll switch off your grandson's cryocapsule upon reading the latest research showing that amoebas experience qualia more intensely than mid-21st-century bigots and thus deserve the limited joule budget? He can slip in the pool of his own blood and slit his throat on another shard for all I care. What if it's just a child who'll grow up to be some future superNazi, already taught to hate and ridicule everything you have ever stood for?
And in a way, this is exactly the type of child MacAskill envisions, because he believes in Whig history (like a certain Devil), where subsequent societies tend to be more moral than preceding ones to the point of complete disconnect.
For example, pagan Romans were monsters by his standards. Excepting maybe a few classicists, we must have a poor idea of ancient Roman intuitive day-to-day morality. We'd be abominations to them, and not because of not owning slaves or respecting women or some such, but for reasons incomprehensible to us, orthogonal to our concerns. Take the terrifying flatness of our spirituality, our communities lacking ancestral gods and multigenerational familial cults; our supposedly lofty ideals and universal religions could be, in their eyes, akin to eating bug slop after grandma's cookies. Have we truly only gained in ethical knowledge since then?
In any case, from an unbiased timeless perspective, I wouldn't be able to condemn the Romans for trying to «value lock-in» their domain. They did not owe us anything; they owed everything to each other, to their gods, and to the values of their polities.
A society that will consider us living people abominable may emerge. But I'd really like for a society that is trivially moral and aesthetically pleasing by my standards to exist as well. What's needed for that is not generic future people, but specific aligned people, carrying specific ideas (probably grounded in their innate biases) that allow for the preservation of such a society, existing as an uninterrupted line into eternity – maybe bending, evolving, but at every point able to largely determine the next one. And they need some capabilities too.
Total human extinction is a big deal for a big-tech-adjacent upper-middle-class longtermist in the Bay Area (who feels that only a deep purge of the Earth's crust would get to him specifically), but for me, the very likely extinction of my moral line is about as bad.
Horrible and self-centered as it sounds, this looks like a saner and also more mainstream moral position.
By the way, Locklin asserts, fairly or not:
Not sure how this compares to the AGI misalignment risk (that is, the risk that comes from the existence of an AGI not controlled and aligned by those SV types). Probably EAs do have to factor «are we the baddies, or enabling baddies?» into their moral calculus somewhere too. But not all baddies are visible to polite discourse.
I want to emphasise the bit about Fyodorov. MacAskill says: “Impartially considered, future people should count for no less, morally, than the present generation.” and “Future people count. There could be a lot of them. We can make their lives go better.” etc. Do they count more? Scott says this conclusion is inevitable going by the numbers. Compare to this excerpt («question on brotherhood», 1870-1880s):
I think this is deeper than EA. So, the future is now. Forget Fyodorov's naive dreams of reversing entropy and populating the stars with everyone who's ever lived – a century on, pretty much nobody gives a rat's ass about cryopreserving people at scale (like me, EY is very angry about it). MacAskill never makes the obvious symmetric point that past people count too and, again, would apparently rather have nonillions of future people die so that better ethics «evolve».
Really not cool of us.
Ooh man. I trust there's probably more nuance here than made it into my brain, but if one is entirely opposed to "value lock-in," how can one even call oneself an "effective altruist"? Not "altruist", because altruism itself may be discarded given sufficient moral evolution (unless you're locking in values), and not "effective", because without a goal (without real, lasting values) what is there to be effective about?
And even if one does believe that the future will necessarily be more moral than the present, I would rather the people of the past be convinced, like everybody else then alive, than be sentenced to be swept away by the onslaught of years. The great and good of today don't otherwise approve of winning moral arguments by putting their opponents to death, and that's an attitude I rather prefer.
Scott Locklin is always worth reading; add his blog to your links if it isn't already there.
https://scottlocklin.wordpress.com/
OFC, like everyone, he has major blind spots (in his case it is, amusingly, Russia – he goes full-on "big manly Russkies are REAL MEN who ride bears while Americans are gay, Russia is the future, what is left of real American men should move there ASAP").
The failure of cryonics to take off is not due to civilizational failure; it is the fault of cryonicists themselves.
These nerds have no idea about marketing and PR, and struggle to sell cryonics to anyone but a minuscule number of other nerds, failing to persuade even their own families.
https://web.archive.org/web/20090511124543/http://depressedmetabolism.com/is-that-what-love-is-the-hostile-wife-phenomenon-in-cryonics
Imagine if they had first targeted Hollywood celebrities and oligarch types, people with giant piles of cash and even bigger egos, people who do not doubt even for a moment that they deserve to live forever.
Imagine an alternate world where every new-money type shows up at fancy parties:
"Yes, this is my keychain. Here, the key to my mountain house in the Alps; here, the key to my seaside house on the Azure Coast; here, the key to my Lamborghini. And this metal tag? This means I get to live forever, while you will die and rot like dogs."
In this world, cryonics would be a major political issue.
"Why should only the millionaires live forever? Cryonics is a human right! Tax the rich to freeze everyone!"
"Shut up, commie! Why should hard-working taxpayers pay for the eternal life of lazy losers like you?"
Seeing as cryonics is taken to be extremely cringe, yet wealthy people do want to live forever despite the cringe (and are ruthlessly mocked for it in the OP's Unherd link – from the left, and from whichever other side – and in fact tend to pay lip service to the idea that death is good), I find your assessment lacking. There is some powerful ideological pressure against personal long-termism. Explaining it away with nerds being lame and inept is not good enough. EA is nerdy too, but they're already operating on a much bigger scale.