This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Don't forget Eliezer "Horrifically torture someone for 50 years to prevent many trivially inconvenient dust specks from getting in people's eyes" Yudkowsky
Which is either obviously the correct choice, or misunderstood to be a silly Modest Proposal showing how very naive utilitarianism can break down.
It's pretty annoying that 16 years ago Yudkowsky wrote a blog post that was deliberately unintuitive due to scope insensitivity (seemingly as some sort of test to spark discussion), and as a result there are people who to this day talk about it without considering the implications of the contrary view. In real life we embrace ratios that are unimaginably worse than 1 person's torture vs. "3↑↑↑3 in Knuth's up-arrow notation" dust specks. People should read OSHA's accident report list sometime. All human activity that isn't purely optimized to maximize safety - every building designed with aesthetics in mind, every spice to make our food a bit nicer, every time we put up Christmas decorations (sometimes getting up on ladders!) - is built at the cost of human suffering and death. If the ratio were 1 torturous work accident to 3↑↑↑3 slight beneficiaries, there would never have been a work accident in human history. Indeed, there are only 10^86 atoms in the known universe; even if each of those atoms were somehow transformed into another Earth with billions of residents, and that civilization lasted until the heat-death of the universe, the number of that civilization's members would be an unimaginably tiny fraction of 3↑↑↑3, and thus embracing a ratio of 1 to 3↑↑↑3 would almost certainly not result in a single accident throughout that civilization's history.
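(For anyone who hasn't run into the notation: below is a quick illustrative Python sketch of Knuth's up-arrow recursion. It's nothing from the original post, and the `up_arrow` function is just my own naming.)

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's a ↑^n b, only computable for very small arguments."""
    if n == 1:
        return a ** b      # a ↑ b is ordinary exponentiation
    if b == 0:
        return 1           # standard base case for higher arrows
    # a ↑^n b = a ↑^(n-1) (a ↑^n (b-1))
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))   # 3 ↑ 3  = 27
print(up_arrow(3, 2, 3))   # 3 ↑↑ 3 = 3**3**3 = 7,625,597,484,987
# 3 ↑↑↑ 3 = 3 ↑↑ 7,625,597,484,987: a power tower of 3s about
# 7.6 trillion levels tall, far beyond anything a computer could evaluate.
# For scale, 10^86 is roughly 3^180, already exceeded by the tower's
# fourth level (3↑↑4 = 3^7,625,597,484,987).
```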
A more intuitive hypothetical wouldn't just throw out the incomprehensible number and see who gets it; it would make real-life comparisons or try to make the ratio between the beneficiaries and the cost more understandable. The easiest way to do this with such extreme ratios is with very small risks (though using risks is not actually necessary). For instance, let's say you're helping broadcast the World Cup, and you realize there will shortly be a slight flicker in the broadcast. You can prevent this flicker by pressing a button, but there's a problem: a stream of direct sunlight is on the button, so pressing it will expose the tip of your finger to sunlight for a second. This slightly increases your risk of skin cancer, which risks getting worse in a way that requires major surgery, which slightly risks one of those freak reactions to anesthesia where you're paralyzed but conscious and in torturous pain for the whole surgery. (You believe you have gotten sufficient sunlight exposure for benefits like Vitamin D already, so more exposure at this point would be net-negative in terms of health.) Is it worth the risk to press the button?
If someone thinks there's something fundamentally different about small risks, the same scenario works without them; it just requires a weirder hypothetical. Let us say that human civilization has created and colonized earth-like planets around every star in the universe, and further has invented a universe-creation machine, created a number of universes like ours equal to the number of atoms in the original universe, and colonized at least one planet for every star in every universe. On every one of those planets they broadcast a sports match, and you work for the franchised broadcasting company that sets policy for every broadcast. Your job consists of deciding policy for a single question: if the above scenario occurs, should franchise operators press the button despite the tiny risk? You have done the research and know that, thanks to the sheer number of affected planets, it is a statistical near-certainty that a few operators will get skin cancer from the second of finger sunlight exposure and then have something go wrong with surgery such that they experience torture. Does the answer somehow change from the answer for a single operator on a single planet, since it is no longer just a "risk"? Is the morality different if instead of a single franchise it's split up into 10 companies, and it works out so that each company has a less than 50% chance of the torture occurring? What if instead of 10 companies it's a different company on each planet making the decision, so for each one it's no different from the single-planet question? Even though the number of people in this multiverse hypothetical is still a tiny fraction of 3↑↑↑3, I think a lot more people would say that it's worth it to spare them that flicker, because the scale of the ratio has been made more clear.
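(To see how "a statistical near-certainty" falls out of the arithmetic, here's a toy calculation; both numbers are invented purely for illustration, not estimates of any real risk.)

```python
import math

p_bad_chain = 1e-24    # invented: P(sunlight -> skin cancer -> botched surgery -> torture) per operator
n_operators = 10**25   # invented: number of planets, each with one broadcast operator

expected_victims = p_bad_chain * n_operators   # 10 tortured operators in expectation

# Naively computing 1 - (1 - p)**N underflows, since (1 - 1e-24) rounds to 1.0,
# so use log1p/expm1 for the probability that at least one operator is tortured:
p_at_least_one = -math.expm1(n_operators * math.log1p(-p_bad_chain))

print(expected_victims)   # 10.0
print(p_at_least_one)     # ~0.99995 -- effectively certain
```

The point the hypothetical leans on is just that once N times p is comfortably above 1, "a tiny risk per person" and "it will definitely happen to someone" are the same fact viewed from different distances.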
This isn't a fair assertion because you neglect the difference your hypothetical makes in the "slight beneficiaries". Dust specks truly do have no noticeable effect on people. Things like aesthetic buildings, time saved putting up Christmas decorations, spice in food, etc. can easily have enough of an effect on people to change their lives. I would never choose torture over dust specks, but (depending on knock-on effects) I could easily be convinced to allow someone to be injured for enough time saved elsewhere, or for an aesthetic building.
Also, real life is nowhere near as clean as these hypotheticals, and focusing more on safety has many negative knock-on effects elsewhere. It's not so simple as just "we prefer aesthetic buildings to safe people" because there are SO MANY principles in play in real life, from economics (the harm of mandating safety everywhere--can we give the government that much power?) to technology (if we care that much about safety then we'll never reach immortality etc.) to philosophy (maybe God put us here for a reason and that includes suffering sometimes) to X-risk (why worry about workplace accidents when we could worry about nukes?) to pragmatism (resources better spent elsewhere) to game theory (if you focus on safety, some other country will outcompete you, or some business rival will), and honestly I could go on and on with other considerations which immediately take precedence over safety when you try to make real life into a thought experiment.
In short, life isn't a thought experiment, and in this case it doesn't work to say it proves something about the dust specks.
More importantly, moral intuition doesn't generally need to be built to account for such enormous numbers. I expect that anyone calculating their risk of skin cancer is losing far more utility to the calculation itself than they are to the risk of skin cancer. Genuinely, even going so far as to write out a company policy for that ridiculous scenario (where 3^^^3 people risk skin cancer) would mean asking all of your employees to familiarize themselves with it, which would mean wasting many lifetimes just to save one lifetime from skin cancer.
The other thing is, your last example is still much more mathematically favorable towards the "dust specks" side than the original question was. Many people enjoying a game is (imo) much more significant than many people getting dust specks, while a few people getting skin cancer is much less significant than one person getting tortured for 50 years.
I realize I'm fighting the hypothetical here, but at some point when the numbers are so absurd you kind of have to fight it. The whole point (which I disagree with when the numbers are this big) is that "shut up and multiply" just works, so here's a counter-experiment for you:
Define "Maximally Miniscule Suffering" as something like: "One iota of a person's field of vision grows a fraction of a shade dimmer for a tiny fraction of a second. They do not notice this, but their qualia for that moment is reduced by an essentially imperceptible amount. This suffering has no effect on them beyond the moment. Do this for 3^^^3 people."
Define "Maximal suffering" as something like:
a. Stretch out a person's nerves to cover an entire planet. Improve their brain so that they can feel all of these nerves. Torture every single millimeter of exposed nerve. Do similar things for emotional, psychological torture, etc.
b. Do (a) until the heat death of the universe
c. Do (b) for 100 trillion people
d. Repeat (c) once for each time any member of (c) experienced any hope
e. Repeat (d) until nobody experiences any hope
f. Find a new population and repeat (a-e)
g. Repeat (f) 10^100 times. Select one person from each repetition of (f) who has suffered the most out of their cohort, and line them up randomly.
h. If all 10^100 people from (g) aren't lined up in order of height, repeat (g).
Would you choose Maximal Suffering above Maximally Miniscule Suffering? Because mathematically, and in terms of EY's point, I don't see how this differs from the original dust speck thought experiment.
Sure, that's the cost of using real-life comparisons, but do you really think that's the only thing making some of those tradeoffs worthwhile? That in a situation where it didn't also affect economic growth and immortality research and so on, it would be immoral to accept trades between even minuscule risks of horrific consequences and very small dispersed benefits? We make such tradeoffs constantly and I don't think they need such secondary consequences to justify them. Say someone is writing a novel and thinks of a very slightly better word choice, but editing in the word would require typing 5 more letters, slightly increasing his risk of developing carpal tunnel, which increases his risk of needing surgery, which increases his risk of the surgeon inflicting accidental nerve damage that inflicts incredibly bad chronic pain for the rest of his life, equivalent to being continuously tortured. Yes, in real life this would be dominated by other effects like "the author being annoyed at not using the optimal word" or "the author wasting his time thinking about it" - but I don't think those effects are what's needed to make it a reasonable choice. I think it's perfectly reasonable to say that on its own very slightly benefiting your thousands of readers outweighs sufficiently small risks, even if the worst-case scenario for the edit is much worse than the worst-case scenario for not editing. And by extension, if you replicated this scenario enough times with enough sets of authors and readers, then long before you got to 3↑↑↑3 readers enough authors would have made this tradeoff that some of them would really have that scenario happen.
While the number 3↑↑↑3 is obviously completely irrelevant to real-life events in our universe, the underlying point about scope insensitivity and tradeoffs between mild and severe events is not. Yudkowsky just picked a particularly extreme example, perhaps because he thought it would better focus on the underlying idea rather than an example where the specifics are more debatable. But of course "unlikely incident causes people to flip out and implement safety measures that do more damage than they prevent" is a classic of public policy. We will never live in a society of 3↑↑↑3 people, but we do live in a society of billions while having mentalities that react to individual publicized incidents much as if we lived in societies of hundreds. And the thing about thinking "I'd never make tradeoffs like that!" is that such tradeoffs are sufficiently unavoidable in public policy that this just means you'll arbitrarily decide some of them don't count. E.g. if the FDA sincerely decided that "even a single death from regulatory negligence is too much!", probably that would really mean that they would stop approving novel foods and drugs entirely and decide that anyone who died from their lack wasn't their responsibility. (And that mild effects, like people not getting to eat slightly nicer foods, were doubly not their responsibility.)
But it isn't nullifying their enjoyment of the game, it's a slight barely-noticeable flicker in the broadcast. (If you want something even smaller, I suppose a single dropped frame would be even smaller than a flicker but still barely noticeable to some people.) If you're making media for millions of people I think it's perfectly reasonable to care about even small barely-noticeable imperfections. And while the primary cost of this is the small amount of effort to notice and fix the problem, this also includes taking minuscule risks of horrific costs. And it isn't a few people getting skin cancer, it's the fraction of the people who get skin cancer that then have something go wrong with surgery such that they suffer torture. I just said torture during the surgery, but of course if you multiply the number of planets enough you would eventually get high odds of at least one planet's broadcast operator suffering something like the aforementioned ultra-severe chronic pain for a more direct comparison.
Feel free to modify it to "making a design tradeoff that either causes a single dropped frame in the broadcast or a millisecond of more-than-optimal sunlight on the broadcast operator", so that it doesn't consume the operator's time. I just chose something that was easily comparable between a single operator making the choice and making the choice for so many operators that the incredibly unlikely risk actually happens.
Sure. Same way that if I had a personal choice between "10^100 out of 3↑↑↑3 odds of suffering the fate you describe" and "100% chance of having a single additional dropped frame in the next video I watch" (and neither the time spent thinking about the question nor uncertainty about the scenario and whether I'm correctly interpreting the math factored into the decision), I would choose to avoid the dropped frame. I'm not even one of the people who finds dropped frames noticeable unless it's very bad, but I figure it has some slight but not-absurdly-unlikely chance of having a noticeable impact on my enjoyment, very much unlike the alternative. Obviously neither number is intuitively understandable to humans but "10^100 out of 3↑↑↑3" is a lot closer to "0" than to "1 out of the highest number I can intuitively understand".
To be clear here, I have two main points:

1. Some categories of pain are simply incomparable to others (either because they're simply different or because no amount of one kind of suffering will ever equal or surpass the other)
2. Moral reasoning is not really meant for such extreme numbers
Has anyone ever experienced such nerve damage as a result of a decision they took? Do we know that it's even theoretically possible? I can't imagine that really any amount of carpal tunnel is actually equivalent to many years of deliberate torture, even if 3↑↑↑3 worlds exist and we choose the person who suffers the worst carpal tunnel out of all of them. So I'd probably say that this risk is literally 0, not just arbitrarily small. I have plenty of other ways to fight the hypothetical too--things like time considering the choice (which you mentioned), the chance that a better word choice will help other people or help the book sell better, etc.
The point in fighting the hypothetical is to support my point #2. At some point hypotheticals simply don't do a very good job of exposing and clarifying our moral principles. I generally use "gut feelings" to evaluate these thought experiments, but these gut feelings are deeply tied to other circumstances surrounding the hypothetical, like the (much, much greater) chance that a better word choice will lead to better sales or a substantially better reader experience for someone.
Common sense says you shouldn't worry about carpal tunnel when typing. It's easy to say "ok ignore the obvious objections, just focus on the real meat of the thought experiment" but hard to convince common sense and ethical intuition to go along with such a contrived experiment. I'll try and reverse it for you, so that common sense/ethical intuition are on my side but the meat of the argument is the same.
Let's go back to my original scenario of Maximally Miniscule Suffering vs. Maximal Suffering. You are immortal. You can either choose to experience all of the suffering in Maximal Suffering right away, or all of the suffering in Maximally Miniscule Suffering right away.
I think this gets to the heart of my point because:

1. If you sum up all of the suffering and give it to a single person, IMO the minimal suffering will add up to a lot less than the maximal suffering. The former is simply a different type of suffering that I don't think ever adds up to the latter. I would much rather see in black and white for a practically infinite amount of time than experience a practically infinite amount of torture.
2. By the time you're finally through with maximal suffering in 10^10^100 years or so you will basically be totally insane and incapable of joy. But let's ignore that and assume that you'll be fine. I bring this up because I think even though I say "let's ignore that", when it comes to ethical intuition, you can't really just ignore it; it will still play a role in how you feel about the whole scenario. The only way to really ignore it is to mentally come up with some add-on to the thought experiment like "and then I'm healed so that I am not insane", which fundamentally changes what the thought experiment is.
It is precisely the ability to convert between mild experiences and extreme experiences at some ratio that allows everything to add up to something resembling common-sense morality. If you don't, if the ranking of bad experiences from most mild to most severe has one considered infinitely worse than the one that came before, then your decision-making will be dominated by whichever potential consequences pass that threshold while completely disregarding everything below that threshold, regardless of how unlikely those extreme consequences are. You seem to be taking the fact that the risks in these hypotheticals are not worth actual consideration as a point against these hypotheticals, but of course that is the point the hypotheticals are making.
Nothing in the universe will ever be 3↑↑↑3, but 7 billion people is already far beyond intuitive moral reasoning. We still have to make decisions affecting them whether our moral reasoning is meant for it or not. Which includes reacting differently to something bad happening to one person out of millions of beneficiaries than to one person out of hundreds of beneficiaries.
In some percentage of cases the cancer spreads to your brain, you get surgery to remove the tumor, and the brain surgeon messes up in precisely the right way. Both "locked-in syndrome" and chronic pain are things that happen; it's hardly a stretch to think a combination of both that paralyzes you for 50 years while you experience continuous agony is physically possible. And of course even if you were uncertain whether it was physically possible, that's just another thing to multiply the improbability by. Rounding the probability down to 0 makes sense in terms of practical decision-making; the point is that "1 in 3↑↑↑3" odds are unimaginably less likely, so you should round them down to 0 too.
I do not think this is a meaningful statement. We can decide which scenario is preferable and call that something like "net utility" but we can't literally "add up" multiple people's experiences within a single person. It doesn't have a coherent meaning so we are free to arbitrarily imagine whatever we want. That said, to the extent that its meaning can be nailed down at all, I think it would favor avoiding the 3↑↑↑3 option. My understanding is that a single pain receptor firing once is not noticeable. If a form of suffering is instead barely noticeable, it is presumably "bigger" than a single pain receptor firing. There are only 37 trillion cells in the human body, so the number of pain receptors is something smaller than that. So the first step in multiplying barely-noticeable suffering by 3↑↑↑3 is that it goes from "worse than a pain receptor firing" to "worse than every pain receptor firing continuously for an extended period". And that doesn't make a dent in 3↑↑↑3, so we multiply further, such as by making it last unimaginably longer than merely 10^100 times the lifespan of the universe.
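(To put a rough number on "doesn't make a dent": a back-of-the-envelope sketch using the loose figures from this comment plus an assumed timescale, nothing precise.)

```python
import math

receptors  = 37e12    # upper bound: pain receptors are fewer than the body's ~37 trillion cells
seconds    = 4.4e17   # assumed: current age of the universe, in seconds
multiplier = 1e100    # "10^100 times the lifespan of the universe"

# The whole conversion factor is a number with only ~131 digits:
conversion_digits = math.log10(receptors * seconds * multiplier)

# Whereas even 3↑↑4 -- just the fourth level of the 7.6-trillion-level power
# tower that is 3↑↑↑3 -- already has on the order of 3.6 trillion digits:
tower_level_4_digits = 7_625_597_484_987 * math.log10(3)

print(conversion_digits)      # ~131
print(tower_level_4_digits)   # ~3.64e12
```

Dividing 3↑↑↑3 by a 131-digit number leaves something that, for every purpose here, is still 3↑↑↑3.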
That is a pretty arbitrary and meaningless matter of interpretation though. A more meaningful measure would be the Rawlsian veil of ignorance: you're a random member of a population of 3↑↑↑3; is it better for you that 10^100 of them be tortured or that all of them experience a dropped frame in a video? This is equivalent to what I answered in my previous post: that it would be foolish to sacrifice anything to avoid such odds.
Yes, this is essentially how I think morality and decision-making should work. Going back to your word choice example, the actual word choice should matter not at all in a vacuum, but it has a chance of having other effects (such as better book sales, saving someone's life from suicide, etc.) which I think are much more likely than the chance that typing in the extra word causes chronic torturous pain.
In real life, small harms like stubbing a toe can lead to greater harms like missing an important opportunity due to the pain, breaking a bone, or perhaps snapping at someone important due to your bad mood. If we could ignore those side effects and focus on just the pain, I would absolutely agree that
With the appropriate caveats regarding computation time and other side effects of avoiding those extreme consequences.
See this is kind of my point. I don't think we can just say that there's "net utility" and directly compare small harms to great ones. I agree that it doesn't necessarily make much sense to just "add up" the suffering though, so here's another example.
You're immortal. You can choose to be tortured for 100 years straight, or experience a stubbed toe once every billion years, forever. Neither option has any side effects.
I would always choose the stubbed toe option even though it adds up to literally infinite suffering, so by extension I would force infinite people to stub their toes rather than force one person to be tortured for 100 years.
edit: One more thing, it's not that I think there's some bright line, above which things matter, and below which they don't. My point is mainly that these things are simply not quantifiable at all.
This is an interesting thought experiment and I'm glad you've brought it to my attention. I appreciate it and think this place could use more of these.