This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Seriously.
I've been asking blue-choosers who they think they're saving by picking blue.
That is, who is choosing blue, OTHER than the people who think they're saving someone by picking blue?
And if the only people who are picking blue are the ones trying to save someone, they are now the only ones in need of saving. They all jumped off a bridge thinking they would save someone, when there was nobody who needed saving prior to them jumping.
It's a self-fulfilling prophecy which can easily be sidestepped by choosing red.
If you can posit a person who picks blue for some innocent reason other than a desire to look like a moral person or the desire to save someone else, then you've got the beginnings of an argument.
Otherwise, you're just creating risk where no risk needed to exist.
Literally, if I were a Supervillain playing the game, I would be trying to maximize death toll by convincing some people to choose blue. I'd lie and say I was choosing blue then mercilessly defect.
"I am choosing red and you should too" provides zero reason to lie.
It's really reminding me of the saying beloved of our mothers: "And if everyone jumped off a cliff, would you do that too?"
Now I'm imagining the infant Blues rebuking her with "Mother, how selfish! In order to assure the 50%+ victory over gravity where, if a sufficiency of us jump off the cliff, we will magically float safely and slowly down to solid ground, I too must and shall jump! The lemmings, Mother, the lemmings! Can Nature in its infinite wisdom, honed over millions upon millions of years of evolution, be wrong?"
The premise is that everybody who responds to the poll makes their choice through that response. Are misclicks such an insane possibility that they haven't even occurred to you?
The thought experiment literally posits colored pills, which implies this isn't just a button on a screen, as presented.
So I'm imagining a person who has two pills in front of them, and has it explained to them what each one does. And, magically, knows for certain that these explanations are 100% truthful.
So I cannot imagine someone thinking "I'm picking the red pill!" and then somehow, just completely brain farting and grabbing the blue one.
And believe me, if misclicking meant living or possibly dying, I'd be pushing that mouse around with the slowest movements possible.
I actually posit that the hypothetical, as presented, doesn't allow for the possibility of a misclick. Given the life-or-death stakes involved, if you misclick, then that's a consequence of your choice not to take precautions against a misclick by doing something like what you suggest. I'd personally zoom in or scroll the page until my non-preferred option is literally not on the screen before my mouse or finger is even over any of the options. And obviously there would have to be a decent time gap between press-down and release of the finger on the mouse button or the touch screen, so that I can visually verify that the button I intended to click was indeed the one I clicked (usually you can cancel such clicks by dragging the mouse off the button before letting go).
Freak occurrences happen, I suppose, including a random bit of cosmic radiation flipping a bit on your PC to switch your choice to the other one. These seem like such unlikely and uncommon outliers that they can be effectively rounded down to zero. Otherwise, if someone misclicks, I would consider that just an active choice the person is making that they don't really care if they have to face the consequences of pressing the blue button or the red one.
Yeah, if we're allowed to control the circumstances under which the click occurs, I'd run a script which removes the blue option from the screen entirely. I would be preventing any possible avenue by which I might push the universe into the state where the 'blue' option was selected and transmitted.
I can absolutely accept some tiny, tiny chance that a player screws up the choice. But even if it's not quite small enough to drop out of my reasoning entirely, it may as well be.
Yeah I guess that's true. Still:
You still have to add quite a lot to make the premise 100% work: they magically know the explanations are 100% truthful, nobody is blind/colorblind, nobody is insane or too young to truly understand the decision, and so on. Even given a very generous interpretation of the premise, I think at least one person from one of those categories will still be around.
Even if literally everybody perfectly understands the question, not everybody will choose red. Some people are just dumb. Some would rather sacrifice themselves than risk even an infinitesimal chance that they're responsible for another's death, or perhaps they'd rather sacrifice themselves than even admit the possibility of such responsibility, even if the probability is 0. Even if everybody is quite rational and understands the game theory, people have different values and/or may not decide upon the same Schelling point as everyone else.
In reality, no matter how rational everyone is, I'd be utterly shocked if everyone chose red, regardless of what the "correct" answer is. Thus the correct answer (assuming it's reasonably likely to succeed) is blue.
What % of the whole is dumb, though? Because now we're adding in irrational/random actors, which makes it even less certain that we'll meet our blue threshold, because some of those will also be choosing red for dumb reasons.
I have a hard enough time modelling other rational actors in this game, now add the ones who will do things for reasons I can't even fathom!
And if we posit dumb actors, why not posit evil ones as well who are inclined to maximize death toll?
I wouldn't be utterly shocked if everyone chose red (self-interest is a hell of a drug), but I wouldn't be utterly shocked if, say, 30% chose blue, between those who were dumb and those who thought they were helping.
But expecting only 30% to choose blue is explicitly a reason for me to choose red.
And since the hypo doesn't present a mechanism under which you can reliably predict that the outcome for blue would be over 50%, I am pretty much going to pick the one which provides certainty.
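To put rough numbers on that uncertainty, here's a toy Monte Carlo sketch. The 30% mean blue share and the spread are made-up assumptions on my part, not anything given in the hypothetical:

```python
import random

# Toy model: everyone lives if >50% pick blue; otherwise blue-pickers die.
# The true blue share is unknown in advance, so model our belief about it
# as a noisy guess (the mean and spread here are invented numbers).
def blue_survival_prob(trials=100_000, mean_blue=0.30, spread=0.15, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        # Draw a possible blue share, clamped to [0, 1].
        share = min(1.0, max(0.0, rng.gauss(mean_blue, spread)))
        if share > 0.5:
            wins += 1
    return wins / trials

# A red-picker survives with probability 1 regardless of the draw;
# a blue-picker survives only when the blue share clears the threshold.
print(blue_survival_prob())
```

Under these invented numbers, blue clears the threshold well under half the time, while red's survival probability stays at 1 no matter what distribution you plug in.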
It's a fair question, but I still think the framing is off. I'm not adding irrational actors; they're already part of the scenario as written.
Sure, I just don't think there are as many of them as there are pathological altruists, who will choose blue even when blue odds are very low.
Agreed. I like to think I would still choose blue if it came down to it, though, because (valuing my own life equal to others) I simply think it has higher EV.
Maybe? I am kind of working off the assumption that everyone who is capable of participating in the choice is able to at least understand that one choice is "100% chance of survival" even if they can't make complex moral calculations.
I grant that we can't be certain what number of people are irrational, though, which complicates the issue further.
My assumption was that everyone who responded to the poll participates, and this likely includes a few babies and imbeciles.
I mean, that seems weird: how would they stumble across the poll in the first place?
Unless someone was spreading the poll intentionally to get them involved, which seems risky and possibly evil.
I dunno. It doesn't change my ultimate answer. If I can't know who is and how many are participating in the poll in advance, I'm REALLY not going to try to go for some galaxy-brained play that might backfire.
Not just no reason to lie, but no need to trust. The cost of trusting that someone will vote blue is that if they vote red, you may die if you vote in accordance with the agreement. The cost of trusting that someone will vote red is that if they vote blue, they might die if you vote in accordance with the agreement.
Lying is punished when the agreement is voting red; lying isn’t punished when the agreement is to vote blue.
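That asymmetry can be sketched as a tiny payoff function. The specific 60%/30% blue shares are made-up assumptions, just to make the pact-kept and pact-broken cases concrete:

```python
# Sketch of the trust asymmetry: your outcome given your own vote and
# whether everyone else honors a "vote blue" pact. Blue only survives
# if the blue bloc clears 50%; the 0.6/0.3 shares are invented numbers.
def my_outcome(my_vote, others_keep_blue_pact,
               blue_share_if_pact=0.6, blue_share_if_broken=0.3):
    blue_share = blue_share_if_pact if others_keep_blue_pact else blue_share_if_broken
    if my_vote == "red":
        return "live"  # red never depends on anyone else's behavior
    return "live" if blue_share > 0.5 else "die"

# Under a blue pact, a defection costs *you* your life, not the defector theirs.
print(my_outcome("blue", others_keep_blue_pact=False))  # die
print(my_outcome("red", others_keep_blue_pact=False))   # live
```

The point the sketch makes: only the blue agreement hands your survival to strangers' honesty.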
Yes, the trustless aspect cannot be overstated when you're playing a game with strangers, potentially millions of them, and have no enforcement mechanism.
It's virtually guaranteed that some avowed blue-pickers will have a panic attack and go with red when the time to choose actually arrives. I suppose some red-pickers might have a crisis of conscience and go blue, but holy cow, if you have no other information to go on, Red is the one that doesn't require faith in strangers.
Trying to play the recursive game (I know that he knows that I know that he knows I'll pick blue, therefore...) seems like an inherently losing approach.
It just seems so obvious, yet some people are arguing against it. I honestly cannot model their thought process (outside of them treating the thought exercise solely as an exercise, instead of thinking about how people would actually vote if voting the wrong way could lead to their death).
The closest I've gotten is that they actually believe that "Altruism is a Schelling Point."
"I want to save people, and other people will too, so they'll accept the risk and we'll all pick blue."
But they can't fully articulate WHY they believe someone actually needs saving. They reason out why someone would pick blue based on altruism, but not why someone would pick blue a priori and thus need to be rescued. So why do we need altruism?
And on the meta level, I think they may be assuming that how people behave in this thought experiment is how they'd behave in other scenarios in which case they think reds are inherently self-interested.
But no, I'm capable of being altruistic, I can just recognize that this specific situation is one where it is best to shut off the altruistic impulse.
I genuinely WANT to understand the position that allows one to pick blue believing it to be the best action.
But it seems to require that you start with premises that are completely inborn or 'faith-based.'