
Culture War Roundup for the week of December 5, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

because Bay Area Omertà.

Yeah I've heard some awful shit (like death threats to AI researchers from AI safety folks) but can never actually accuse anyone because the code of silence among that group is surprisingly strong.

There are some public accusations - such as the stuff against Leverage Research here, here, the drama in comments, commentary, and the potentially-wrong imitation of the Curzi post. Even there, there's a sense it should've been made public earlier. There's also the Ziz stuff.

There are also periodic struggle sessions about being safe for women - in the comments, specific people are called out for misconduct.

Any of these would make for great effortposts on the marsey site!

Yeah, but it's all within the group. Whisper campaigns. Nods and winks about "we can't invite so-and-so because, well, you know what he's like". But it never goes outside - never any suggestion that maybe, if so-and-so is such a pest that it's not safe to be around him if you're a woman or trans or whatever, then this behaviour is at the level of "get the authorities involved". Lots of internal drama and struggle sessions, but dead silence around outsiders.

That "safe for women" link - and what did this delicate blossom do, apart from clenching fists and tears streaming down her face? Write a long blog post that will only be read by insiders, instead of (if there really is a genuine problem, instead of simply "you are not validating my lived experience and so I must stamp my foot!") doing anything concrete - even just going "hump this for a game of soldiers" and walking away.

Agreed! Those posts, including Ziz's blog and general craziness, are what I was thinking of. I've also heard some recent stuff said in confidence by a couple of friends in the space. At one AI safety workshop there was apparently an older gentleman who owned an AI research company. In the anonymous 'ideas to save the world' spreadsheet, someone reportedly wrote "kill X person (owner of the AI company, in that same workshop)."

My friend reported that the young event organizers just kind of nervously laughed and said don't do it again, without really addressing it or trying to find out who was responsible. That kind of behavior with kids in/just out of college who legitimately think the world will end soon is deeply troubling to me.

anonymous 'ideas to save the world' spreadsheet

My friend reported that the young event organizers just kind of nervously laughed and said don't do it again, without really addressing it or trying to find out who was responsible.

Trying to find out who was responsible for a specific post on a designed-to-be-anonymous spreadsheet would have been a massive breach of trust.

True... but that sort of death threat, even joking, should warrant immediately shutting down the workshop and severely scolding everyone in any remotely healthy group setting. Again, especially when dealing with impressionable and radicalized young folks. I see it as only a matter of time before AI safety terrorists start doing incredibly dumb things, shooting the movement in the foot.

Zounds, am I sick of "Death Threats" discourse. What named person has ever actually been killed following an anonymous death threat from some extremely online dork? If you'd looked around the AI conference and seen the scrawny pale guys there, you wouldn't have worried about the death threat either. Death threats can't be used as some trump card of oppression; they're too easy to fake and too hard to verify.

It's also actually a good conversation starter. If X's work is going to destroy the world, don't we have a responsibility to restrain him if he won't restrain himself? There's a lot of good philosophical debate to be had there, it wouldn't surprise me if X threw it in the hat himself to start discussion.

On the forum in particular and in EA discourse in general, there is a tendency to give less weight/be more critical of posts that are more emotion-heavy and less rational

Boy oh boy. It's sad to see how much effort Scott et al. need to expend on defense in the comments for every post like this, while the authors can just post the same one each week until people are ground down and give in. The last one was demanding Title IX inquisitions for EA, right?

Struggle sessions about being safe for women, in particular, seem more like a null hypothesis than an indication of anything actually wrong. If anything, I would expect them to be counter-associated with actually being an unsafe place for women.

Oh, agreed - the above is evidence of EA publicly discussing 'awful shit', not evidence that EA is bad - as the comments go into, EA puts a lot of effort into being a 'safe place for women'.