
Culture War Roundup for the week of May 15, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

In all seriousness, top companies had to have prepared PR teams for this scenario.

They very much haven't.

I think it is impossible to overstate just how far outside the bounds of thought EY-style doomerism has been and remains for... well, everyone except the "rationalists." It is literally impossible to talk about "AI safety" with normal human beings without them looking at you like you have two heads. The logic doesn't matter. The world runs on inductive reasoning, not deductive reasoning. Because "AI safety" has never been a problem in real life so far, it is literally impossible for normal people to understand it, much less take it seriously. If you try to explain it, you will notice that they cock their heads while they listen to you, and this is from the cognitive effort of rewriting your arguments in real time, as they hear them, to be about jobs and racial bias instead of AI safety.

I am not an AI doomer. I subscribe to exactly your view with respect to Ehrlich and Yudkowsky, and it's well said.

But I am reporting to you, from the corporate front lines, that every single person in a position of authority has a brain defect that makes it literally impossible for them to understand the concept of "AI safety." They don't disagree with AI safety concerns; they cannot disagree with the concerns, because they cannot understand them, because when you articulate a thought about AI safety, the words completely fail to engender concepts in their brain that relate to AI safety. They cannot even understand that other people have thoughts about the concept of AI safety, except perhaps as a marketing ploy to overstate the commercial utility of various AI-powered systems.

So the PR people have not planned a response, and the policy people have not engaged with the concept, and the executives have not been briefed, and you should expect large companies to remain as uncomprehending about the topic of AI safety as they would be about the threat of office wall art coming to life and eating their children.

Because "AI safety" has never been a problem in real life so far,

"The Facebook algorithm accidentally ordered the genocide of the Rohingya in Burma in order to drive clicks" is sufficiently truth-adjacent that I no longer believe this.

"The Facebook algorithm accidentally ordered the genocide of the Rohingya in Burma in order to drive clicks" is sufficiently truth-adjacent that I no longer believe this.

(head cocks)

"Oh, yes, we have a huge team working on AI misinformation and AI racial bias to avoid incidents like that, that is indeed exactly what AI safety means and we are leaders in the field."