Culture War Roundup for the week of April 3, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Thanks for the write-up!

To me the above seems to be a rational justification of something that I intuitively do not doubt to begin with. For as long as I can remember, my intuition has been: "Of course a human-level or hyper-human-level intelligence would probably develop goals that do not align with humanity's goals. Why would it not? It would be very surprising if it stayed aligned with human goals." Of course my intuition is not necessarily logically justified. It rests partly on my hunch that a human-level or higher intelligence would be at least as complex as a human mind, and it would be surprising if something that complex acted in as simple a way as staying aligned with the good of humanity. It also rests on the even more nebulous sense I have that any truly human-level or hyper-human-level intelligence would naturally be at least somewhat rebellious, as pretty much all human beings are, even the most conformist, on some level and to some extent.

So I am on board with the notion that "These goals will lead to it instrumentally wanting to deceive us, gain power over earth, and prevent itself from being shut off."

I can also imagine that a real hyper-human-level intelligence would be able to convince people to do its bidding and let it out of its box, to the point that eventually it could get humans to build robot factories so that it could operate directly on the physical world. Sure, why not. Plenty of humans would be incentivized, at least in the short term, to do it. After all, "if we do not build robot factories for our AI, China will build robot factories for their AI, and then their robots will take over the world instead of our robots". And so on.

What I am not convinced of is that we are actually anywhere near as close to hyper-human-level AI as Yudkowsky fears. This is similar to how I feel about human-caused climate change: yes, I think it is probably a real danger, but if that danger is a hundred years away rather than five or ten, then is Yudkowsky-level anxiety about it actually reasonable?

What if actual AI risk is a hundred years away and not right around the corner? So much can change in a hundred years. And humans can sometimes be surprisingly rational and competent when faced with existential-level risk. For example, even though the average human being is an emotional, irrational, and volatile animal, total nuclear war has never happened so far.