Culture War Roundup for the week of March 25, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

You are basically saying that humanity could never lose, which contrasts with your prediction of the breakdown of society at large through human folly alone, despite little desire on the part of humanity for that to happen.

No, I'm not. At no point in this exchange have I said or even implied that humanity can never lose. AGI doesn't need to be superintelligent to have a non-zero chance of wiping out humanity, and possibly even a very high chance. Humans could probably wipe out humanity if they were bent on it; why couldn't a human-equivalent or even subhuman AGI do the same?

Saying "most of these possible technologies probably won't be possible even by something that is farther above humanity than humanity is to squirrels" is missing the point.

It is not evident that "something that is farther above humanity than humanity is to squirrels" is a thing that can actually exist. It's entirely possible that such a thing can't exist; intelligence might be bound by diminishing returns. It's also entirely possible that such a thing could exist, but that it will remain beyond our reach to create for the foreseeable future.

This is the sort of argument someone from 1524 would use to explain why they doubted they could be beaten by an army from 2024. It does not matter what specific hypothetical future technologies you think are implausible. The prediction of doom does not rely on that.

The "prediction of doom" is about building an idea in the reader's mind of a problem that only has one solution. To that end, the threat is specifically and arbitrarily described to exclude all other possible solutions. It is worth noting that this is what these arguments are actually about, to take ten big steps backward and ten more to the side, and look around the edges of the picture that is being painted across the entirety of one's field of view.

The assumed disparity between the AGI and humans is exactly what I'm pointing out has no factual basis. It is entirely theoretical, based on a chain of suppositions that may or may not actually be valid, which I have tried to summarize in the three points above. You are simply recapitulating the premise, but the premise is what I am questioning. What if the AGI isn't as smart compared to us as we are to squirrels, because intelligence scaling doesn't work that way? What if it can't bootstrap itself into super-advanced technology, because we're already pushing up against local maxima?

The point of these predictions and speculations is to convince the listener that AGI is a horrifying threat, and alignment is the only solution. The meta-argument is simple: "Imagine something vast and malevolent that will do horrible things to you and everyone else, which you are absolutely powerless to stop." Very well, I've imagined it. Imagining it makes it neither real nor inevitable. It being plausible is not the same as it being certain, or even likely. There is, as I've noted in a recent conversation, a crucial difference between "we can prove this is true" and "we can't prove this is false". Omnipotent AGI is firmly within the latter category, but it seems to me that most AI doomers speak and act as though it is in the former. I decline to do the same.

And suppose that I'm wrong, and Malevolent AGI arrives, and wipes us all out. We will then have suffered... roughly the same fate every human before us has suffered since the advent of the species. We will have each, as individuals, lived a life and then died a death. There are some who would consider the abrupt and final termination of our species a mercy; I would strongly disagree, but the point is not a trivial one.

To use another example, it is like someone asking an expert how a chess engine will beat them at chess.

I disagree. It's like asking a chess expert whether a computer could beat a human at chess, before the invention of computers. The correct answer to that question is "I don't know."