Culture War Roundup for the week of May 15, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Other threats have come (nuclear weapons) and we've always come through them

I would actually really like to see a rebuttal of this one, because the doomer logic (which looks correct to me) implies that we should all have died decades ago in nuclear fire. Or, failing that, that we should all be dead of an engineered plague.

And yet here we are.

The threat model is different. Nuclear weapons are basically only useful for destroying things; you don't build one because a nuke makes things better for you in a vacuum, but because it prevents other people from doing bad things to you, or lets you go do things to other people. Genetic engineering capabilities don't automatically create engineered plagues; some person has to deliberately use those capabilities in that fashion. I'm not familiar with the state of the art in GE, but I was under the impression that the knowledge required for that kind of catastrophe wasn't quite there. Further, I think there are enough tradeoffs involved that accidents are unlikely to make outright x-risk plagues, the same way getting a rocket design wrong probably makes 'a rocket that blows up on takeoff' instead of 'a fully-functional bullet train'.

AI doom has neither of those problems. You want AI because (in theory) AIs solve problems for you, or make stuff, or let you not deal with that annoying task you hate. And, according to the doomer position, once you have a powerful enough AI, that AI's goals win automatically, with no desire for that state required on any human's part, and the default outcome of those goals does not include humans being meaningfully alive.

If nuclear bombs were also capable, by default, of being used as slow transmutation devices that gradually turned ordinary dirt into pure gold or lithium or iron or whatever else you needed, and if every nuke had a very small chance per time period of converting into a device that rapidly detonated every other nuke in the world, I would be much less sanguine about our ability to have avoided the atomic bonfire.

So I have two points of confusion here. The first point of confusion is that if I take game theory seriously, I conclude that we should have seen a one-sided nuclear war in the early 1950s that resulted in a monopolar world, or, failing that, a massive nuclear exchange later that left either 1 or 0 nuclear-capable sides at the end. The second point of confusion is that it looks to me like it should be pretty easy to perform enormously damaging actions with minimal effort, particularly through the use of biological weapons. These two points of confusion map pretty closely to the doomer talking points of instrumental convergence and the vulnerable world hypothesis.

For instrumental convergence, I will shamelessly steal a paragraph from wikipedia:

Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function; therefore a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources), or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with a lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely.

This sounds reasonable, right? Well, except now we apply it to nuclear weapons, and conclude that whichever nation first obtained nuclear weapons, if it wanted to obtain the best possible outcomes for itself and its people, would have had to use its nuclear capabilities to establish and maintain dominance, and to prevent anyone else from gaining nuclear capabilities. This is not a new take. John von Neumann was famously an advocate of a "preventive war" in which the US would launch a massive preemptive strike against Russia in order to establish permanent control of the world and prevent a world with multiple nuclear powers from ever arising. To quote:

With the Russians it is not a question of whether but of when. If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o'clock, I say why not one o'clock?

And yet, 70 years later, there has been no preemptive nuclear strike. The world contains at least 9 countries that have built nuclear weapons, and a handful more that either have them or could have them in short order. And I think that this world, with its collection of not-particularly-aligned-with-each-other nuclear powers, is freer, more prosperous, and even more peaceful than the one that von Neumann envisioned.
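For concreteness, the decision rule in that quoted paragraph can be written out as a toy expected-utility comparison. This is only a sketch with invented placeholder numbers, not anything from the original argument: trade wins only when seizure is risky or costly enough, or when some other term in the utility function penalizes it.

```python
# Toy expected-utility version of the trade-vs-seizure rule quoted above.
# Every number here is an invented placeholder for illustration.

def prefers_trade(gain_trade, gain_seize, p_seize_succeeds,
                  cost_of_failed_seizure, other_utility_penalty=0.0):
    """Return True if trading beats attempting to seize the resources."""
    eu_trade = gain_trade
    eu_seize = (p_seize_succeeds * gain_seize
                - (1 - p_seize_succeeds) * cost_of_failed_seizure
                - other_utility_penalty)
    return eu_trade > eu_seize

# A "powerful, self-interested, rational" agent facing a much weaker one:
# seizure almost certainly succeeds and nothing in its utility function
# penalizes it, so trade loses.
print(prefers_trade(gain_trade=10, gain_seize=100,
                    p_seize_succeeds=0.99, cost_of_failed_seizure=50))   # False

# An actor with moral qualms, allies to keep, and a real chance of
# catastrophic retaliation: the extra terms flip the decision.
print(prefers_trade(gain_trade=10, gain_seize=100,
                    p_seize_succeeds=0.5, cost_of_failed_seizure=500,
                    other_utility_penalty=80))                           # True
```

Whether those extra terms are large enough to flip the decision for real actors (human governments then, hypothetical superintelligences later) is exactly what the rest of this exchange argues about.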

In terms of the vulnerable world hypothesis, my point of confusion is that biological weapons actually look pretty easy to make without having to do anything fancy, as far as I can tell. And in fact there was a whole thing back in 2012 with some researchers passaging a particularly deadly strain of bird flu through ferrets. The world heard about this not because there was a tribunal about bioweapon development, but because the scientists published a paper describing their methodology in great detail.

The consensus I've seen on LW and the EA forum is that an AI that is not perfectly aligned will inevitably kill us in order to prevent us from disrupting its plans, and that even if that's not the case, we will kill ourselves in short order if we don't build an aligned god which will take enough control to prevent that. The arguments for both propositions do seem to me to be sound -- if I go through each point of the argument, they all seem broadly correct. And yet. I observe that, by that set of arguments, we should already be dead several times over in nuclear and biological catastrophes, and I observe that I am in fact here.

Which leads me to conclude that either we are astonishingly lucky in a way that cannot be accounted for by the anthropic principle (see my other comment), or that the LW doomer worldview has some hole in it that I have so far failed to identify.

It's not a very satisfying anti-doom argument. But it is one that I haven't seen a good rebuttal to.

That instrumental convergence paragraph comes with a number of qualifiers and exceptions which substantially limit its application to the nuclear singleton case. To wit:

Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function; therefore a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources), or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with a lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely.

I could try to draw finer distinctions between the situations of post-WW2 USA and a hypothetical superintelligent AI, but really the more important point is that the people making the decisions regarding the nukes were human, and humans trip over the "some element in its utility function bars the action" and "self-interested" segments of that text. (And, under most conceptions, the 'rational agent' part, though you could rescue that with certain views of how to model a human's utility function.)

Humans have all sorts of desires and judgements that would interfere with the selection of an otherwise game-theoretically optimal action, things like "friendship" and "moral qualms" and "anxiety". And that's not even getting into how "having a fundamental psychology shaped by natural selection in an environment where not having any other humans around ever meant probable death and certain inability to reproduce their genes" changes your utility function in a way that alters what the game-theoretic optimal actions are.

One of the major contributors to the lack of nuclear warfare we see is that generally speaking humans consider killing another human to be a moral negative, barring unusual circumstances, and this shapes the behavior of organizations composed of humans. This barrier does not exist in the case of an AI that considers causing a human's death to be as relevant as disturbing the specific arrangement of gravel in driveways.

I haven't spent enough time absorbing the vulnerable world hypothesis to have much confidence in being able to represent its proponents' arguments. If I were to respond to the bioweapon example myself, it would be: what's the use case? Who wants a highly pathogenic, virulent disease, and what would they do with it? The difficulty of specifically targeting it, the likelihood of getting caught in the backwash, and the near-certainty of turning into an international pariah if/when you get caught or take credit make it a bad fit for the goals of institutional sponsors. There are lone-wolf lunatics that end up with the goal of 'hurt as many people around me as possible with no regard for my own life or well-being' for whom a bioweapon might be a useful tool, but most paths for human psychology to get there seem to also come with a desire to go out in a blaze of glory that making a disease wouldn't satisfy. Even past that, they'd have the hurdles of figuring out and applying a bunch of stuff almost completely on their own (that paper you linked has 9 authors!) with substandard equipment, for a very delayed and uncertain payoff, when they could get it faster and more certainly by buying a couple of guns or building a bomb or just driving a truck into a crowd.

I could try to draw finer distinctions between the situations of post-WW2 USA and a hypothetical superintelligent AI, but really the more important point is that the people making the decisions regarding the nukes were human, and humans trip over the "some element in its utility function bars the action" and "self-interested" segments of that text. (And, under most conceptions, the 'rational agent' part, though you could rescue that with certain views of how to model a human's utility function.)

My point was more that humans have achieved an outcome better than the one that naive game theory says is the best outcome possible. If you observe a situation, come up with some math to model it, use that math to determine the provably optimal strategy, and then look at the actual outcomes and see that the actors obtained an outcome better than the one your model says is optimal, you should conclude that either the actors got very lucky or that your mathematical model does not properly model this situation.

And that's not even getting into how "having a fundamental psychology shaped by natural selection in an environment where not having any other humans around ever meant probable death and certain inability to reproduce their genes" changes your utility function in a way that alters what the game-theoretic optimal actions are.

I think you're correct that the "it would be bad if all other actors like me were dead" instinct is one of the central instincts which makes humans less inclined to use murder as a means to achieve their goals. I think another central instinct is "those who betray people who help them make bad allies, so I should certainly not pursue strategies that look like betrayal". But I don't think those instincts come from peculiarities of evolution as applied to savannah-dwelling apes. I think they are the result of evolution selecting for strategies that are generally effective in contexts where an actor has goals which can be better achieved with the help of other actors than by acting alone with no help.

And I think this captures the heart of my disagreement with Eliezer and friends -- they expect that the first AI to cross a certain threshold of intelligence will rapidly bootstrap itself to godlike intelligence without needing any external help to do so, and then, with its godlike intelligence, will be able to avoid dealing with the supply chain problem that human civilization is built to solve. Since it can do that, it would have no reason to keep humans alive, and in fact keeping humans alive would represent a risk to it. As such, as soon as it established an ability to do stuff in the physical world, it would use that ability to kill any other actor capable of harming it (note the parallel to von Neumann's "a nuclear power must prevent any other nuclear powers from arising, no matter the cost" take I referenced earlier).

And if the world does in fact look like one where the vast majority of the effort humanity puts into maintaining its supply chains is unnecessary, and actually a smart enough agent can just directly go from rocks to computer chips with self-replicating nanotech, and ALSO the world looks like one where there is some simple discoverable insight or set of insights which allows for training an AI with 3 or more orders of magnitude less compute, I think that threat model makes sense. But "self-replicating useful nanotech is easy" and "there is a massive algorithmic overhang and the curves are shaped such that the first agent to pass some of the overhang will pass all of it" are load bearing assumptions in that threat model. If either of them does not hold, we do not end up in a world where a single entity can unilaterally seize control of the future while maintaining the ability to do all the things it wants to.

TL;DR version: I observe that "attempt to unilaterally seize control of the world" has not been a winning strategy in the past, despite there being a point in the past when very smart people said it was the only possible winning path. I think that, despite the very smart people who are now asserting that it's the only possible winning path, it is still not the only possible winning path. There are worlds where it is a winning path because all paths are winning paths for that entity -- for example, worlds where a single entity is capable enough that there is no benefit to it from cooperating with others. I don't think we live in one of those worlds. In worlds where there isn't a single entity that overpowers everyone else, the game theory arguments still make sense, but empirically, doing the "not game-theoretically optimal" thing has given humanity better outcomes than doing the "game-theoretically optimal" thing, and I expect that a superintelligence would be able to do something that gave it outcomes at least that good.

BTW this comes down to the age-old FOOM debate. Millions of words have been written on this topic already (note that every word in that phrase is a different link to thousands-to-millions of words of debate on the topic). People who go into reading those agreeing with Yudkowsky tend to come away thinking that Yudkowsky is obviously correct and his interlocutors are missing the point. People who go into reading those disagreeing with Yudkowsky tend to come away thinking that Yudkowsky is asserting an unfalsifiable theory and evading any questions that involve making concrete observations about what one would actually expect to see in the world. I expect that pattern would probably repeat here, so it's pretty unlikely that we'll come to a resolution that satisfies both of us. Though I'm game to keep going for as long as you want to.

The anthropic principle, for one:

https://en.wikipedia.org/wiki/Anthropic_principle

The probability of "an extinction event in our past" given "we are here to observe it" is 0%. We therefore can't infer anything about the chance of such events happening from our own prior history.

But, you might (wisely) point out that nuclear weapons are not actually extinction events. And, so far, humanity has seen very limited use of nukes. This gives us weak evidence that uniquely dangerous weapons can be contained. Here's why it's not a great argument.

  1. It's an N of 1.

  2. Nukes and AI are different. The technology to create nuclear weapons can be controlled by anti-proliferation efforts; AI could be much harder to contain (short of bombing GPU clusters). Nukes also have a bounded downside. It's a very large downside, but it's bounded, and the technology is well understood. One nuke isn't going to destroy the world, and neither will a full nuclear exchange. We have 1-megaton bombs, and there is no reason to believe that 1-teraton bombs will appear anytime in the near future. However, a runaway AI could destroy the world, and it's possible to imagine a near-term situation where the capabilities of AI increase by orders of magnitude quickly. What is N today could be a billion times N in 10 years (a quick back-of-the-envelope on what that rate implies follows below).
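To put a number on that last hypothetical (a purely illustrative back-of-the-envelope, not a forecast): a billion-fold increase over ten years works out to roughly thirty doublings, i.e. a doubling about every four months.

```python
import math

# Back-of-the-envelope: what doubling time does "a billion times N in 10 years" imply?
# Purely illustrative; nothing here is a forecast.
growth_factor = 1e9
years = 10

doublings = math.log2(growth_factor)            # ~29.9 doublings
months_per_doubling = years * 12 / doublings    # ~4.0 months per doubling

print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
```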

I think the anthropic principle is fine for explaining why we don't observe past catastrophes whose outcomes are bimodal: either "everything is fine" or "everyone is dead".

But nuclear and biological weapons don't look like that. If 5% of worlds have no nuclear war, 40% have one that kills half the population, and the other 55% have one that wipes out everyone, then 80% of observers should be in the "half of the population died in a nuclear war" worlds.
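Here's a minimal sketch of that observer-weighting arithmetic, using the made-up 5%/40%/55% split from the paragraph above:

```python
# Hypothetical distribution of worlds, from the example above.
# Each entry: (probability of that kind of world, fraction of the
# pre-war population left alive to do the observing).
worlds = {
    "no nuclear war":      (0.05, 1.0),
    "war killed half":     (0.40, 0.5),
    "war killed everyone": (0.55, 0.0),
}

# Weight each kind of world by how many observers it actually contains.
observer_mass = {kind: p * alive for kind, (p, alive) in worlds.items()}
total = sum(observer_mass.values())

for kind, mass in observer_mass.items():
    print(f"{kind}: {mass / total:.0%} of observers")
# no nuclear war:      20% of observers
# war killed half:     80% of observers
# war killed everyone:  0% of observers
```

Under those invented numbers the anthropic principle only screens off the worlds with no observers left; it does nothing to explain why we find ourselves in an untouched world rather than a half-destroyed one.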

Which means one of the following:

  1. Nuclear war will generally kill everyone in pretty short order (and thus by the anthropic principle most observers are in worlds where nuclear war has never started)

  2. We're quite lucky even taking the anthropic principle into account: most observers are in more disastrous worlds than us

  3. Nuclear war isn't actually very likely: most observers are in worlds where it never gets started

  4. Something else weird is going on (e.g. simulation hypothesis is correct).

Hypothesis 1 seems unlikely to me, since the models I've seen suggest that even a full counter-value exchange wouldn't kill more than half the people in the world. Hypothesis 3 implies the sort of world that does not contain the Cuban missile crisis, the Petrov incident, or the Norwegian rocket incident.

Which leaves us with the conclusion that either hypothesis 2 is correct and we're just lucky in a way that is not accounted for by the anthropic principle, or our world model has a giant gaping hole in it.

I think it's probably the "giant gaping hole" one. And so any doomer explanation that also would have predicted nuclear (or biological) doom has this hole. And it's that point I would like to see the doomers engage with.