
Dueling Lines: Will AGI Arrive Before Demographic-Induced Deglobalization?

As of late I've lost any deep interest in culture war issues. I still enjoy talking about them, but even the 'actually important' matters like Trump's trials and possible re-election, the latest Supreme Court cases, or the roiling racial tensions of the current era seem like sideshows at best compared to the two significant developments which stand to have greater impact than almost all other matters combined:

  1. Humanity is seemingly a hop, skip, and/or jump away from the emergence of true AGI.

  2. Humanity is also locked into a demographic decline that will eventually disrupt the stable global order and world economy. No solutions tried so far have worked or even shown promise. It may be too late for such solutions to prevent the decline.

I do reserve some interest for the chance that SpaceX will jump-start industry in low Earth orbit, and for longevity/anti-aging science, which seems poised for some large leaps. Yet the issues of a declining human population and its downstream effects on globalization, as well as the potential for human-level machine intelligence, seem to utterly overshadow almost any other issue we could discuss, short of World War III or the appearance of another pandemic.

And these topics are getting mainstream attention as well. There's finally space to discuss smarter-than-human AI and less-fertile-than-panda humans in less niche forums, and actual news stories are starting to raise the questions.

I recently read the Situational Awareness report by Leopold Aschenbrenner, which is a matter-of-fact update on where things absolutely seem to be heading if straight lines continue to be straight for the next few years. I find it convincing, if not outright compelling: the argument that we might hit AGI around 2027 (with large error bars) no longer appears absurd. This is the first time I've read a decent attempt at extrapolating out when we could actually expect to encounter the "oh shit" moment when a computer is clearly able to outperform humans not just in limited domains, but across the board.

As for the collapsed birthrates, Peter Zeihan has been the most 'level-headed' of the prognosticators here. Once again I find the case fairly convincing, even compelling: we will end up with far too few working-age, productive citizens trying to hold up civilization as the older generations age into retirement and switch to full-time consumption. And once again, you only have to believe that straight lines will keep going straight to believe that this outcome is approaching in the near future. The full argument is more complex.

The one thing that tickles me, however, is how these two 'inevitable' results are intrinsically related! AI + robotics offers a handy method to boost productivity even as your population ages. On the negative side, only a highly wealthy, productive, educated, and globalized civilization can produce the high technology that enables current AI advances. The Aschenbrenner report up there unironically expects that hundreds of millions of chips will be brought online and that global electricity production will increase by 10% before 2030ish. Anything that might interrupt chip production puts a kink in these AGI timelines. If demographic changes have as much of an impact as Zeihan suggests, they could push those timelines back beyond the current century unless there's another route to producing all the compute and power the training runs will require.

So I find myself staring at the lines representing the increasing size of LLMs, the increasing amount of compute being deployed, the increasing funding being thrown at AI companies and chip manufacturers, and the increasing "performance" of the resultant models and then staring at the lines that represent plummeting birthrates in developed countries, and a decrease in the working age population, and thus the decrease in economic productivity that will likely result. Add on the difficulty of maintaining a peaceful, globalized economy under these constraints.

And it sure seems like the entire future of humanity hinges on which of these lines hits a particular inflection point first. And I sure as shit don't know which one it'll be.

I'd condense the premises of my position thusly:

Energy production and high-end computer chip production are necessary inputs to achieving AGI on any timeline whatsoever. Both are extremely susceptible to demographic collapse and deglobalization. If significant deglobalization of trade occurs, there is no way any country will have the capacity to produce enough chips and energy to achieve AGI.

and

Human-level AGI that can perform any task that humans can will resolve almost any issue posed by demographic decline in terms of economic productivity and maintaining a globalized, civilized world.

Or more succinctly: If deglobalization arrives first, we won't achieve AGI. If AGI arrives first, deglobalization will be obviated.

Peter Zeihan argues that AI won't prevent the chaos. As for the AGI prophets, I have rarely, in fact almost never, seen decreasing population levels included as a variable in their calculations of AI timelines.

The sense this gives me is that the AGI guys don't include demographic collapse as a live risk to AGI timelines in their model of the world. Yes, they account for things like an interruption to chip manufacturing as a potential problem, but not for that interruption coming about because there aren't enough babies. And those worrying about demographic collapse discount the odds of AGI arriving in time to prevent the coming chaos.

So I find myself constantly waffling between the expectation that we'll see a new industrial revolution as AI tech creates a productivity boom (before it kills us all or whatever), and the expectation that the entire global economy will slowly tear apart at the seams and we return to lower tech levels out of necessity. How can I fine-tune my prediction when the outcomes are so divergent in nature?

And more importantly, how can I arrange my financial bets so as to hedge against the major downsides of either outcome?


Also, yes, I'm discounting the arguments about superintelligence altogether, and assuming that we'll have some period of time where the AI is useful and friendly, before it becomes far too intelligent to be controlled, during which we get to enjoy the benefits of the tech. I do not believe this assumption, but it is necessary if I'm to have any discourse about AGI at all without it collapsing into the issue of possible human extinction.


The Aschenbrenner thesis relies upon AI doing the work of AI researchers and recursively self-improving. That's a realm of pure software development, very alien to human intuition where machines might get more traction.

Maybe there are ways to speed up molecular modelling, maybe there's some hack that needs 10 years of uninterrupted research from a 240 IQ ubermensch to find. How could we know, we're not super-geniuses!

> The Aschenbrenner thesis relies upon AI doing the work of AI researchers and recursively self-improving. That's a realm of pure software development, very alien to human intuition where machines might get more traction.

I'd agree.

> Maybe there are ways to speed up molecular modelling, maybe there's some hack that needs 10 years of uninterrupted research from a 240 IQ ubermensch to find. How could we know, we're not super-geniuses!

Maybe. Maybe the massive concentration of intelligence in such a small area will lead the ASI to transcend this reality, if you want to retreat to absolute agnosticism about what ASI will be capable of. Is your position that ASI will functionally be omnipotent (or close enough that it's irrelevant to us), or that the boundaries of what it is capable of even in the near-term are unknowable but we should expect it to be capable of achieving more or less any goal?

To be clear, I don't even think it's necessarily a bad position to take when the error bars and uncertainties involved are so large. But I maintain that:

  1. People who are absolutely confident, to the point of telling others not to bother having children, that humanity is doomed due to paperclipping by omnipotent ASI.
  2. We should at least recognize that some things are hard to impossible regardless of intelligence/compute (psychohistory, precise weather prediction a year out, etc), and that there exists a gradient of difficulty for ASI as well as us. I expect that if things pan out as Aschenbrenner and Eliezer predict we'll undoubtedly be surprised in some of these areas if they discover shortcuts and workarounds that we haven't, but others have to be actual hard problems unless you're arguing in favor of omnipotence.
  3. I'm skeptical that all of these 'unhobblings', or the leap from LLMs to godlike ASI, will be as trivial as many seem to expect. It certainly seems possible that the chasm could be as large as the one between the earliest attempts at image classifiers in the 1960s and the progress of the 2000s.

All that said, I freely admit this is far from my realm of expertise and I'll happily admit my errors if the next generation of Gemini produces an immortality pill.

> People who are absolutely confident, to the point of telling others not to bother having children, that humanity is doomed due to paperclipping by omnipotent ASI.

This is not a sentence (there's no conjugated verb outside a relative clause); it's a description of a certain type of person. Did you perhaps omit something from it, such as ", are wrong" at the end?

> We should at least recognize that some things are hard to impossible regardless of intelligence/compute (psychohistory, precise weather prediction a year out, etc)

No, precise weather prediction a year out is very much a compute problem. The problem is that sub-grid-scale phenomena can't be simulated fully according to physics and must be parametrised, introducing errors; and the nucleation and coalescence of cloud droplets - which is key to tropospheric dynamics - happens at a micron or hundreds-of-nanometres scale, so having a grid that small is computationally infeasible. I think you need something like 20-30 more OOMs of compute compared to what we currently use, but the point is that there exists a finite threshold at which your weather forecasts suddenly become exceptionally accurate (though you do, of course, also need quite precise data to put into it).
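
For a sense of where a figure like that could come from, here's a rough back-of-envelope sketch; the grid spacings and the naive explicit-scheme cost scaling are my own illustrative assumptions, not numbers from the comment above:

```python
# Rough back-of-envelope: extra compute needed to resolve droplet-scale physics
# on a global weather grid. All numbers are illustrative assumptions.
import math

current_dx = 1e3    # assume ~1 km grid spacing in today's global models (metres)
target_dx = 1e-4    # assume ~100 micron grid to resolve droplet coalescence (metres)

refinement = current_dx / target_dx   # linear refinement factor: 1e7
cells = refinement ** 3               # three spatial dimensions
timesteps = refinement                # CFL condition: finer grid => shorter timesteps
extra_cost = cells * timesteps        # naive explicit-scheme cost scaling

print(f"roughly {math.log10(extra_cost):.0f} extra orders of magnitude of compute")
# With these assumptions it lands around 28 OOMs; different grid choices or
# smarter numerics shift it a few OOMs either way, in the same 20-30 ballpark.
```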

> This is not a sentence (there's no conjugated verb outside a relative clause); it's a description of a certain type of person. Did you perhaps omit something from it, such as ", are wrong" at the end?

Yes, my apologies. Not necessarily 'are wrong' on the object level, but wrong in their overconfidence.

> No, precise weather prediction a year out is very much a compute problem...I think you need something like 20-30 more OOMs of compute compared to what we currently use, but the point is that there exists a finite threshold at which your weather forecasts suddenly become exceptionally accurate (though you do, of course, also need quite precise data to put into it).

Sure, but from that perspective, psychohistory and economics are also very much compute problems, no? You just have to be able to accurately simulate every atom in every person involved. Chess is a compute problem too. But waving your hands and saying ASI will be able to fab a bunch of killbot drones (presumably already possible) but also be capable of precise weather prediction years out elides the fact that the latter requires an increase in our computational resources of 20-30 OOMs. Unless you're proposing such a hard takeoff that your ASI is virtually omnipotent overnight and capable of conjuring arbitrary amounts of compute at will, shouldn't there be a distinction between 'hard' and 'easy' problems? Barring shortcuts that decrease the computational difficulty of a given problem by many OOMs, of course, but presumably those don't exist in every case, right? And even discovering such shortcuts could be a 'hard' problem in and of itself, dependent on other advances.

> Yes, my apologies. Not necessarily 'are wrong' on the object level, but wrong in their overconfidence.

Thanks for the clarification. Yes, I certainly agree they're wrong in their overconfidence; you need all three of "metastable Butlerian Jihad can't happen even considering warning shots or WWIII interrupt", "NNAGI can't be aligned" and "unaligned AGI can definitely destroy all humans", and the first two are beyond anyone's ability to predict with the 99.9%+ confidence you need for "don't even bother trying long-term things".

(My P(AGI Doom) is under 50%, because while I consider the third practically proven and the second extremely likely, I think Butlerian Jihad is actually pretty likely - AI that probably can't be aligned is not a Prisoner's Dilemma but rather a Stag Hunt or even Concord, and neural nets seem like they could plausibly be non-RSI enough to allow for warning shots.)
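
To make the arithmetic behind that "under 50%" concrete, with purely illustrative numbers of my own rather than the commenter's:

P(doom) ≈ P(no Butlerian Jihad) × P(unalignable) × P(unaligned AGI wins) ≈ 0.5 × 0.9 × 1.0 = 0.45

so even near-certainty on the last two conjuncts leaves the total under 50% if coordination looks as likely as a coin flip.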

> Sure, but from that perspective, psychohistory and economics are also very much compute problems, no? You just have to be able to accurately simulate every atom in every person involved.

Technically, no, there are other limits in play there. Simulating every atom runs into quantum limits on determinism, and there's the halting-problem/Godel-incompleteness issue where you're trying to predict a system that can potentially condition on your predictor's output (if you ask a predictor to predict whether I'll choose X or Y, but I can see the prediction before choosing, I can perform the strategy of "choose X if predictor says Y; choose Y if predictor says X" and thus thwart the predictor). Technically, both of those do apply to "predict the weather with perfect accuracy" (the latter because your computer itself and/or people acting on your forecast generate heat, CO2, etc.), but AIUI you could probably get to a year without the perturbations from those wrecking everything.

(Also, even if you handwave the quantum problem, there are a lot more OOMs involved in atom-perfect simulation, enough such that even at theoretical limits you're talking about a computer bigger than the system being simulated.)
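
The "choose X if the predictor says Y" point is essentially a diagonalization argument; here's a toy sketch (entirely my own illustration, not anyone's actual model) of why no predictor can win once its output is visible to the agent:

```python
# Toy illustration of the self-reference problem: an agent that can see the
# prediction before acting can always falsify it, whatever the predictor does.

def contrarian_agent(prediction: str) -> str:
    """Chooses whichever option the predictor did NOT predict."""
    return "Y" if prediction == "X" else "X"

def predictor() -> str:
    # However sophisticated the rule here, it must ultimately commit to "X" or "Y".
    return "X"

guess = contrarian_agent(predictor())
print(f"predicted {predictor()}, agent chose {guess}")  # the prediction is always wrong
```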

> But waving your hands and saying ASI will be able to fab a bunch of killbot drones (presumably already possible) but also be capable of precise weather prediction years out elides the fact that the latter requires an increase in our computational resources of 20-30 OOMs.

No, it requires a 20-30 OOM increase in the computational resources allocated to the problem; the difference is that, AIUI, not very much of current compute is used for weather prediction (because MOAR COMPUTE is pretty marginal in effectiveness until you get to the point of not having to parametrise coalescence, and within certain bounds the weather's not all that important), so the hardware for a few of those OOMs already actually exists.

That's all really interesting! Thanks a lot for the explanation, I appreciate it.

I'm not advocating omniscience or omnipotence but I reckon the power-scaling goes very high. In sci-fi, Time Lords >>> Xeelee >>> Star Wars > Mass Effect.

All of them could stomp us without much effort. AI doesn't need to be omnipotent to subjugate or destroy humanity, it only needs to be significantly stronger. Even if it can't predict a chaotic system like weather, it probably could control the large-scale outcomes by cloudseeding or manipulating reflection/absorption of heat. Even if it can't predict the economy, it could hack stock exchanges or create outcomes directly by founding new companies/distributing powerful technology.

I too agree that it's no good giving up before all the cards are revealed. But I don't think that pushing back against omnipotent AI is beneficial. It doesn't matter if the AI doesn't know how to achieve FTL travel, or if it can't reach the fundamental limits of energy/compute with some black-hole plasma abomination. That's not needed to beat us.

From our point of view, superintelligence will be unbeatable; it may as well be omnipotent. I am highly confident that we won't be able to detect its presence or malignity until it's far too late. Once we do, we won't be able to act quickly enough to deal with it or even communicate securely. If we even get to fighting, I expect it to run rings around our forces tactically and strategically. Maybe that's nanomachines or mosquito-robot swarms or more conventional killbots and nukes. Maybe it's over in hours, maybe it takes a slow and steady path, acting via proxies and deceit to cautiously advance its goals.