
Dueling Lines: Will AGI Arrive Before Demographic-Induced Deglobalization?

As of late I've lost any deep interest in culture war issues. I still enjoy talking about them, but even the 'actually important' matters, like Trump's trials and possible re-election, the latest Supreme Court cases, or the roiling racial tensions of the current era, seem to be sideshows at best compared to two significant developments which stand to have greater impact than almost all other matters combined:

  1. Humanity is seemingly a hop, skip, and/or jump away from the emergence of true AGI.

  2. Humanity is also locked into a demographic decline that will eventually disrupt the stable global order and world economy. No solutions tried so far have worked or even shown promise. It may be too late for such solutions to prevent the decline.

I do reserve some interest for the chance that SpaceX will jump-start industry in low Earth orbit, and for longevity/anti-aging science, which seems poised for some large leaps. Yet the issues of declining human population, with its downstream effects on globalization, and the potential for human-level machine intelligence utterly overshadow almost any other issue we could discuss, short of World War III or the appearance of another pandemic.

And these topics are getting mainstream attention as well. There's finally space to discuss smarter-than-human AI and less-fertile-than-panda humans in less niche forums, and actual news stories are starting to raise the questions.

I recently read the Situational Awareness report by Leopold Aschenbrenner, which is a matter-of-fact update on where things absolutely seem to be heading if straight lines continue to be straight for the next few years. I find it convincing, if not compelling, and the argument that we might hit AGI around 2027 (with large error bars) no longer appears absurd. This is the first time I've read a decent attempt at extrapolating when we could actually expect to encounter the "oh shit" moment when a computer is clearly able to outperform humans not just in limited domains, but across the board.

As for the collapsed birthrates, Peter Zeihan has been the most 'level-headed' of the prognosticators here. Once again, I find the argument fairly convincing, even compelling: we will end up with far too few working-age, productive citizens trying to hold up civilization as the older generations age into retirement and switch to full-time consumption. Once again, you only have to believe that straight lines will keep going straight to believe this outcome is approaching in the near future. The full argument is more complex.

The one thing that tickles me, however, is how these two 'inevitable' results are intrinsically related! AI plus robotics offers a handy way to boost productivity even as your population ages. On the other hand, only a highly wealthy, productive, educated, and globalized civilization can produce the high technology that enables current AI advances. The Aschenbrenner report unironically expects that hundreds of millions of chips will be brought online and that global electricity production will increase by 10% before 2030ish. Anything that interrupts chip production puts a kink in these AGI timelines. If demographic changes have as much impact as Zeihan suggests, they could push those timelines back beyond the current century, unless there's another route to producing all the compute and power the training runs will require.

So I find myself staring at the lines representing the increasing size of LLMs, the increasing compute being deployed, the increasing funding thrown at AI companies and chip manufacturers, and the increasing "performance" of the resultant models; and then staring at the lines representing plummeting birthrates in developed countries, a shrinking working-age population, and the decrease in economic productivity that will likely result. Add on the difficulty of maintaining a peaceful, globalized economy under those constraints.

And it sure seems like the entire future of humanity hinges on which of these lines hits a particular inflection point first. And I sure as shit don't know which one it'll be.

I'd condense the premises of my position thusly:

Energy production and high-end chip production are necessary inputs to achieving AGI on any timeline whatsoever. Both are extremely susceptible to demographic collapse and deglobalization. If significant deglobalization of trade occurs, no single country will have the capacity to produce enough chips and energy to achieve AGI.

and

Human-level AGI that can perform any task a human can will resolve almost any issue posed by demographic decline, in terms of both economic productivity and maintaining a globalized, civilized world.

Or more succinctly: If deglobalization arrives first, we won't achieve AGI. If AGI arrives first, deglobalization will be obviated.

Peter Zeihan argues that AI won't prevent the chaos. As for the AGI prophets, I have rarely, in fact almost never, seen decreasing population levels included as a variable in their calculations of AI timelines.

The sense this gives me is that the AGI guys don't include demographic collapse as an extant risk to AGI timelines in their model of the world. Yes, they account for things like interruption to chip manufacturing as a potential problem, but they don't account for such an interruption coming about due to not enough babies. And those worrying about demographic collapse discount the odds of AGI arriving in time to prevent the coming chaos.

So I find myself constantly waffling between the expectation that we'll see a new industrial revolution as AI tech creates a productivity boom (before it kills us all or whatever), and the expectation that the entire global economy will slowly tear apart at the seams and we'll see a return to lower tech levels out of necessity. How can I fine-tune my prediction when the outcomes are so divergent in nature?

And more importantly, how can I arrange my financial bets so as to hedge against the major downsides of either outcome?


Also, yes, I'm discounting the arguments about superintelligence altogether, and assuming that we'll have some period of time where the AI is useful and friendly, before becoming far too intelligent to be controlled, which lets us enjoy the benefits of the tech. I don't actually believe this assumption, but it is necessary for me to have any discourse about AGI at all without foundering on the issue of possible human extinction.


This is not a sentence (there's no conjugated verb outside a relative clause); it's a description of a certain type of person. Did you perhaps omit something from it, such as ", are wrong" at the end?

Yes, my apologies. Not necessarily 'are wrong' on the object level, but wrong in their overconfidence.

No, precise weather prediction a year out is very much a compute problem. I think you need something like 20-30 more OOMs of compute compared to what we currently use, but the point is that there exists a finite threshold at which your weather forecasts suddenly become exceptionally accurate (though you do, of course, also need quite precise data to feed into it).

Sure, but from that perspective, psychohistory and economics are also very much compute problems, no? You just have to be able to accurately simulate every atom in every person involved. Chess is a compute problem too. But waving your hands and saying ASI will be able to fab a bunch of killbot drones (presumably already possible) but also be capable of precise weather prediction years out elides the fact that the latter requires an increase in our computational resources of 20-30 OOMs. Unless you're proposing such a hard takeoff that your ASI is virtually omnipotent overnight and capable of conjuring arbitrary amounts of compute at will, shouldn't there be a distinction between 'hard' and 'easy' problems? That is, barring shortcuts that decrease the computational difficulty of a given problem by many OOMs; but presumably those don't exist in every case, right? And even discovering those shortcuts could be 'hard' problems in and of themselves, dependent on other advances.

Yes, my apologies. Not necessarily 'are wrong' on the object level, but wrong in their overconfidence.

Thanks for the clarification. Yes, I certainly agree they're wrong in their overconfidence; you need all three of "metastable Butlerian Jihad can't happen even considering warning shots or WWIII interrupt", "NNAGI can't be aligned" and "unaligned AGI can definitely destroy all humans", and the first two are beyond anyone's ability to predict with the 99.9%+ confidence you need for "don't even bother trying long-term things".

(My P(AGI Doom) is under 50%, because while I consider the third practically proven and the second extremely likely, I think Butlerian Jihad is actually pretty likely - AI that probably can't be aligned is not a Prisoner's Dilemma but rather a Stag Hunt or even Concord, and neural nets seem like they could plausibly be non-RSI enough to allow for warning shots.)

Sure, but from that perspective, psychohistory and economics are also very much compute problems, no? You just have to be able to accurately simulate every atom in every person involved.

Technically, no, there are other limits in play there. Simulating every atom runs into quantum limits on determinism, and there's the halting-problem/Gödel-incompleteness issue where you're trying to predict a system that can potentially condition on your predictor's output (if you ask a predictor to predict whether I'll choose X or Y, but I can see the prediction before choosing, I can perform the strategy of "choose X if the predictor says Y; choose Y if the predictor says X" and thus thwart the predictor). Technically, both of those do apply to "predict the weather with perfect accuracy" (the latter because your computer itself and/or people acting on your forecast generate heat, CO2, etc.), but AIUI you could probably get to a year without the perturbations from those wrecking everything.

(Also, even if you handwave the quantum problem, there are a lot more OOMs involved in atom-perfect simulation, enough such that even at theoretical limits you're talking about a computer bigger than the system being simulated.)

But waving your hands and saying ASI will be able to fab a bunch of killbot drones (presumably already possible) but also be capable of precise weather prediction years out elides the fact that the latter requires an increase in our computational resources of 20-30 OOMs.

No, it requires an increase in the computational resources allocated to the problem of 20-30 OOMs; the difference is that AIUI not very much of current compute is used for weather prediction (because MOAR COMPUTE is pretty marginal in effectiveness until you get to the point of not having to parametrise coalescence, and within certain bounds the weather's not all that important), so the hardware for a few of those OOMs already actually exists.
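To put those OOM figures in perspective, here's a back-of-envelope sketch. The growth rates are purely illustrative assumptions (a Moore's-law-style two-year doubling, and a faster hypothetical scaling-era cadence), not figures from the thread:

```python
import math

def years_to_gain(ooms: float, doubling_time_years: float) -> float:
    """Years needed to multiply available compute by 10**ooms,
    assuming compute doubles every `doubling_time_years` years."""
    doublings = ooms / math.log10(2)  # since 10**ooms == 2**doublings
    return doublings * doubling_time_years

# Hypothetical growth cadences, for illustration only:
for label, dt in [("2-year doubling", 2.0), ("6-month doubling", 0.5)]:
    for ooms in (20, 30):
        print(f"{label}: +{ooms} OOMs takes ~{years_to_gain(ooms, dt):.0f} years")
```

Even under the aggressive six-month-doubling assumption, gaining 20-30 OOMs of total compute takes decades of sustained exponential growth, which is why "reallocate a few OOMs of existing hardware" and "grow total compute by 20-30 OOMs" are such different asks.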

That's all really interesting! Thanks a lot for the explanation, I appreciate it.