Dueling Lines: Will AGI Arrive Before Demographic-Induced Deglobalization?

As of late I've really lost any deep interest in culture war issues. I still enjoy talking about them, but even the 'actually important' matters, like Trump's trials and possible re-election, the latest Supreme Court cases, or the roiling racial tensions of the current era, seem like sideshows at best next to two significant developments that stand to have greater impact than almost all other matters combined:

  1. Humanity is seemingly a hop, skip, and/or jump away from the emergence of true AGI.

  2. Humanity is also locked into a demographic decline that will eventually disrupt the stable global order and world economy. No solutions tried so far have worked or even shown promise. It may be too late for such solutions to prevent the decline.

I do reserve some amount of interest for the chance that SpaceX will jump-start industry in low Earth orbit, and for longevity/anti-aging science, which seems poised for some large leaps. Yet the issues of a declining human population and its downstream effects on globalization, as well as the potential for human-level machine intelligence, utterly overshadow almost any other issue we could discuss, short of World War III or the appearance of another pandemic.

And these topics are getting mainstream attention as well. There's finally space to discuss smarter-than-human AI and less-fertile-than-panda humans in less niche forums, and actual news stories are starting to raise these questions.

I recently read the Situational Awareness report by Leopold Aschenbrenner, which is a matter-of-fact update on where things absolutely seem to be heading if straight lines continue to be straight for the next few years. I find it convincing, if not quite compelling, and the argument that we might hit AGI around 2027 (with large error bars) no longer appears absurd. This is the first time I've read a decent attempt at extrapolating when we can actually expect the "oh shit" moment: a computer clearly able to outperform humans not just in limited domains, but across the board.

As for the collapsed birthrates, Peter Zeihan has been the most 'level-headed' of the prognosticators here. Once again I find the case fairly convincing: we will end up with far too few working-age, productive citizens trying to hold up civilization as the older generations age into retirement and switch to full-time consumption. And once again, you only have to believe that straight lines will keep going straight to believe this outcome is approaching in the near future. The full argument is more complex.

The one thing that tickles me, however, is how these two 'inevitable' results are intrinsically related! AI + robotics offers a handy method to boost productivity even as your population ages. On the flip side, only a highly wealthy, productive, educated, and globalized civilization can produce the high technology that enables current AI advances. The Aschenbrenner report up there unironically expects that hundreds of millions of chips will be brought online and that global electricity production will increase by 10% before 2030ish. Anything that interrupts chip production puts a kink in these AGI timelines. If demographic changes have as much of an impact as Zeihan suggests, they could push those timelines back beyond the current century, unless there's another route to producing all the compute and power the training runs will require.
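Just to gut-check the scale of that claim, here's a back-of-envelope sketch. The chip count, per-chip wattage, and world generation figure below are my own rough assumptions, not numbers from the report:

```python
# Back-of-envelope scale check on the "hundreds of millions of chips" claim.
chips = 100e6            # assume ~100 million accelerators online
watts_per_chip = 1_000   # assume ~1 kW per chip, including cooling overhead

ai_draw_gw = chips * watts_per_chip / 1e9    # continuous draw, in GW
world_twh_per_year = 30_000                  # assume ~30,000 TWh generated globally per year
world_avg_gw = world_twh_per_year * 1e12 / (365 * 24) / 1e9

print(f"AI fleet draw: {ai_draw_gw:.0f} GW")
print(f"Share of world electricity: {ai_draw_gw / world_avg_gw:.1%}")
```

Under those assumptions the fleet draws about 100 GW, roughly 3% of current world generation; scale the assumptions up a few-fold and you land near the report's ~10% figure. Physically plausible, but only for a civilization still capable of building out generation at that pace.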

So I find myself staring at one set of lines, the increasing size of LLMs, the increasing amount of compute being deployed, the increasing funding thrown at AI companies and chip manufacturers, and the increasing "performance" of the resultant models, and then staring at the other set: plummeting birthrates in developed countries, a shrinking working-age population, and the decrease in economic productivity that will likely result. Add on the difficulty of maintaining a peaceful, globalized economy under those constraints.

And it sure seems like the entire future of humanity hinges on which of these lines hits a particular inflection point first. And I sure as shit don't know which one it'll be.
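If I had to make the waffling concrete, it would look something like this toy extrapolation. Every parameter is invented for illustration, and nudging any of them flips the answer, which is rather the point:

```python
# Toy "dueling lines" model: extrapolate effective AI training compute
# against the working-age population and see which crosses its threshold first.
import math

compute_growth = 4.0    # assume effective training compute grows ~4x per year
agi_multiplier = 1e5    # assume AGI needs ~100,000x today's effective compute

worker_decline = 0.01   # assume the developed-world workforce shrinks ~1% per year
crisis_fraction = 0.85  # assume serious strain once 15% of workers are gone

years_to_agi = math.log(agi_multiplier) / math.log(compute_growth)
years_to_crisis = math.log(crisis_fraction) / math.log(1 - worker_decline)

print(f"AGI line crosses its threshold around {2024 + years_to_agi:.0f}")
print(f"Demographic line crosses its threshold around {2024 + years_to_crisis:.0f}")
```

With these made-up inputs the AGI line wins by roughly a decade; halve the compute growth rate and the demographic line gets there first.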

I'd condense the premises of my position thusly:

Energy production and high-end chip production are necessary inputs to achieving AGI on any timeline whatsoever. Both are extremely susceptible to demographic collapse and deglobalization. If significant deglobalization of trade occurs, there is no way any single country will have the capacity to produce enough chips and energy to achieve AGI.

and

Human-level AGI that can perform any task a human can will resolve almost any issue posed by demographic decline, in terms of economic productivity and maintaining a globalized, civilized world.

Or more succinctly: If deglobalization arrives first, we won't achieve AGI. If AGI arrives first, deglobalization will be obviated.

Peter Zeihan argues that AI won't prevent the chaos. As for the AGI prophets, I have rarely, in fact almost never, seen decreasing population levels as a variable in their calculations of AI timelines.

The sense this gives me is that the AGI guys don't include demographic collapse as a risk to AGI timelines in their model of the world. Yes, they account for things like an interruption of chip manufacturing as a potential problem, but not for such an interruption coming about because there weren't enough babies. And those worrying about demographic collapse discount the odds of AGI arriving in time to prevent the coming chaos.

So I find myself constantly waffling between the expectation that we'll see a new industrial revolution as AI tech creates a productivity boom (before it kills us all or whatever), and the expectation that the entire global economy will slowly tear apart at the seams and we'll see a return to lower tech levels out of necessity. How can I fine-tune my prediction when the outcomes are so divergent in nature?

And more importantly, how can I arrange my financial bets so as to hedge against the major downsides of either outcome?


Also, yes, I'm discounting the arguments about superintelligence altogether, and assuming that we'll have some period of time where the AI is useful and friendly, letting us enjoy the benefits of the tech before it becomes far too intelligent to be controlled. I do not believe this assumption, but it is necessary for me to have any discourse about AGI at all without getting stuck on the issue of possible human extinction.


That being said...people don't genuinely expect ASI to be omnipotent, right?

From reading (almost) the entire Sequences on LessWrong back in the day, it's less 'omnipotent' and more 'as far above humans as humans are above ants.' There are hard limits on 'intelligence' if we simply look at things like the Landauer limit, but the conceit seems to be that once we have an AGI capable of recursive self-improvement, it'll go FOOM and start iterating asymptotically close to those limits, and it will start reaching out into the local arm of the galaxy to meet its energy needs.
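For a sense of where that hard limit actually sits, Landauer's principle puts the minimum energy cost of erasing one bit at $k_B T \ln 2$. At an assumed room temperature of ~300 K:

$$
E_{\min} = k_B T \ln 2 \approx (1.38\times10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693) \approx 3\times10^{-21}\,\mathrm{J\ per\ bit\ erased}
$$

Rough figures for present-day hardware put dissipation per logic operation several orders of magnitude above that bound, so 'hard physical limits' still leave an enormous amount of headroom for the FOOM scenario to exploit.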

It's not too hard to imagine: if the stories about John von Neumann are accurate, then maximum human intelligence is already quite powerful on its own, and there's no reason to think that human brains are the most efficient design possible. If we can 'merely' simulate 500 von Neumanns and put them to the task of improving our AI systems, we'd expect them to make 'rapid' progress, no?

Put a different way, I expect that the hard/soft science divide will continue to exist the same way that I can still beat AlphaZero at chess if you put me up a queen in the endgame.

It's a good analogy, but imagine if AlphaZero, whose sole goal was 'win at chess,' were given the ability to act beyond the chessboard. Maybe it offers you untold riches if you just resign or sacrifice the queen. Maybe it threatens you or your family with retribution. Maybe it acquires a gun and shoots you.

I do worry that humans are too focused on the 'chessboard' when a true superintelligence would be focused on a much, much larger space of possible moves.

One thing that worries me is that a superintelligence might be much better at foreseeing third- or fourth-order effects of given actions. That would allow it to make plans that eventually produce the outcomes it desires without alerting humans, because its intended goal only comes about through the interaction of those various effects.

So, even if I'm putting my foot in my mouth and the definitive breakthrough in aging research will be published tomorrow, anyone telling you that a drug is less than 10 years out is almost certainly wrong.

Certainly. I'm more focused on the 'escape velocity' argument: an advance that gets us another ten years of healthy life on average makes it that much more likely we'll be alive for the next advance that gives us 20 cumulative additional years of life, which in turn makes it more likely we'll be around when the REALLY good stuff is discovered. I haven't seen any 'straight lines' of progress on extending lifespan that suggest this is inevitable, though, whereas I CAN see them with AI and demographics, as stated.
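The arithmetic behind 'escape velocity' is simple enough to sketch. The starting life expectancy and gain per decade below are invented numbers, purely to show the mechanism:

```python
# Sketch of the "longevity escape velocity" arithmetic.
remaining = 30.0         # assume ~30 expected years of life left today
gain_per_decade = 12.0   # assume each decade of research adds 12 years

for decade in range(1, 6):
    remaining += gain_per_decade - 10   # spend a decade, bank the gains
    print(f"after decade {decade}: {remaining:.0f} expected years left")
```

Because the assumed gain (12 years) exceeds the decade spent waiting for it (10 years), remaining life expectancy drifts upward indefinitely; set the gain below 10 and it runs out as usual. Everything hinges on which side of that line the real research trend sits, and as noted, there's no straight line to extrapolate yet.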

An interesting tactic I could see working is trying to extend dogs' lives, because NOBODY will object to that project, and if it works it should, in theory, attract a lot of funding and produce insights that are in fact useful for human lifespan. So perhaps we see immortal dogs before immortal mice?

I am not surprised there'd be a grifter problem, because it is really easy to 'wow' people with scientific-sounding gobbledygook, sell them on promises of life extension via [miracle substance], and get rich, all while knowing the marks won't realize they've been had until literal decades later, when they are still aging as usual. I also somewhat hate that cosmetic surgery and other tech (like hair dye) are effective enough that someone can absolutely claim to be aging slower than 'natural' when in reality they've just covered up the visible effects of aging.

Finally, on this point:

Things that seemed inevitable can reverse themselves fairly easily, and I'd agree with /u/2rafa that we haven't seriously tried to reverse the trend.

This is a bit different, because while we can't necessarily know the upper limit on the earth's carrying capacity for humans... we sure as hell know that it's possible for the human population to go to zero. It's safe to say that population growth will reverse, because eventually we hit a limit. But I don't see any inbuilt reason why population decline needs to reverse anytime soon.

And Zeihan's strong argument is that even if we start pumping out kids today, it'll be 20 or so years before this new baby boom can even begin to be productive. So we're still in for a period of strain during which we lose productive members of society to retirement and death while spending tons of money on raising the next generation, meaning the actually productive generations have to support both their parents and their own kids, and may not be able to invest in other productive uses of capital. That would imply a period of stagnation at least.

That is, we can't instantly replace a declining population of working-age adults merely by having more kids now, since kids take time to grow and become productive. So a lot of the suck is already 'baked in' at this point, and a reversal in the trend doesn't prevent the actual problem from arising.
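A toy cohort model makes the 'baked in' point concrete. Every number here is invented for illustration, not a real demographic projection:

```python
# Toy cohort model of the demographic lag: a sustained baby boom starting
# today adds dependents for ~20 years before it adds a single worker.
def support_ratio(years_after_boom: int) -> float:
    workers = 100 - 1.0 * years_after_boom   # pre-boom workforce shrinks as cohorts retire
    retirees = 60 + 0.7 * years_after_boom   # retiree pool grows, net of deaths
    boom_kids = min(years_after_boom, 20) * 2.0          # boom cohorts still under ~20
    boom_workers = max(0, years_after_boom - 20) * 2.0   # boom cohorts now working age
    return (workers + boom_workers) / (retirees + boom_kids)

for y in (0, 10, 20, 30):
    print(f"year {y:2d}: {support_ratio(y):.2f} workers per dependent")
```

Under these made-up numbers the ratio falls from about 1.67 workers per dependent to about 0.70 over two decades, and only then begins to recover; the strain arrives whether or not the trend reverses.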

An interesting tactic I could see working is trying to extend dogs' lives, because NOBODY will object to that project, and if it works it should, in theory, attract a lot of funding and produce insights that are in fact useful for human lifespan. So perhaps we see immortal dogs before immortal mice?

Purebred dogs are similar to inbred mouse lines, and it's difficult to extrapolate what would happen when translating that data to humans. The compounds in question are just IGF-1 inhibitors, and these kinds of metabolic inhibitors have already been shown not to translate well to outbred non-human primate (NHP) models. The approach is analogous to telomerase-knockout mice living longer if you replace the telomerase (i.e. large dogs have more IGF-1; decrease IGF-1 activity and you may simply be back in line with smaller dogs).

They'll probably make bank selling life extension to dog owners though.

We're very bad at both understanding and manipulating complex traits. Beyond this, aging just seems categorically different in a way we haven't grasped yet. I doubt I'll be able to convince you on this point though.

I haven't seen any 'straight lines' of progress on extending lifespan that suggest this is inevitable, though

That's because they don't exist, and it certainly isn't inevitable. We've made minimal progress understanding aging (for all the bullshit that gets published), and no progress in treating it.

If we can 'merely' simulate 500 von Neumanns and put them to the task of improving our AI systems, we'd expect them to make 'rapid' progress, no?

I expect them to make rapid progress in software, computer hardware, engineering, and other domains where humans have rationally designed systems that are entirely understood. I expect them to struggle with the social sciences, psychology, economics, and, to a large extent, biology. If Magnus Carlsen is to AlphaZero as Noam Chomsky is to SociologyGPT6... what does that even look like? Not Hari Seldon, but what, exactly?

I'm not sure what to expect for physics, chemistry and math.

From reading (almost) the entire Sequences on LessWrong back in the day, it's less 'omnipotent' and more 'as far above humans as humans are above ants.' There are hard limits on 'intelligence'

I read chunks of them as well and think people pulled the wrong lessons from them, but regardless, my argument is that there are relatively hard limits on what intelligence can simulate and manipulate in the physical world (i.e. ASI won't be simply omnipotent), rather than limits on how much compute you can throw at a problem. What if your ASI says: sorry, simulating a human body or tissue is too computationally intensive even for me; I can't do this deterministically. Here is an experimental plan that I can guarantee will help make you immortal: it just requires breeding a billion mice, testing these trillion compounds, and then moving some candidates into a relatively modest 100,000 NHPs. What if, in other words, your Multivac tells you the meaning of the universe is 42, or that there is as yet insufficient data to answer your question, as it chugs along?

I'm also curious whether there will be diminishing returns to adding more training data as it runs out of human knowledge to learn from, and whether turning it into an agent that can reason from first principles, rather than regurgitating scientific reviews at me, will be as trivial as everyone assumes (image classification seems like it may be an interesting reference class, but I could just be an ignorant moron several degrees removed from the ground on ML).