
Dueling Lines: Will AGI Arrive Before Demographic-Induced Deglobalization?

As of late I've really lost any deep interest in any culture war issues. I still enjoy talking about them, but even the 'actually important' matters like Trump's trials and possible re-election or the latest Supreme Court cases or the roiling racial tensions of the current era seem to be sideshows at best compared to the two significant developments which stand to have greater impact than almost all other matters combined:

  1. Humanity is seemingly a hop, skip, and/or jump away from the emergence of true AGI.

  2. Humanity is also locked into a demographic decline that will eventually disrupt the stable global order and world economy. No solutions tried so far have worked or even shown promise. It may be too late for such solutions to prevent the decline.

I do reserve some amount of interest for the chance that SpaceX is going to jump-start industry in low earth orbit, and for longevity/anti-aging science, which seems poised for some large leaps. Yet the issues of declining human population and its downstream effects on globalization, as well as the potential for human-level machine intelligence, seem to utterly overshadow almost any other issue we could discuss, short of World War III or the appearance of another pandemic.

And these topics are getting mainstream attention as well. There's finally space to discuss the topics of smarter-than-human AI and less-fertile-than-panda humans in less niche forums and actual news stories that start raising questions.

I recently read the Situational Awareness report by Leopold Aschenbrenner, which is a matter-of-fact update on where things absolutely seem to be heading if straight lines continue to be straight for the next few years. I find it convincing if not compelling, but the argument that we might hit AGI around 2027 (with large error bars) no longer appears absurd. This is the first time I've read a decent attempt at extrapolating out when we could actually expect to encounter the "oh shit" moment when a computer is clearly able to outperform humans not just in limited domains, but across the board.

As for the collapsed birthrates, Peter Zeihan has been the most 'level-headed' of the prognosticators here. Once again, I find it fairly convincing, and even compelling: we'll end up with far too few working-age, productive citizens trying to hold up civilization as the older generations age into retirement and switch to full-time consumption. Once again, you only have to believe that straight lines will keep going straight to believe that this outcome is approaching in the near future. The full argument is more complex.

The one thing that tickles me, however, is how these two 'inevitable' results are intrinsically related! AI + robotics offers a handy method to boost productivity even as your population ages. On the negative side, only a highly wealthy, productive, educated, and globalized civilization can produce the high technology that enables current AI advances. The Aschenbrenner report up there unironically expects that hundreds of millions of chips will be brought online and that global electricity production will increase by 10% before 2030ish. Anything that might interrupt chip production puts a kink in these AGI timelines. If demographic changes have as much of an impact as Zeihan suggests, they could push those timelines back beyond the current century unless there's another route to producing all the compute and power the training runs will require.

So I find myself staring at the lines representing the increasing size of LLMs, the increasing amount of compute being deployed, the increasing funding being thrown at AI companies and chip manufacturers, and the increasing "performance" of the resultant models and then staring at the lines that represent plummeting birthrates in developed countries, and a decrease in the working age population, and thus the decrease in economic productivity that will likely result. Add on the difficulty of maintaining a peaceful, globalized economy under these constraints.

And it sure seems like the entire future of humanity hinges on which of these lines hits a particular inflection point first. And I sure as shit don't know which one it'll be.

I'd condense the premises of my position thusly:

Energy production and high-end computer chip production are necessary inputs to achieving AGI on any timeline whatsoever. Both are extremely susceptible to demographic collapse and deglobalization. If significant deglobalization of trade occurs, there is no way any country will have the capacity to produce enough chips and energy to achieve AGI.

and

Human-level AGI that can perform any task that humans can will resolve almost any issues posed by demographic decline in terms of economic productivity and maintaining a globalized, civilized world.

Or more succinctly: If deglobalization arrives first, we won't achieve AGI. If AGI arrives first, deglobalization will be obviated.

Peter Zeihan argues that AI won't prevent the chaos. As for the AGI prophets, I have rarely, in fact almost never, seen decreasing population levels as a variable in their calculations of AI timelines.

The sense this gives me is that the AGI guys don't include demographic collapse as a risk to AGI timelines in their model of the world. Yes, they account for things like interruption of chip manufacturing as a potential problem, but not for such an interruption coming about due to there not being enough babies. And those worrying about demographic collapse discount the odds of AGI arriving in time to prevent the coming chaos.

So I find myself constantly waffling between the expectation that we'll see a new industrial revolution as AI tech creates a productivity boom (before it kills us all or whatever), and the expectation that the entire global economy will slowly tear apart at the seams and we'll see a return to lower tech levels out of necessity. How can I fine-tune my prediction when the outcomes are so divergent in nature?

And more importantly, how can I arrange my financial bets so as to hedge against the major downsides of either outcome?


Also, yes, I'm discounting the arguments about Superintelligence altogether, and assuming that we'll have some period of time where the AI is useful and friendly before becoming far too intelligent to be controlled, which lets us enjoy the benefits of the tech. I do not believe this assumption, but it is necessary for me to have any discourse about AGI at all without falling back on the issue of possible human extinction.


Humanity is also locked into a demographic decline that will eventually disrupt the stable global order and world economy. No solutions tried so far have worked or even shown promise. It may be too late for such solutions to prevent the decline.

150 years ago we were thought to be locked into a Malthusian explosion whereby widespread famine and war were inevitable as we bred like moties and exhausted our resources. Things that seemed inevitable can reverse themselves fairly easily, and I'd agree with /u/2rafa that we haven't seriously tried to reverse the trend. People respond to incentives, and if the current regime incentivizes DINKs, there's no reason we can't create a new one that punishes them. If nothing else, childfree people are greatly outnumbered by people with children.

for longevity/anti-aging science which seems poised for some large leaps

Don't hold your breath my man. The longevity/anti-aging field (if I can be permitted to throw a bit of shade for a moment) suffers from a profound lack of talent and attention from the wider scientific community. First, consider that any new discovery takes 10-15 years to be translated into the clinic (see: CRISPR 'discovered' for realsies in 2012, first clinical trials in people started in the early 2020s). So, even if I'm putting my foot in my mouth and the definitive breakthrough in aging research will be published tomorrow, anyone telling you that a drug is less than 10 years out is almost certainly wrong. If you show me the first immortal mouse, I'll get excited and think that maybe we could translate a human drug in 5-10 years, although even then we often fail! (see: Alzheimer's, MS, most oncology drugs)

Second, consider that whatever neo-Rasputin tells you, we genuinely have no clue how aging works, let alone how to manipulate it productively. All the conjecture about seven forms of cell damage will remain conjecture until someone actually manipulates any of those things, and makes an otherwise healthy, wild-type mouse live significantly longer. Rapamycin/caloric restriction probably doesn't clear that bar (see section on CR) and doesn't work in higher mammals for that or some other reason, and putting telomerase back into a mouse with progeria certainly does not.

Third, consider that the academics in the space suffer from a profound lack of ambition/vision, while those who have either quality are, unfortunately, grifters. See: Calico, which launched with $3.5 billion (massive for a biotech):

We are not a traditional biotechnology company, nor are we an academic institution. We have combined the best parts of both without the constraints of either.

Their ALS drug is interesting, but in the last 11 years most of what they've produced is more academic naked mole rat sequencing papers, plus what looks like a pivot into oncology and 'age-related diseases' rather than aging. I'd elaborate on the grifter side, but that would probably ruffle too many feathers to be worth it.

I recently read the Situational Awareness report by Leopold Aschenbrenner, which is a matter-of-fact update on where things absolutely seem to be heading if straight lines continue to be straight for the next few years.

I recently read it too, and listened to the >4 hour podcast. It's certainly interesting, and I won't pretend to be in a position to judge any of the content regarding AI/ML which is far outside my wheelhouse.

That being said...people don't genuinely expect ASI to be omnipotent, right? Like, I assume Hari Seldon psychohistory-level AI just isn't possible, or is far enough away to be irrelevant. I also expect that the abilities of ASI to manipulate nature will mirror our own. That is to say, I expect them to be god-tier engineers and coders, but while I expect them to be capable of running circles around people in the stock markets, generating a flawless model of the economy that can predict any event seems virtually impossible. Put a different way, I expect that the hard/soft science divide will continue to exist the same way that I can still beat AlphaZero at chess if you put me up a queen in the endgame.

All that to say, when I try to use AI today for biology research it's strictly limited by what we already know. If I ask it for novel theories about aging, it spits out word salad that I can read in any old review on Pubmed rather than modeling the world from the ground up and generating a new hypothesis. Perhaps this is one of the 'unhobblings' Mr. Aschenbrenner references, or perhaps the models I have access to have been RLHF'd away from hallucinating anything interesting, but it's not clear to me how throwing more Pubmed articles into the training data set is going to address this problem. And even then, if some emergent quality enables it to piece together a broad worldview, it's not clear to me how the eschatological diamondoid-bacteria scenario is possible (setting aside how dumb an idea diamondoid bacteria are compared to much easier options for eradicating humans). Molecular dynamics simulations just seem too computationally intensive to do in silico experiments deterministically (although I'm not super knowledgeable about this field and would be curious if anyone else here has any input) at the cellular level, which requires some black boxing, which requires empirical experiments in the lab...

To be clear, I'm bullish on AI and even bullish on AI in biology. Nothing would make me happier than some godlike AI oracle that could satisfy my curiosity, I'm just skeptical that even the ASIs pitched by proponents like Aschenbrenner will be as omnipotent as advertised.

The Aschenbrenner thesis relies upon AI doing the work of AI researchers and recursively self-improving. That's a realm of pure software development, very alien to human intuition where machines might get more traction.

Maybe there are ways to speed up molecular modelling, maybe there's some hack that needs 10 years of uninterrupted research from a 240 IQ ubermensch to find. How could we know, we're not super-geniuses!

The Aschenbrenner thesis relies upon AI doing the work of AI researchers and recursively self-improving. That's a realm of pure software development, very alien to human intuition where machines might get more traction.

I'd agree.

Maybe there are ways to speed up molecular modelling, maybe there's some hack that needs 10 years of uninterrupted research from a 240 IQ ubermensch to find. How could we know, we're not super-geniuses!

Maybe. Maybe the massive concentration of intelligence in such a small area will lead the ASI to transcend this reality, if you want to retreat to absolute agnosticism about what ASI will be capable of. Is your position that ASI will functionally be omnipotent (or close enough that it's irrelevant to us), or that the boundaries of what it is capable of even in the near-term are unknowable but we should expect it to be capable of achieving more or less any goal?

To be clear, I don't even think it's necessarily a bad position to take when the error bars and uncertainties involved are so large. But I maintain that:

  1. People who are absolutely confident, to the point of telling others not to bother having children, that humanity is doomed due to paperclipping by omnipotent ASI.
  2. We should at least recognize that some things are hard to impossible regardless of intelligence/compute (psychohistory, precise weather prediction a year out, etc), and that there exists a gradient of difficulty for ASI as well as us. I expect that if things pan out as Aschenbrenner and Eliezer predict we'll undoubtedly be surprised in some of these areas if they discover shortcuts and workarounds that we haven't, but others have to be actual hard problems unless you're arguing in favor of omnipotence.
  3. I'm skeptical that all of these 'unhobblings' or making the leap from LLMs to godlike ASI is as trivial as many seem to expect. It certainly seems possible that the chasm could be as large as that from the earliest attempts at image classifiers in the 1960s to progress in the 2000s.

All that said, I freely admit this is far from my realm of expertise and I'll happily admit my errors if the next generation of Gemini produces an immortality pill.

People who are absolutely confident, to the point of telling others not to bother having children, that humanity is doomed due to paperclipping by omnipotent ASI.

This is not a sentence (there's no conjugated verb outside a relative clause); it's a description of a certain type of person. Did you perhaps omit something from it, such as ", are wrong" at the end?

We should at least recognize that some things are hard to impossible regardless of intelligence/compute (psychohistory, precise weather prediction a year out, etc)

No, precise weather prediction a year out is very much a compute problem. The problem is that sub-grid-scale phenomena can't be simulated fully according to physics and must be parametrised, introducing errors, and the nucleation and coalescence of cloud droplets - which is key to tropospheric dynamics - is at a micron or hundreds-of-nanometres scale so having a grid that small is computationally infeasible. I think you need something like 20-30 more OOMs of compute compared to what we currently use, but the point is that there exists a finite threshold at which your weather forecasts suddenly become exceptionally accurate (though you do, of course, also need quite precise data to put into it).

This is not a sentence (there's no conjugated verb outside a relative clause); it's a description of a certain type of person. Did you perhaps omit something from it, such as ", are wrong" at the end?

Yes, my apologies. Not necessarily 'are wrong' on the object level, but wrong in their overconfidence.

No, precise weather prediction a year out is very much a compute problem...I think you need something like 20-30 more OOMs of compute compared to what we currently use, but the point is that there exists a finite threshold at which your weather forecasts suddenly become exceptionally accurate (though you do, of course, also need quite precise data to put into it).

Sure, but from that perspective, psychohistory and economics are also very much compute problems, no? You just have to be able to accurately simulate every atom in every person involved. Chess is a compute problem too. But waving your hands and saying ASI will be able to fab a bunch of killbot drones (presumably already possible) but also be capable of precise weather prediction years out elides the fact that the latter requires an increase in our computational resources of 20-30 OOMs. Unless you're proposing such a hard takeoff that your ASI is virtually omnipotent overnight and capable of conjuring arbitrary amounts of compute at will, shouldn't there be a distinction between 'hard' and 'easy' problems? Barring shortcuts that decrease the computational difficulty of a given problem by many OOMs, but again, presumably those don't exist in every case, right? And even discovering those shortcuts could be 'hard' problems in and of themselves, dependent on other advances.

Yes, my apologies. Not necessarily 'are wrong' on the object level, but wrong in their overconfidence.

Thanks for the clarification. Yes, I certainly agree they're wrong in their overconfidence; you need all three of "metastable Butlerian Jihad can't happen even considering warning shots or WWIII interrupt", "NNAGI can't be aligned" and "unaligned AGI can definitely destroy all humans", and the first two are beyond anyone's ability to predict with the 99.9%+ confidence you need for "don't even bother trying long-term things".

(My P(AGI Doom) is under 50%, because while I consider the third practically proven and the second extremely likely, I think Butlerian Jihad is actually pretty likely - AI that probably can't be aligned is not a Prisoner's Dilemma but rather a Stag Hunt or even Concord, and neural nets seem like they could plausibly be non-RSI enough to allow for warning shots.)

Sure, but from that perspective, psychohistory and economics are also very much compute problems, no? You just have to be able to accurately simulate every atom in every person involved.

Technically, no, there are other limits in play there. Simulating every atom runs into quantum limits on determinism, and there's the halting-problem/Gödel-incompleteness issue where you're trying to predict a system that can potentially condition on your predictor's output (if you ask a predictor to predict whether I'll choose X or Y, but I can see the prediction before choosing, I can perform the strategy of "choose X if predictor says Y; choose Y if predictor says X" and thus thwart the predictor). Technically, both of those do apply to "predict the weather with perfect accuracy" (the latter because your computer itself and/or people acting on your forecast generate heat, CO2, etc.), but AIUI you could probably get to a year without the perturbations from those wrecking everything.

(Also, even if you handwave the quantum problem, there are a lot more OOMs involved in atom-perfect simulation, enough such that even at theoretical limits you're talking about a computer bigger than the system being simulated.)

But waving your hands and saying ASI will be able to fab a bunch of killbot drones (presumably already possible) but also be capable of precise weather prediction years out elides the fact that the latter requires an increase in our computational resources of 20-30 OOMs.

No, it requires an increase in the computational resources allocated to the problem of 20-30 OOMs; the difference is that AIUI not very much of current compute is used for weather prediction (because MOAR COMPUTE is pretty marginal in effectiveness until you get to the point of not having to parametrise coalescence, and within certain bounds the weather's not all that important), so the hardware for a few of those OOMs already actually exists.

That's all really interesting! Thanks a lot for the explanation, I appreciate it.

I'm not advocating omniscience or omnipotence but I reckon the power-scaling goes very high. In sci-fi, Time Lords >>> Xeelee >>> Star Wars > Mass Effect.

All of them could stomp us without much effort. AI doesn't need to be omnipotent to subjugate or destroy humanity, it only needs to be significantly stronger. Even if it can't predict a chaotic system like weather, it probably could control the large-scale outcomes by cloudseeding or manipulating reflection/absorption of heat. Even if it can't predict the economy, it could hack stock exchanges or create outcomes directly by founding new companies/distributing powerful technology.

I too agree that it's no good giving up before all the cards are revealed. But I don't think that pushing back against omnipotent AI is beneficial. It doesn't matter if the AI doesn't know how to achieve FTL travel, or if it can't reach the fundamental limits of energy/compute with some black-hole plasma abomination. That's not needed to beat us.

From our point of view, superintelligence will be unbeatable; it may as well be omnipotent. I am highly confident that we won't be able to detect its presence or malignity until it's far too late. Once we do, we won't be able to act quickly enough to deal with it or even communicate securely. If we even get to fighting, I expect it to run rings around our forces tactically and strategically. Maybe that's nanomachines or mosquito-robot swarms or more conventional killbots and nukes. Maybe it's over in hours, maybe it takes a slow and steady path, acting via proxies and deceit to cautiously advance its goals.

That being said...people don't genuinely expect ASI to be omnipotent, right?

From reading (almost) the entire Sequences on LessWrong back in the day, it's less 'omnipotent' and more 'as far above humans as humans are above ants.' There are hard limits on 'intelligence' if we simply look at stuff like the Landauer limit, but the conceit seems to be that once we have an AGI that is capable of recursive self-improvement, it'll go FOOM and start iterating asymptotically close to those limits, and it will start reaching out into the local arm of the galaxy to meet its energy needs.

It's not too hard to imagine: if the stories about John von Neumann are accurate, then maximum human intelligence is already quite powerful on its own, and there's no reason to think that human brains are the most efficient design possible. If we can 'merely' simulate 500 Von Neumanns and put them to the task of improving our AI systems, we'd expect they'd make 'rapid' progress, no?

Put a different way, I expect that the hard/soft science divide will continue to exist the same way that I can still beat AlphaZero at chess if you put me up a queen in the endgame.

It's a good analogy, but imagine if AlphaZero, whose sole goal was 'win at chess,' were given the ability to act beyond the chessboard. Maybe it offers you untold riches if you just resign or sacrifice the queen. Maybe it threatens you or your family with retribution. Maybe it acquires a gun and shoots you.

I do worry that humans are too focused on the 'chessboard' when a true superintelligence would be focused on a much, much larger space of possible moves.

One thing that worries me is that a superintelligence might be much better at foreseeing third or fourth order effects of given actions, which would allow it to make plans that will eventually result in outcomes it desires but without alerting humans to the outcome because it is only in the interaction of these various effects that its intended goal comes about.

So, even if I'm putting my foot in my mouth and the definitive breakthrough in aging research will be published tomorrow, anyone telling you that a drug is less than 10 years out is almost certainly wrong.

Certainly, I'm more focused on the 'escape velocity' argument, where an advance that gets us another ten years of healthy life on average makes it that much more likely that we'll be alive for the next advance that gives us 20 cumulative additional years of life, which makes it more likely we'll be around when the REALLY good stuff is discovered. I haven't seen any 'straight lines' of progress on extending lifespan that suggest this is inevitable, though, whereas I CAN see these with AI and demographics, as stated.

An interesting tactic I could see working is trying to extend dogs' lives, because NOBODY will object to this project, and if it works it should, in theory, get a lot of funding and produce insights that are in fact useful for human lifespan. So perhaps we see immortal dogs before immortal mice?

I am not surprised there'd be a grifter problem, because it is really easy to 'wow' people with scientific-sounding gobbledygook, sell them on promises of life extension via [miracle substance], and get rich all while knowing they won't know they've been had until literal decades later when they are still aging as usual. I also somewhat hate that cosmetic surgery and other tech (like hair dye) is effective enough that someone can absolutely make the claim that they're aging slower than 'natural' but in reality they just cover up the visible effects of aging.

Finally, on this point:

Things that seemed inevitable can reverse themselves fairly easily, and I'd agree with /u/2rafa that we haven't seriously tried to reverse the trend.

This is a bit different, because while we can't necessarily know the upper limit on the earth's carrying capacity for humans... we sure as hell know that it's possible for the human population to go to zero. It's safe to say that population growth will reverse because eventually we hit a limit. But I don't see any inbuilt reason why population decline need reverse anytime soon.

And Zeihan's strong argument is that even if we start pumping out kids today, it'll be 20 or so years before this new baby boom can even begin to be productive, so we're still in for a period of strain during that time where we lose productive members of society to retirement and death, and are spending tons of money on raising the next generation, meaning the actual productive generations have to provide support for both their parents and their own kids and may not be able to invest in other productive uses of capital. Which would imply a period of stagnation at least.

That is, we can't instantly replace a declining population of working-age adults merely by having more kids now since kids take time to grow and become productive. So a lot of the suck is already 'baked in' at this point, where a reversal in the trend doesn't prevent the actual problem from arising.

An interesting tactic I could see working is trying to extend dogs' lives, because NOBODY will object to this project, and if it works it should, in theory, get a lot of funding and produce insights that are in fact useful for human lifespan. So perhaps we see immortal dogs before immortal mice?

Purebred dogs are similar to inbred mouse lines and it's difficult to extrapolate what would happen trying to translate that data to humans. Their compounds are just IGF-1 inhibitors and these kinds of metabolic inhibitors have already been shown to not translate well to outbred non-human primate (NHP) models. This approach is analogous to the telomerase knockout mice living longer if you replace telomerase (i.e. large dogs have more IGF-1, decrease IGF-1 activity and you may be back in line with smaller dogs).

They'll probably make bank selling life extension to dog owners though.

We're very bad at both understanding and manipulating complex traits. Beyond this, aging just seems categorically different in a way we haven't grasped yet. I doubt I'll be able to convince you on this point though.

I haven't seen any 'straight lines' of progress on extending lifespan that suggest this is inevitable, though

That's because they don't exist, and it certainly isn't inevitable. We've made minimal progress understanding aging (for all the bullshit that gets published), and no progress in treating it.

If we can 'merely' simulate 500 Von Neumanns and put them to the task of improving our AI systems, we'd expect they'd make 'rapid' progress, no?

I expect them to make rapid progress in software, computer hardware, engineering, and other domains where humans have rationally designed systems that are entirely understood. I expect them to struggle with the social sciences, psychology, and economics, and to a large extent, biology. If Magnus Carlsen is to AlphaZero as Noam Chomsky is to SociologyGPT6... what does that even look like? Not Hari Seldon, but what, exactly?

I'm not sure what to expect for physics, chemistry and math.

From reading (almost) the entire Sequences on LessWrong back in the day, it's less 'omnipotent' and more 'as far above humans as humans are above ants.' There are hard limits on 'intelligence'

I read chunks of them as well and think they pulled the wrong lessons from them, but regardless, my argument is that there are relatively hard limits on what intelligence can simulate/manipulate in the physical world (i.e. ASI won't be simply omnipotent) rather than there being limits on how much compute you can throw at a problem. What if your ASI says sorry, simulation of a human body or tissue is too computationally intensive even for me, I can't do this deterministically. Here is an experimental plan that I can guarantee will help us make you immortal: it just requires breeding a billion mice, testing these trillion compounds, and then moving some candidates into testing in a relatively simple 100,000 NHPs. What if, in other words, your Multivac tells you the meaning of the universe is 42, or there is as yet insufficient data to answer your questions as it chugs along?

I'm also curious whether there will be diminishing returns to adding more training data as it runs out of human knowledge to learn, and whether turning it into an agent that can reason from first principles rather than regurgitating scientific reviews to me will be as trivial as everyone assumes (image classification seems like it may be an interesting reference class, but I could just be an ignorant moron several degrees removed from the ground on ML).