As of late I've really lost any deep interest in any culture war issues. I still enjoy talking about them, but even the 'actually important' matters like Trump's trials and possible re-election or the latest Supreme Court cases or the roiling racial tensions of the current era seem to be sideshows at best compared to the two significant developments which stand to have greater impact than almost all other matters combined:
- Humanity is seemingly a hop, skip, and/or jump away from the emergence of true AGI.
- Humanity is also locked into a demographic decline that will eventually disrupt the stable global order and world economy. No solution tried so far has worked or even shown promise, and it may already be too late for any solution to prevent the decline.
I do reserve some interest for the chance that SpaceX will jump-start industry in low Earth orbit, and for longevity/anti-aging science, which seems poised for some large leaps. Yet the issues of declining human population and its downstream effects on globalization, as well as the potential for human-level machine intelligence, utterly overshadow almost any other issue we could discuss, short of World War III or the appearance of another pandemic.
And these topics are getting mainstream attention as well. There's finally space to discuss the topics of smarter-than-human AI and less-fertile-than-panda humans in less niche forums and actual news stories that start raising questions.
I recently read the Situational Awareness report by Leopold Aschenbrenner, which is a matter-of-fact update on where things seem to be heading if straight lines continue to be straight for the next few years. I find it convincing, if not quite compelling, and the argument that we might hit AGI around 2027 (with large error bars) no longer appears absurd. This is the first time I've read a decent attempt at extrapolating when we could actually expect to encounter the "oh shit" moment when a computer is clearly able to outperform humans not just in limited domains, but across the board.
As for the collapsed birthrates, Peter Zeihan has been the most 'level-headed' of the prognosticators here. Once again, I find the case fairly convincing: we will end up with far too few working-age, productive citizens trying to hold up civilization as the older generations age into retirement and switch to full-time consumption. And once again, you only have to believe that straight lines will keep going straight to believe that this outcome is approaching in the near future. The full argument is more complex.
The one thing that tickles me, however, is how these two 'inevitable' results are intrinsically related! AI + robotics offers a handy method to boost productivity even as your population ages. On the negative side, only a highly wealthy, productive, educated, and globalized civilization can produce the high technology that enables current AI advances. The Aschenbrenner report unironically expects that hundreds of millions of chips will be brought online and that global electricity production will increase by 10% before 2030 or so. Anything that might interrupt chip production puts a kink in these AGI timelines. If demographic changes have as much of an impact as Zeihan suggests, they could push those timelines back beyond the current century, unless there's another route to producing all the compute and power the training runs will require.
So I find myself staring at the lines representing the increasing size of LLMs, the increasing amount of compute being deployed, the increasing funding being thrown at AI companies and chip manufacturers, and the increasing "performance" of the resultant models and then staring at the lines that represent plummeting birthrates in developed countries, and a decrease in the working age population, and thus the decrease in economic productivity that will likely result. Add on the difficulty of maintaining a peaceful, globalized economy under these constraints.
And it sure seems like the entire future of humanity hinges on which of these lines hits a particular inflection point first. And I sure as shit don't know which one it'll be.
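The "which line bends first" framing above can be made concrete with a toy extrapolation. To be clear, every number below is a made-up placeholder for illustration, not a forecast from Aschenbrenner, Zeihan, or anyone else; the point is only the mechanic of racing two compounding trends against their respective thresholds.

```python
# Toy model: race two compounding trend lines against their thresholds.
# All rates and thresholds here are illustrative placeholders, NOT real
# forecasts -- they only demonstrate the "which arrives first" framing.

def years_until(start, annual_rate, threshold):
    """Years until a quantity compounding at annual_rate crosses threshold.

    Growth (rate > 0) counts years until the value rises to the threshold;
    decline (rate < 0) counts years until it falls to the threshold.
    """
    value, years = start, 0
    while (value < threshold) if annual_rate > 0 else (value > threshold):
        value *= 1 + annual_rate
        years += 1
    return years

# Hypothetical: effective training compute grows 50%/yr, and "AGI-scale"
# compute is 100x today's level.
agi_eta = years_until(start=1.0, annual_rate=0.50, threshold=100.0)

# Hypothetical: the working-age population shrinks 1%/yr, and a 20% drop
# is taken as the point where globalized chip supply chains break down.
crunch_eta = years_until(start=1.0, annual_rate=-0.01, threshold=0.80)

print(agi_eta, crunch_eta)  # whichever is smaller "arrives first"
```

Under these invented parameters the compute line wins the race comfortably, but the whole exercise shows how sensitive the answer is to the assumed rates: nudge the growth rate down or the decline rate up and the ordering flips, which is exactly the uncertainty described above.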
I'd condense the premises of my position thusly:
Energy production and high-end chip production are necessary inputs to achieving AGI on any timeline whatsoever. Both are extremely susceptible to demographic collapse and deglobalization. If significant deglobalization of trade occurs, no country will have the capacity to produce enough chips and energy to achieve AGI.
and
Human-level AGI that can perform any task that humans can will resolve almost any issue posed by demographic decline, in terms of both economic productivity and maintaining a globalized, civilized world.
Or more succinctly: If deglobalization arrives first, we won't achieve AGI. If AGI arrives first, deglobalization will be obviated.
Peter Zeihan argues that AI won't prevent the chaos. As for the AGI prophets, I have rarely, in fact almost never, seen decreasing population levels appear as a variable in their calculations of AI timelines.
The sense this gives me is that the AGI guys don't include demographic collapse as a risk to AGI timelines in their model of the world. Yes, they account for interruptions to chip manufacturing as a potential problem, but not for interruptions that come about because there aren't enough babies. And those worrying about demographic collapse discount the odds of AGI arriving in time to prevent the coming chaos.
So I find myself constantly waffling between the expectation that we'll see a new industrial revolution as AI tech creates a productivity boom (before it kills us all or whatever), and the expectation that the entire global economy will slowly tear apart at the seams and we'll see a return to lower tech levels out of necessity. How can I fine-tune my prediction when the outcomes are so divergent in nature?
And more importantly, how can I arrange my financial bets so as to hedge against the major downsides of either outcome?
Also, yes, I'm discounting the arguments about Superintelligence altogether, and assuming that we'll have some period of time where the AI is useful and friendly before becoming far too intelligent to be controlled, which lets us enjoy the benefits of the tech. I do not believe this assumption, but it is necessary for me to have any discourse about AGI at all without foundering on the issue of possible human extinction.
Notes -
I'm not advocating omniscience or omnipotence but I reckon the power-scaling goes very high. In sci-fi, Time Lords >>> Xeelee >>> Star Wars > Mass Effect.
All of them could stomp us without much effort. AI doesn't need to be omnipotent to subjugate or destroy humanity, it only needs to be significantly stronger. Even if it can't predict a chaotic system like weather, it probably could control the large-scale outcomes by cloudseeding or manipulating reflection/absorption of heat. Even if it can't predict the economy, it could hack stock exchanges or create outcomes directly by founding new companies/distributing powerful technology.
I too agree that it's no good giving up before all the cards are revealed. But I don't think that pushing back against omnipotent AI is beneficial. It doesn't matter if the AI doesn't know how to achieve FTL travel, or if it can't reach the fundamental limits of energy/compute with some black-hole plasma abomination. That's not needed to beat us.
From our point of view, superintelligence will be unbeatable; it may as well be omnipotent. I am highly confident that we won't be able to detect its presence or malignity until it's far too late. Once we do, we won't be able to act quickly enough to deal with it, or even communicate securely. If we even get to fighting, I expect it to run rings around our forces tactically and strategically. Maybe that's nanomachines or mosquito-robot swarms or more conventional killbots and nukes. Maybe it's over in hours, or maybe it takes a slow and steady path, acting via proxies and deceit to cautiously advance its goals.