As of late I've really lost any deep interest in any culture war issues. I still enjoy talking about them, but even the 'actually important' matters like Trump's trials and possible re-election or the latest Supreme Court cases or the roiling racial tensions of the current era seem to be sideshows at best compared to the two significant developments which stand to have greater impact than almost all other matters combined:
- Humanity is seemingly a hop, skip, and/or jump away from the emergence of true AGI.
- Humanity is also locked into a demographic decline that will eventually disrupt the stable global order and world economy. No solutions tried so far have worked or even shown promise. It may be too late for any solution to prevent the decline.
I do reserve some interest for the chance that SpaceX is going to jump-start industry in low earth orbit, and for longevity/anti-aging science, which seems poised for some large leaps. Yet the issues of declining human population and its downstream effects on globalization, as well as the potential for human-level machine intelligence, seem to utterly overshadow almost any other issue we could discuss, short of World War III or the appearance of another pandemic.
And these topics are getting mainstream attention as well. There's finally space to discuss the topics of smarter-than-human AI and less-fertile-than-panda humans in less niche forums and actual news stories that start raising questions.
I recently read the Situational Awareness report by Leopold Aschenbrenner, which is a matter-of-fact update on where things absolutely seem to be heading if straight lines continue to be straight for the next few years. I find it convincing if not compelling, but the argument that we might hit AGI around 2027 (with large error bars) no longer appears absurd. This is the first time I've read a decent attempt at extrapolating out when we could actually expect to encounter the "oh shit" moment when a computer is clearly able to outperform humans not just in limited domains, but across the board.
As for the collapsed birthrates, Peter Zeihan has been the most 'level-headed' of the prognosticators here. Once again, I find the case fairly convincing, even compelling: we will end up with far too few working-age, productive citizens trying to hold up civilization as the older generations age into retirement and switch to full-time consumption. Once again, you only have to believe that straight lines will keep going straight to believe this outcome is approaching in the near future. The full argument is more complex.
The one thing that tickles me, however, is how these two 'inevitable' results are intrinsically related! AI + robotics offers a handy method to boost productivity even as your population ages. On the negative side, only a highly wealthy, productive, educated, and globalized civilization can produce the high technology that enables current AI advances. The Aschenbrenner report up there unironically expects that hundreds of millions of chips will be brought online and that global electricity production will increase by 10% before 2030ish. Anything that interrupts chip production puts a kink in these AGI timelines. If demographic changes have as much of an impact as Zeihan suggests, they could push those timelines back beyond the current century unless there's another route to producing all the compute and power the training runs will require.
So I find myself staring at the lines representing the increasing size of LLMs, the increasing amount of compute being deployed, the increasing funding being thrown at AI companies and chip manufacturers, and the increasing "performance" of the resultant models and then staring at the lines that represent plummeting birthrates in developed countries, and a decrease in the working age population, and thus the decrease in economic productivity that will likely result. Add on the difficulty of maintaining a peaceful, globalized economy under these constraints.
And it sure seems like the entire future of humanity hinges on which of these lines hits a particular inflection point first. And I sure as shit don't know which one it'll be.
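For my own napkin math, the "race" between those two sets of lines can be sketched as a pair of straight-line extrapolations. Every number below is a placeholder I made up for illustration, not a real forecast:

```python
# Toy model: which trend crosses its threshold first?
# All growth rates and thresholds are illustrative placeholders, not real data.

def years_until(start, annual_rate, threshold):
    """Years for a quantity changing at a fixed annual rate to cross a threshold."""
    value, years = start, 0
    # Growing quantities run until they exceed the threshold;
    # shrinking quantities run until they fall below it.
    while (value < threshold if annual_rate > 1 else value > threshold):
        value *= annual_rate
        years += 1
    return years

# Hypothetical: effective training compute grows ~4x/year, AGI at 10,000x today's level.
agi_years = years_until(start=1.0, annual_rate=4.0, threshold=1e4)

# Hypothetical: working-age population shrinks ~1%/year, serious trouble at 80% of today's.
demo_years = years_until(start=1.0, annual_rate=0.99, threshold=0.8)

print(agi_years, demo_years)  # whichever is smaller "wins" the race
```

The point isn't the specific outputs, which are garbage-in, garbage-out; it's that the whole forecast swings on which made-up rate and threshold you believe.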
I'd condense the premises of my position thusly:
Energy production and high-end computer chip production are necessary inputs to achieving AGI on any timeline whatsoever. Both are extremely susceptible to demographic collapse and deglobalization. If significant deglobalization of trade occurs, no country will have the capacity to produce enough chips and energy to achieve AGI.
and
Human-level AGI that can perform any task a human can will resolve almost any issue posed by demographic decline, in terms of economic productivity and maintaining a globalized, civilized world.
Or more succinctly: If deglobalization arrives first, we won't achieve AGI. If AGI arrives first, deglobalization will be obviated.
Peter Zeihan argues that AI won't prevent the chaos. As for the AGI prophets, I have rarely, in fact almost never, seen decreasing population levels included as a variable in their calculations of AI timelines.
The sense this gives me is that the AGI guys don't include demographic collapse as a risk to AGI timelines in their models of the world. Yes, they account for things like interruptions to chip manufacturing as a potential problem, but not for such interruptions arising because there aren't enough babies. And those worrying about demographic collapse discount the odds of AGI arriving in time to prevent the coming chaos.
So I find myself constantly waffling between the expectation that we'll see a new industrial revolution as AI tech creates a productivity boom (before it kills us all or whatever), and the expectation that the entire global economy will slowly tear apart at the seams and we'll see a return to lower tech levels out of necessity. How can I fine-tune my prediction when the outcomes are so divergent in nature?
And more importantly, how can I arrange my financial bets so as to hedge against the major downsides of either outcome?
Also, yes, I'm discounting the arguments about superintelligence altogether, and assuming that we'll have some period of time where the AI is useful and friendly before becoming far too intelligent to be controlled, which lets us enjoy the benefits of the tech. I do not actually believe this assumption, but it is necessary if I'm to have any discourse about AGI at all without falling back on the issue of possible human extinction.
Notes -
One of the 'hedges' against AI job loss I inadvertently made over the past several years was becoming a self-defense instructor, which is almost entirely dependent on being physically dexterous in the real world.
Naively, I'd imagine it will be a long time before there are robots able to teach and demonstrate martial arts techniques, especially when teaching them requires physically interacting with and throwing other humans around, because a human needs to learn from an instructor analogous enough to a human that they can easily imitate its motions.
So yeah, I've noticed the massive differential between how effectively current AIs and LLMs manipulate bits vs. atoms. The big one is that full self-driving cars are still struggling to navigate a vehicle around the real world, a skill many humans develop by age 18.
But this seems like one of those obstacles that will seem insurmountable until suddenly it is not.
I think my own personal bellwether on this issue is autonomous racing: when autonomous Formula 1 cars start beating human drivers, I'll notice, and worry.
The current state, however:
https://www.theverge.com/2024/4/27/24142989/a2rl-autonomous-race-cars-f1-abu-dhabi
https://a2rl.io/news/28/man-beat-machine-in-the-first-human-autonomous-car-race
https://techcrunch.com/2024/04/30/inside-autonomous-racing-league-event-self-driving-car-against-formula-1-driver/
I do think that AI proponents seem... premature in crowing about how powerful their creations are, when those creations aren't very good at making things actually happen in the real world.
In industrial robotics, there are two ways to get consistency, reliability, efficiency, and speed:
Manufacturing automation designs machines that break complex assembly problems into many separate sub-problems that can be solved by simple motions under a strict set of inputs. Much of the complexity of these machines is in developing schemes to guarantee the shape, weight, orientation, and velocity of pipelined precursors. Nearly all will "fail gracefully" under some unplanned set of input conditions at any stage in the pipeline - in other words, give up and complain to a human operator to fix the problem.
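A minimal sketch of that "fail gracefully" pattern - the station, part dimensions, and tolerances below are all hypothetical, but the shape is the point: check the input against a strict spec, do one simple motion if it passes, otherwise stop and complain to a human:

```python
# Sketch of one pipelined station that only handles a strict input spec.
# Station name, part fields, and tolerances are made up for illustration.

class OperatorAssistRequired(Exception):
    """Raised when the station gives up and complains to a human operator."""

def press_fit_station(part):
    # Guarantee shape and orientation before attempting the simple motion.
    if not (9.9 <= part["diameter_mm"] <= 10.1):
        raise OperatorAssistRequired(f"out-of-spec diameter: {part['diameter_mm']}")
    if part["orientation"] != "pin_up":
        raise OperatorAssistRequired("part mis-oriented; halting line")
    # Within spec: one simple, fixed motion is all the station needs.
    part["pressed"] = True
    return part

good = press_fit_station({"diameter_mm": 10.0, "orientation": "pin_up"})
```

All the engineering effort lives in the upstream guarantees that make those two `if` checks almost never fire.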
The value of AI in robotics is that it can help plan motion in uncontrolled environments. This motion could be simple or complex, but most examples you'll see are simple. For industrial robotics, this might look like machine vision and simple actuators to adjust orientation of inputs, or automated optical inspection to check for defects on outputs. But the whole value of automation is improvements over human benchmarks on the metrics listed above, and given the choice between designing a general purpose robot or a highly specialized machine, the specialist almost always ends up simpler, cheaper, and better at what its designers want it to do.
Self-driving cars are one of a small handful of applications where the mechanics are straightforward, but the environment is chaotic. The moving parts are all outrageously simple, even for racecars: the wheels tilt to steer, the wheels roll to accelerate, the brakes clamp to decelerate. The mechanisms that make each of these motions happen have a century of engineering behind them, of which many decades have been spent enhancing reliability and robustness, optimizing cost, etc. The only "hard" problem is safely navigating the uncontrolled environment - which makes it a slam-dunk next-step, since the unsolved problem is the only problem that needs focus.
The average blue collar laborer is combining dozens of separate actuators along many degrees of freedom to perform thousands of unique complex motions over the course of a workday. I have no doubt that advances in AI could plan this kind of motion, given a suitable chassis - but actuators with power comparable to their human analogues cannot be manufactured in a size and form factor that fits into the shape of a standard human body. Take a look at the trends in motor characteristics for the past few decades, particularly figure 8 (torque vs weight) - neodymium magnets and solid state electronics made brushless DC motors feasible, which greatly improved power density and efficiency, but only modestly improved the torque-to-weight ratio.

At the end of the day, physics and material science put limits on what you can manufacture, and what you can accomplish in a given volume. And the kinds of machines we can improve - mostly motors - have to translate their motions along many axes, adding more volume, weight, and cost. Comparatively, human muscle is an incredibly space-efficient, flexible linear actuator, and while we can scale up hydraulics and solenoids to much greater (bidirectional!) forces, this comes with a proportional increase in mass and volume. This actually isn't so bad for large muscles like arms and legs, but for hands (i.e. the thing we need to hold all the tools) there just aren't many practical solutions for the forces required on all the required degrees of freedom.
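To put rough numbers on why raw motor power density doesn't buy you muscle-like torque in a small package: small motors spin fast with little torque, so you need gearing to reach joint-level torques, and the gearbox, bearings, and housing eat into the mass budget. Every figure below is a ballpark assumption for illustration, not measured data:

```python
# Back-of-envelope: torque density of a small geared motor.
# All figures are rough assumptions for illustration, not measured data.

motor_mass_kg = 0.05      # small brushless outrunner (assumed)
motor_torque_nm = 0.08    # direct-drive torque at that size (assumed)
gear_ratio = 100          # reduction needed to reach joint-level torque (assumed)
gearbox_mass_kg = 0.07    # gearing + bearings + housing (assumed)

output_torque_nm = motor_torque_nm * gear_ratio  # ignoring gear losses
torque_density = output_torque_nm / (motor_mass_kg + gearbox_mass_kg)

print(f"{output_torque_nm:.1f} N*m output at {torque_density:.0f} N*m/kg")
```

Tweak the assumptions however you like; the structural problem remains that the reduction stage more than doubles the actuator mass before you've added any structure to react the loads, and a hand needs dozens of these in a fist-sized volume.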
In terms of what could suddenly change the equation, I suppose there are a few things to watch out for:
My bet is on neither of these things happening any time soon. Basically every university in the world has an artificial hand or two under development, and they all suck. State of the art routinely costs six figures, weighs 5 kg, and moves slowly even in 4x-speed promo videos. It's been this way for decades and it isn't really getting better. Human hands enjoy a massive, durable nanomachinery advantage.