As of late I've really lost any deep interest in any culture war issues. I still enjoy talking about them, but even the 'actually important' matters like Trump's trials and possible re-election or the latest Supreme Court cases or the roiling racial tensions of the current era seem to be sideshows at best compared to the two significant developments which stand to have greater impact than almost all other matters combined:
- Humanity is seemingly a hop, skip, and/or jump away from the emergence of true AGI.
- Humanity is also locked into a demographic decline that will eventually disrupt the stable global order and world economy. No solutions tried so far have worked or even shown promise. It may be too late for such solutions to prevent the decline.
I do reserve some interest for the chance that SpaceX will jump-start industry in low Earth orbit, and for longevity/anti-aging science, which seems poised for some large leaps. Yet the issue of declining human population, with its downstream effects on globalization, and the potential for human-level machine intelligence seem to utterly overshadow almost any other issue we could discuss, short of World War III or another pandemic.
And these topics are getting mainstream attention as well. There's finally space to discuss the topics of smarter-than-human AI and less-fertile-than-panda humans in less niche forums and actual news stories that start raising questions.
I recently read the Situational Awareness report by Leopold Aschenbrenner, a matter-of-fact update on where things absolutely seem to be heading if straight lines continue to be straight for the next few years. I find it convincing, if not fully compelling, and the argument that we might hit AGI around 2027 (with large error bars) no longer appears absurd. This is the first decent attempt I've read at extrapolating when we could actually expect the "oh shit" moment when a computer is clearly able to outperform humans not just in limited domains, but across the board.
As for the collapsed birthrates, Peter Zeihan has been the most 'level-headed' of the prognosticators here. Once again I find the case fairly convincing, even compelling: we will end up with far too few working-age, productive citizens trying to hold up civilization as the older generations age into retirement and switch to full-time consumption. Once again, you only have to believe that straight lines will keep going straight to believe this outcome is approaching in the near future. The full argument is more complex.
The one thing that tickles me, however, is how these two 'inevitable' results are intrinsically related! AI + robotics offers a handy way to boost productivity even as your population ages. On the negative side, only a highly wealthy, productive, educated, and globalized civilization can produce the high technology that enables current AI advances. The Aschenbrenner report unironically expects hundreds of millions of chips to be brought online and global electricity production to increase by 10% before 2030 or so. Anything that interrupts chip production puts a kink in these AGI timelines. If demographic changes have as much of an impact as Zeihan suggests, they could push those timelines back beyond the current century unless there's another route to producing all the compute and power the training runs will require.
So I find myself staring at the lines representing the increasing size of LLMs, the increasing amount of compute being deployed, the increasing funding being thrown at AI companies and chip manufacturers, and the increasing "performance" of the resultant models and then staring at the lines that represent plummeting birthrates in developed countries, and a decrease in the working age population, and thus the decrease in economic productivity that will likely result. Add on the difficulty of maintaining a peaceful, globalized economy under these constraints.
And it sure seems like the entire future of humanity hinges on which of these lines hits a particular inflection point first. And I sure as shit don't know which one it'll be.
I'd condense the premises of my position thusly:
Energy production and high-end chip production are necessary inputs to achieving AGI on any timeline whatsoever. Both are extremely susceptible to demographic collapse and deglobalization. If significant deglobalization of trade occurs, no country will have the capacity to produce enough chips and energy to achieve AGI.
and
Human-level AGI that can perform any task that humans can will resolve almost any issues posed by demographic decline in terms of economic productivity and maintaining a globalized, civilized world.
Or more succinctly: If deglobalization arrives first, we won't achieve AGI. If AGI arrives first, deglobalization will be obviated.
Peter Zeihan argues that AI won't prevent the chaos. As for the AGI prophets, I have rarely, in fact almost never, seen decreasing population levels appear as a variable in their calculations of AI timelines.
The sense this gives me is that the AGI guys don't include demographic collapse as a risk to AGI timelines in their model of the world. Yes, they account for things like interruptions to chip manufacturing as potential problems, but not for such interruptions coming about because there aren't enough babies. And those worrying about demographic collapse discount the odds of AGI arriving in time to prevent the coming chaos.
So I find myself constantly waffling between the expectation that we'll see a new industrial revolution as AI tech creates a productivity boom (before it kills us all or whatever), and the expectation that the entire global economy will slowly tear apart at the seams and we'll see a return to lower tech levels out of necessity. How can I fine-tune my prediction when the outcomes are so divergent in nature?
And more importantly, how can I arrange my financial bets so as to hedge against the major downsides of either outcome?
Also, yes, I'm discounting the arguments about superintelligence altogether and assuming that we'll have some period of time where the AI is useful and friendly before becoming far too intelligent to be controlled, which lets us enjoy the benefits of the tech. I do not actually believe this assumption, but it is necessary for me to have any discourse about AGI at all without falling on the issue of possible human extinction.
Notes -
150 years ago we were thought to be locked into a Malthusian explosion whereby widespread famine and war were inevitable as we bred like moties and exhausted our resources. Things that seemed inevitable can reverse themselves fairly easily, and I'd agree with /u/2rafa that we haven't seriously tried to reverse the trend. People respond to incentives, and if the current regime incentivizes DINKs, there's no reason we can't create a new one that punishes them. If nothing else, childfree people are greatly outnumbered by people with children.
Don't hold your breath my man. The longevity/anti-aging field (if I can be permitted to throw a bit of shade for a moment) suffers from a profound lack of talent and attention from the wider scientific community. First, consider that any new discovery takes 10-15 years to be translated into the clinic (see: CRISPR 'discovered' for realsies in 2012, first clinical trials in people started in the early 2020s). So, even if I'm putting my foot in my mouth and the definitive breakthrough in aging research will be published tomorrow, anyone telling you that a drug is less than 10 years out is almost certainly wrong. If you show me the first immortal mouse, I'll get excited and think that maybe we could translate a human drug in 5-10 years, although even then we often fail! (see: Alzheimer's, MS, most oncology drugs)
Second, consider that whatever neo-Rasputin tells you, we genuinely have no clue how aging works, let alone how to manipulate it productively. All the conjecture about seven forms of cell damage will remain conjecture until someone actually manipulates any of those things, and makes an otherwise healthy, wild-type mouse live significantly longer. Rapamycin/caloric restriction probably doesn't clear that bar (see section on CR) and doesn't work in higher mammals for that or some other reason, and putting telomerase back into a mouse with progeria certainly does not.
Thirdly, consider that the academics in the space suffer from a profound lack of ambition/vision, while those who have either are, unfortunately, grifters. See: Calico, which launched with $3.5 billion (massive for a biotech):
Their ALS drug is interesting, but in the last 11 years most of what they've produced is more academic naked mole rat sequencing papers, plus what looks like a pivot into oncology and 'age-related diseases' rather than aging. I'd elaborate on the grifter side, but that would probably ruffle too many feathers to be worth it.
I recently read it too, and listened to the >4 hour podcast. It's certainly interesting, and I won't pretend to be in a position to judge any of the content regarding AI/ML which is far outside my wheelhouse.
That being said...people don't genuinely expect ASI to be omnipotent, right? Like, I assume Hari Seldon psychohistory-level AI just isn't possible, or is far enough away to be irrelevant. I also expect that the abilities of ASI to manipulate nature will mirror our own. That is to say, I expect them to be god-tier engineers and coders, but while I expect them to be capable of running circles around people in the stock markets, generating a flawless model of the economy that can predict any event seems virtually impossible. Put a different way, I expect that the hard/soft science divide will continue to exist the same way that I can still beat AlphaZero at chess if you put me up a queen in the endgame.
All that to say, when I try to use AI today for biology research it's strictly limited by what we already know. If I ask it for novel theories about aging, it spits out word salad that I can read in any old review on Pubmed rather than modeling the world from the ground up and generating a new hypothesis. Perhaps this is one of the 'unhobblings' Mr. Aschenbrenner references, or perhaps the models I have access to have been RLHF'd away from hallucinating anything interesting, but it's not clear to me how throwing more Pubmed articles into the training data set is going to address this problem. And even then, if some emergent quality enables it to piece together a broad worldview, it's not clear to me how the eschatological diamondoid-bacteria scenario is possible (setting aside how dumb an idea diamondoid bacteria are compared to much easier options for eradicating humans). Molecular dynamics simulations just seem too computationally intensive to do in silico experiments deterministically (although I'm not super knowledgeable about this field and would be curious if anyone else here has any input) at the cellular level, which requires some black boxing, which requires empirical experiments in the lab...
To be clear, I'm bullish on AI and even bullish on AI in biology. Nothing would make me happier than some godlike AI oracle that could satisfy my curiosity, I'm just skeptical that even the ASIs pitched by proponents like Aschenbrenner will be as omnipotent as advertised.
The Aschenbrenner thesis relies upon AI doing the work of AI researchers and recursively self-improving. That's a realm of pure software development, alien to human intuition, where machines might get more traction.
Maybe there are ways to speed up molecular modelling, maybe there's some hack that needs 10 years of uninterrupted research from a 240 IQ ubermensch to find. How could we know? We're not super-geniuses!
I'd agree.
Maybe. Maybe the massive concentration of intelligence in such a small area will lead the ASI to transcend this reality, if you want to retreat to absolute agnosticism about what ASI will be capable of. Is your position that ASI will functionally be omnipotent (or close enough that it's irrelevant to us), or that the boundaries of what it is capable of even in the near-term are unknowable but we should expect it to be capable of achieving more or less any goal?
To be clear, I don't even think it's necessarily a bad position to take when the error bars and uncertainties involved are so large. But I maintain that:
All that said, I freely admit this is far from my realm of expertise and I'll happily admit my errors if the next generation of Gemini produces an immortality pill.
This is not a sentence (there's no conjugated verb outside a relative clause); it's a description of a certain type of person. Did you perhaps omit something from it, such as ", are wrong" at the end?
No, precise weather prediction a year out is very much a compute problem. The problem is that sub-grid-scale phenomena can't be simulated fully according to physics and must be parametrised, introducing errors, and the nucleation and coalescence of cloud droplets - which is key to tropospheric dynamics - is at a micron or hundreds-of-nanometres scale so having a grid that small is computationally infeasible. I think you need something like 20-30 more OOMs of compute compared to what we currently use, but the point is that there exists a finite threshold at which your weather forecasts suddenly become exceptionally accurate (though you do, of course, also need quite precise data to put into it).
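To make the scaling intuition concrete, here's a back-of-envelope sketch. The dx^-4 scaling and the specific resolutions are my own simplifying assumptions, not from the comment above: for an explicit 3D simulation, halving the grid spacing gives roughly 8x the cells and, via the CFL condition, roughly 2x the timesteps, so work grows as roughly the fourth power of refinement.

```python
import math

def extra_ooms(dx_now_m, dx_target_m, scaling_exponent=4):
    """Extra orders of magnitude of compute needed to refine an explicit
    3D grid from dx_now to dx_target, assuming work ~ dx**-scaling_exponent
    (8x cells per halving of dx, plus ~2x timesteps from the CFL condition)."""
    return scaling_exponent * math.log10(dx_now_m / dx_target_m)

# Global models resolve roughly 1 km today; droplet coalescence lives at
# roughly 1 micron. This crude estimate lands in the neighborhood of the
# 20-30 OOM figure above (it overshoots because a real model wouldn't need
# micron resolution everywhere).
print(round(extra_ooms(1e3, 1e-6)))  # 36
```

The point of the exercise is just that the threshold is finite and computable, not that these particular numbers are right.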
Yes, my apologies. Not necessarily 'are wrong' on the object level, but wrong in their overconfidence.
Sure, but from that perspective, psychohistory and economics are also very much compute problems, no? You just have to be able to accurately simulate every atom in every person involved. Chess is a compute problem too. But waving your hands and saying ASI will be able to fab a bunch of killbot drones (presumably already possible) but also be capable of precise weather prediction years out elides the fact that the latter requires an increase in our computational resources of 20-30 OOMs. Unless you're proposing such a hard takeoff that your ASI is virtually omnipotent overnight and capable of conjuring arbitrary amounts of compute at will, shouldn't there be a distinction between 'hard' and 'easy' problems? Barring shortcuts that decrease the computational difficulty of a given problem by many OOMs, but again, presumably those don't exist in every case, right? And even discovering those shortcuts could be 'hard' problems in and of themselves, dependent on other advances.
Thanks for the clarification. Yes, I certainly agree they're wrong in their overconfidence; you need all three of "metastable Butlerian Jihad can't happen even considering warning shots or WWIII interrupt", "NNAGI can't be aligned" and "unaligned AGI can definitely destroy all humans", and the first two are beyond anyone's ability to predict with the 99.9%+ confidence you need for "don't even bother trying long-term things".
(My P(AGI Doom) is under 50%, because while I consider the third practically proven and the second extremely likely, I think Butlerian Jihad is actually pretty likely - AI that probably can't be aligned is not a Prisoner's Dilemma but rather a Stag Hunt or even Concord, and neural nets seem like they could plausibly be non-RSI enough to allow for warning shots.)
Technically, no, there are other limits in play there. Simulating every atom runs into quantum limits on determinism, and there's the halting-problem/Gödel-incompleteness issue where you're trying to predict a system that can potentially condition on your predictor's output (if you ask a predictor to predict whether I'll choose X or Y, but I can see the prediction before choosing, I can perform the strategy of "choose X if predictor says Y; choose Y if predictor says X" and thus thwart the predictor). Technically, both of those do apply to "predict the weather with perfect accuracy" (the latter because your computer itself and/or people acting on your forecast generate heat, CO2, etc.), but AIUI you could probably get to a year without the perturbations from those wrecking everything.
(Also, even if you handwave the quantum problem, there are a lot more OOMs involved in atom-perfect simulation, enough such that even at theoretical limits you're talking about a computer bigger than the system being simulated.)
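The self-defeating-predictor point above can be shown with a toy example (entirely my own construction, with hypothetical names): any predictor whose output the agent can see before acting is beaten by a simple contrarian strategy.

```python
def contrarian_choice(prediction):
    """Agent strategy: do the opposite of whatever was predicted."""
    return "Y" if prediction == "X" else "X"

def predictor_wins(predictor):
    """The predictor announces its guess first, then the agent moves."""
    guess = predictor()          # prediction is visible to the agent
    actual = contrarian_choice(guess)
    return guess == actual

# No fixed prediction can be right against a contrarian agent:
assert not predictor_wins(lambda: "X")
assert not predictor_wins(lambda: "Y")
```

This is the discrete core of the argument; the weather version is only murkier because the feedback path (heat, CO2, behavior changes) is slow and noisy rather than instantaneous.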
No, it requires an increase in the computational resources allocated to the problem of 20-30 OOMs; the difference is that AIUI not very much of current compute is used for weather prediction (because MOAR COMPUTE is pretty marginal in effectiveness until you get to the point of not having to parametrise coalescence, and within certain bounds the weather's not all that important), so the hardware for a few of those OOMs already actually exists.
That's all really interesting! Thanks a lot for the explanation, I appreciate it.
I'm not advocating omniscience or omnipotence but I reckon the power-scaling goes very high. In sci-fi, Time Lords >>> Xeelee >>> Star Wars > Mass Effect.
All of them could stomp us without much effort. AI doesn't need to be omnipotent to subjugate or destroy humanity, it only needs to be significantly stronger. Even if it can't predict a chaotic system like weather, it probably could control the large-scale outcomes by cloud-seeding or manipulating the reflection/absorption of heat. Even if it can't predict the economy, it could hack stock exchanges or create outcomes directly by founding new companies/distributing powerful technology.
I too agree that it's no good giving up before all the cards are revealed. But I don't think that pushing back against omnipotent AI is beneficial. It doesn't matter if the AI doesn't know how to achieve FTL travel, or if it can't reach the fundamental limits of energy/compute with some black-hole plasma abomination. That's not needed to beat us.
From our point of view, superintelligence will be unbeatable; it may as well be omnipotent. I am highly confident that we won't be able to detect its presence or malignity until it's far too late. Once we do, we won't be able to act quickly enough to deal with it or even communicate securely. If we even get to fighting, I expect it to run rings around our forces tactically and strategically. Maybe that's nanomachines or mosquito-robot swarms or more conventional killbots and nukes. Maybe it's over in hours, maybe it takes a slow and steady path, acting via proxies and deceit to cautiously advance its goals.
From reading (almost) the entire Sequences on LessWrong back in the day, it's less 'omnipotent' and more 'as far above humans as humans are above ants.' There are hard limits on 'intelligence' if we simply look at things like the Landauer limit, but the conceit seems to be that once we have an AGI capable of recursive self-improvement, it'll go FOOM and start iterating asymptotically close to those limits, and it will start reaching out into the local arm of the galaxy to meet its energy needs.
It's not too hard to imagine: if the stories about John von Neumann are accurate, then maximum human intelligence is already quite powerful on its own, and there's no reason to think human brains are the most efficient design possible. If we can 'merely' simulate 500 von Neumanns and put them to the task of improving our AI systems, we'd expect rapid progress, no?
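For concreteness, the Landauer limit mentioned above is easy to compute; the 300 K operating temperature is my own illustrative choice:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in the 2019 SI)

def landauer_limit_joules(temp_kelvin):
    """Theoretical minimum energy required to erase one bit of information."""
    return K_B * temp_kelvin * math.log(2)

# At room temperature, erasing a bit costs at least ~2.9e-21 J. This floor
# is commonly cited as many orders of magnitude below the switching energy
# of today's transistors, which is the sense in which the physical ceiling
# on computation sits far above current hardware.
print(landauer_limit_joules(300))
```

So "hard limits exist" and "there's enormous headroom before we hit them" are both true at once, which is exactly the FOOM crowd's point.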
It's a good analogy, but imagine if AlphaZero, whose sole goal was 'win at chess,' were given the ability to act beyond the chessboard. Maybe it offers you untold riches if you just resign or sacrifice your queen. Maybe it threatens you or your family with retribution. Maybe it acquires a gun and shoots you.
I do worry that humans are too focused on the 'chessboard' when a true superintelligence would be focused on a much, much larger space of possible moves.
One thing that worries me is that a superintelligence might be much better at foreseeing third or fourth order effects of given actions, which would allow it to make plans that will eventually result in outcomes it desires but without alerting humans to the outcome because it is only in the interaction of these various effects that its intended goal comes about.
Certainly, I'm more focused on the 'escape velocity' argument, where an advance that gets us another ten years of healthy life on average makes it that much more likely that we'll be alive for the next advance that gives us 20 cumulative additional years of life, which makes it more likely we'll be around when the REALLY good stuff is discovered. I haven't seen any 'straight lines' of progress on extending lifespan that suggest this is inevitable, though, whereas I CAN see these with AI and demographics, as stated.
An interesting tactic I could see working is trying to expand dogs' lives, because NOBODY will object to this project, and if it works it should, in theory, get a lot of funding and produce insights that are in fact useful for human lifespan. So perhaps we see immortal dogs before immortal mice?
I am not surprised there'd be a grifter problem, because it is really easy to 'wow' people with scientific-sounding gobbledygook, sell them on promises of life extension via [miracle substance], and get rich all while knowing they won't know they've been had until literal decades later when they are still aging as usual. I also somewhat hate that cosmetic surgery and other tech (like hair dye) is effective enough that someone can absolutely make the claim that they're aging slower than 'natural' but in reality they just cover up the visible effects of aging.
Finally, on this point:
This is a bit different because, while we can't necessarily know the upper limit on the Earth's carrying capacity for humans, we sure as hell know it's possible for the human population to go to zero. It's safe to say that population growth will reverse because eventually we hit a limit. But I don't see any inbuilt reason why population decline need reverse anytime soon.
And Zeihan's strong argument is that even if we start pumping out kids today, it'll be 20 or so years before this new baby boom can even begin to be productive, so we're still in for a period of strain during that time where we lose productive members of society to retirement and death, and are spending tons of money on raising the next generation, meaning the actual productive generations have to provide support for both their parents and their own kids and may not be able to invest in other productive uses of capital. Which would imply a period of stagnation at least.
That is, we can't instantly replace a declining population of working-age adults merely by having more kids now since kids take time to grow and become productive. So a lot of the suck is already 'baked in' at this point, where a reversal in the trend doesn't prevent the actual problem from arising.
Purebred dogs are similar to inbred mouse lines and it's difficult to extrapolate what would happen trying to translate that data to humans. Their compounds are just IGF-1 inhibitors and these kinds of metabolic inhibitors have already been shown to not translate well to outbred non-human primate (NHP) models. This approach is analogous to the telomerase knockout mice living longer if you replace telomerase (i.e. large dogs have more IGF-1, decrease IGF-1 activity and you may be back in line with smaller dogs).
They'll probably make bank selling life extension to dog owners though.
We're very bad at both understanding and manipulating complex traits. Beyond this, aging just seems categorically different in a way we haven't grasped yet. I doubt I'll be able to convince you on this point though.
That's because they don't exist, and it certainly isn't inevitable. We've made minimal progress understanding aging (for all the bullshit that gets published), and no progress in treating it.
I expect them to make rapid progress in software, computer hardware, engineering, and other domains where humans have rationally designed systems that are entirely understood. I expect them to struggle with the social sciences, psychology, economics, and to a large extent biology. If Magnus Carlsen is to AlphaZero as Noam Chomsky is to SociologyGPT6... what does that even look like? Not Hari Seldon, but what, exactly?
I'm not sure what to expect for physics, chemistry and math.
I read chunks of them as well and think they pulled the wrong lessons, but regardless, my argument is that there are relatively hard limits on what intelligence can simulate/manipulate in the physical world (i.e. ASI won't simply be omnipotent), rather than limits on how much compute you can throw at a problem. What if your ASI says: sorry, simulating a human body or tissue is too computationally intensive even for me; I can't do this deterministically. Here's an experimental plan that I can guarantee will help make you immortal: it just requires breeding a billion mice, testing these trillion compounds, and then moving some candidates into a comparatively modest 100,000 NHPs. What if, in other words, your Multivac tells you the meaning of the universe is 42, or that there is as yet insufficient data to answer your question as it chugs along?
I'm also curious whether there will be diminishing returns to adding more training data as it runs out of human knowledge to learn, and whether turning it into an agent that can reason from first principles rather than regurgitating scientific reviews to me will be as trivial as everyone assumes (image classification seems like it may be an interesting reference class, but I could just be an ignorant moron several degrees removed from the ground on ML).
I'm on the team predicting that the bigger issues will first come from malevolent human intelligences that have already centralized too much power, using AI to gain even more power and influence, reduce the independence and rights of broader population groups, and impose their agenda on them. As for AGI, that's for the future. We shouldn't neglect the risk posed by those who control the AI and may want to shut down competition, since it is a more pressing and less hypothetical problem.
Right, although this gets into the fact that we'll get AGI one way or another because various parties will keep racing to achieve it because of the massive advantage it confers on the person who 'controls' it.
It'd be a very unlikely case where one party gets an AGI that allows it nigh-complete control of a given region of the globe, and yet it could not then prevent some other nation from pushing ahead to a full-on dangerous superintelligence.
Doesn't totally answer your question, but consider disabusing yourself of the notion that there's any chance of a robotics revolution in our lifetimes. Costs of robotics go up with degrees of freedom, power density, and sensory complexity. Human hands are the standard interface for all tools used by manual labor. Human hands have 27 degrees of freedom, can exert over 100x their weight, and can regularly sense micron (irregularly: submicron) texture. >75% of all non-industrial grippers in the literature can't operate tools with index finger trigger switches. The bare minimum requirement for replacing blue collar labor is making grippers with close to human hand functionality, mass producing them on a robot that can move and work anywhere a human can, and selling it for less than the cost of a fighter jet. You'll notice that I haven't even gotten to the rest of the robot yet.
If AI is any significant part of the next few decades, your new job will be physically laboring for it. There's rather a lot to do, and plenty of now-unemployed white collar workers to keep occupied...
One of the 'hedges' against AI job loss I inadvertently made over the past several years was becoming a self-defense instructor, which is almost entirely dependent on being physically dexterous in the real world.
Naively, I'd imagine it will be a long time before there are robots able to teach and demonstrate martial arts techniques, especially when teaching them requires physically interacting with and throwing other humans around, because a human needs to learn from an instructor analogous enough to a human that they can easily imitate its motions.
So yeah, I've noticed the massive differential between how effectively current AIs and LLMs manipulate bits vs. atoms. The big one being the fact that full self-driving cars are still struggling to navigate a vehicle around in the real world, which is a skill many humans develop by age 18.
But this seems like one of those obstacles that will seem insurmountable until suddenly it is not.
My own personal bellwether on this issue: when autonomous Formula 1 cars start beating human drivers, I'll notice, and worry.
The current state, however:
https://www.theverge.com/2024/4/27/24142989/a2rl-autonomous-race-cars-f1-abu-dhabi
https://a2rl.io/news/28/man-beat-machine-in-the-first-human-autonomous-car-race
https://techcrunch.com/2024/04/30/inside-autonomous-racing-league-event-self-driving-car-against-formula-1-driver/
I do think that AI proponents seem... premature in crowing about how powerful their creations are when those creations aren't very good at making things actually happen in the real world.
In industrial robotics, there are two ways you get consistency, reliability, efficiency, and speed:
Manufacturing automation designs machines that break complex assembly problems into many separate sub-problems that can be solved by simple motions under a strict set of inputs. Much of the complexity of these machines is in developing schemes to guarantee the shape, weight, orientation, and velocity of pipelined precursors. Nearly all will "fail gracefully" under some unplanned set of input conditions at any stage in the pipeline - in other words, give up and complain to a human operator to fix the problem.
The value of AI in robotics is that it can help plan motion in uncontrolled environments. This motion could be simple or complex, but most examples you'll see are simple. For industrial robotics, this might look like machine vision and simple actuators to adjust orientation of inputs, or automated optical inspection to check for defects on outputs. But the whole value of automation is improvements over human benchmarks on the metrics listed above, and given the choice between designing a general purpose robot or a highly specialized machine, the specialist almost always ends up simpler, cheaper, and better at what its designers want it to do.
Self-driving cars are one of a small handful of applications where the mechanics are straightforward, but the environment is chaotic. The moving parts are all outrageously simple, even for racecars: the wheels tilt to steer, the wheels roll to accelerate, the brakes clamp to decelerate. The mechanisms that make each of these motions happen have a century of engineering behind them, of which many decades have been spent enhancing reliability and robustness, optimizing cost, etc. The only "hard" problem is safely navigating the uncontrolled environment - which makes it a slam-dunk next-step, since the unsolved problem is the only problem that needs focus.
The average blue collar laborer combines dozens of separate actuators along many degrees of freedom to perform thousands of unique complex motions over the course of a workday. I have no doubt that advances in AI could plan this kind of motion, given a suitable chassis - but actuators with power comparable to their human analogues are physically infeasible to manufacture in the size and form factor of a standard human body. Take a look at the trends in motor characteristics for the past few decades, particularly figure 8 (torque vs. weight): neodymium magnets and solid state electronics made brushless DC motors feasible, which greatly improved power density and efficiency, but only modestly enhanced the torque-to-weight ratio. At the end of the day, physics and materials science put limits on what you can manufacture, and what you can accomplish in a given volume. And the kinds of machines we can improve - mostly motors - have to translate their motions along many axes, adding more volume, weight, and cost. Comparatively, human muscle is an incredibly space-efficient, flexible linear actuator, and while we can scale up hydraulics and solenoids to much greater (bidirectional!) forces, this comes with a proportional increase in mass and volume. This actually isn't so bad for large muscles like arms and legs, but for hands (i.e. the thing we need to hold all the tools) there just aren't many practical solutions for the forces required across all the required degrees of freedom.
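To put rough numbers on that mass-budget argument, here's a crude sketch. Every figure below is an illustrative assumption of mine, not a real spec; real designs trade these parameters against each other:

```python
# Illustrative-only parameters (assumed, not measured):
FORCE_PER_KG = 500.0     # N of output force per kg of motor + gearing
GRIP_FORCE_N = 400.0     # ballpark adult power-grip force
DOF = 27                 # degrees of freedom in a human hand
LOADED_FRACTION = 0.25   # assume only ~1/4 of joints must carry full force

# Actuator mass if each loaded joint needs its own force-proportional motor:
actuator_mass_kg = (GRIP_FORCE_N / FORCE_PER_KG) * DOF * LOADED_FRACTION
print(f"{actuator_mass_kg:.1f} kg of actuators")  # 5.4 kg of actuators
```

Even under these fairly generous assumptions, the actuators alone come out around the ~5 kg quoted below for state-of-the-art hands, roughly an order of magnitude heavier than a human hand.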
In terms of what could suddenly change the equation, I suppose there are a few things to watch out for:
My bet is on neither of these things happening any time soon. Basically every university in the world has an artificial hand or two under development, and they all suck. State of the art routinely costs six figures, weighs 5 kg, and moves slowly even in 4x-speed promo videos - it's been this way for decades and it isn't really getting better. Human hands enjoy a massive, durable nanomachinery advantage.
By recognizing that even if the global economy tears itself apart, there are parts of it that will not only survive but relatively thrive as islands of relative stability. Deglobalization is to some degree inevitable - it's already occurring and even accelerating in many contexts, and macro trends suggest more to come - and as things deglobalize, producers and investments will shift. Investment flows go first to where they will be safe, then to where they will grow, and the regions that are relatively safe will get injections to grow further. The industries that support high-technology development will be disrupted, not destroyed, and then they will relocate.
When they relocate, they will generally consolidate along new lines of production less prone to disruption. The most likely candidates are the American alliance system, as a market-driven model with the most access to the pre-deglobalization market connections, and a PRC state-driven model, brute-forced with state spending along with its generally mercantilist approach.
Diversification is nearly always the safer option, but if you wanted to minimize risk you wouldn't be framing it as financial bets.
If you want to bet, then bet on the west, specifically the North American market. Its political problems are not unique, its security threats are relatively marginal, its demographic problems are fundamentally distinct from the depopulation trends of its primary strategic rivals, and those rivals are far more limited than they are often perceived to be - the classic consequence of deliberate propaganda states. It's also likely to be one of the primary beneficiaries of the upcoming deglobalization.
Zeihan specifically singles out Ohio as a location where tons of investment is flowing and will continue to flow, including building up the capacity to make advanced chips.
And damn, if you are a high-tech company worried about deglobalization impacting your ability to manufacture or receive high-end chips, sticking your manufacturing base DEEP in the American heartland (where it still has access to river networks to allow export of the finished product) is about the safest possible bet you could make. Regardless of what happens, if any foreign power wanted to invade you they'd have to come onto the American mainland, which is a nonstarter. Can't even fire missiles at it from the ocean.
And in Ohio you've at least got guaranteed access to locally-sourced food and energy.
So I can see there being a version of the future that gets destabilized by war, famine, or disease, but after some adjustment (which may take a couple decades) manufacturing returns close to previous trendlines and we STILL manage to achieve AGI this century.
I urge caution on taking Zeihan seriously. He's very charismatic; I watched one of his talks, and the discussion of rivers and geography was quite interesting.
But is he actually right? The man was predicting 'collapse of China in 5 years' for about the last 20 years. He's been predicting 'America number 1 as the rest of the world collapses' for ages. And that's not the world we're seeing. His core thesis wasn't just wrong, it was the opposite of what actually happened. It's not 'America retreats inwards in splendid isolation as everyone else fights, world sea trade collapses along with China', it's 'American relative power is diminishing as China, Russia, Iran work together to pressure and undermine the US world order, which America bitterly defends'.
Whenever you see Zeihan you should think 'what if he's just completely wrong'. And I think he's wrong specifically here too. If we're talking AGI by 2027 or 2030, then demographics doesn't really matter. War may well delay AGI but AI is now a part of military development. China has their robot gun dogs. Israel has their AI targeted airstrike program. Everyone wants the targeting and sensor edge. High-tech wars are won by sensors and software and AI is clearly important for both. China has Made in China 2025, high-tech development to develop new industrial forces. The US has the CHIPS Act. The race is on!
Population aging starts really mattering by mid-century. It already has some effect, of course, but how could it severely slow AI development, which is happening on a year-to-year level? And there are clear countermeasures to aging-induced malaise. People can reduce the consumption of the old. Nations can raise fertility. It's really not that hard. People naturally want to have children. It's only that vast cultural energy goes into suppressing this urge - TV, movies, memes and so on all create an expectation that young men and women should spend their most fertile years in higher education and work, not raising children. It's a cultural issue that needs a cultural solution.
Affirmative action for parents in the workforce and education. Glamourize parenthood. Return to traditional marriage, encourage devout religion. Anything but these measly subsidies.
I read his most recent book, and I find that he seems to be directionally right about most issues - the slow death of German manufacturing/industry, for example.
In fact, part of his thesis in the book is that American power does recede because the U.S. stops being very concerned about what happens beyond its borders and immediate sphere of influence.
And I have yet to find a good counterargument to his primary thrust, which I read as follows:
All of this seems very straightforward and 'baked in' at this point.
So all it would take is the U.S. to be unable or unwilling to keep the sea routes safe for international trade to break down the systems that allow advanced economies to exist in countries without local energy or raw materials or capital reserves. If the giant cargo ships can't safely travel then EVERYTHING gets more expensive.
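One way to see why "EVERYTHING gets more expensive" is that a freight surcharge doesn't hit once - it compounds at every shipped stage of a supply chain. The stage count and percentage below are made-up illustrative numbers, and the model pessimistically assumes each leg's cost increase passes fully into the next stage's price:

```python
# Toy model: a per-leg shipping cost increase compounds once per shipped
# stage (ore -> refined metal -> components -> assembled device).
# All numbers are hypothetical, for illustration only.

def landed_cost_multiplier(stages: int, per_leg_increase: float) -> float:
    """Overall cost multiplier after `stages` shipped stages, assuming each
    leg's increase passes fully through to the next stage's price."""
    return (1 + per_leg_increase) ** stages

# Four sea legs, each 15% pricier (insurance, rerouting, convoy fees):
print(landed_cost_multiplier(4, 0.15))  # roughly a 75% increase, not 15%
```

The real pass-through is smaller (freight is only a fraction of each stage's cost), but the compounding direction is the point: long, multi-leg supply chains amplify shipping disruption rather than absorbing it.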
I have yet to see evidence that this actually works under our current technological and economic regime.
And I am not sure how you'd convince all the women who currently enjoy massive privileges to live their lives free of any real 'obligations' to accept having 3 kids each to bring things back on track, when they can simply vote for a regime that will support them regardless.
Simply put, I find Zeihan's thesis, even if he gets the timing wrong, more compelling than the alternate thesis: that a massive overhang of elderly people who cannot produce value but consume huge portions of it via medical care and such, atop a continually shrinking working-age population, will somehow NOT cause serious strain.
The key thing is that the US isn't indispensable here. The Houthis aren't blocking shipping generally, they're targeting Western-aligned shipping because there's a global struggle for power between two power blocs. The US does the exact same thing as the Houthis with sanctions against its enemies. Sometimes they seize the ships as opposed to flinging missiles at them but the result is basically the same.
It's not that the US pulls back and the whole thing collapses into anarchy. If the US pulls back, other powers will replace America in setting rules and norms. That's why the US isn't pulling back. There are great advantages in being the strongest great power. The buying power of the USD is propped up by American military power. We have the Washington Consensus (so named because the World Bank and IMF are based in Washington) and the UN based in NY. The US is clearly very concerned about far-flung places like Ukraine or Taiwan. The former isn't important to US interests, but it is important for US prestige and dominance in world affairs. The latter is very important for US interests; losing Taiwan and possibly South Korea would be catastrophic for America.
America has gotten used to importing cheap manufactured goods from China and exporting little bits of paper to pay for them. America has gotten used to sanctioning everyone else for poor behaviour, attacking countries without facing serious consequences. That's not baked into the universe, that's an arrangement based on changeable power distributions. The British used to set rules and control the seas. The US took over that role. China could take that role, they have a much bigger maritime industry than the US does. They're the biggest trading nation, they're naturally interested in controlling sea lanes and trade routes.
Highly religious groups have high fertility, this is pretty straightforward!
The alternate thesis isn't 'growing number of elderly people soaking up resources doesn't cause problems' but 'states will take action to prevent elderly people consuming the resources'. Eventually people will break out of the neoliberal fantasy that fiddling with subsidies will raise fertility, or that universal basic income is solely reserved for the old.
They're not a particularly good candidate because they have a harder time projecting power, especially into the Atlantic.
And their demographic problems are even worse and more advanced than the West's... and that's just what they admit. I have no doubt the CCP could attempt some crazy political solutions. But as I mentioned elsewhere in here that still requires 20+ years to raise the children of that new baby boom to the age where they can become productive.
Yes, the Amish, having entirely rejected modern cultural, technological, and economic norms, are doing fine here.
But the majority of us are living with the standard set of such norms and have to navigate the system where others hold these norms or similar versions.
I don't think there's a policy prescription I've yet seen which would manage to bring modern society's fertility levels up to those of the devoutly religious without also impacting material conditions in a way that lowers the standard of living.
Now, that tradeoff may be worthwhile, but good luck selling it.
But that's why the demographic issue is concerning. Maintaining the order is hard when an aging, below-replacement population causes intense economic strain and cannot produce the output needed to keep the economy at a level that can field an effective naval force. Ukraine's demographics are already impacting its ability to field an effective military, and they will probably never recover.
The U.S.' military capacity is not immune from this.
Like, this is the point. This scenario is historically unprecedented. Past cases where the human population declined rapidly usually involved economic collapse.
I've yet to see ANY example from history where human population went into a steep decline without economic fallout attached.
The U.S., if it is suffering economic strain from such a decline, could be rendered unable to intervene if conflicts start breaking out around the globe, and such demonstrated failure would only encourage further defection. The limits of U.S. hegemony have been on display since the withdrawal from Afghanistan.
And if the U.S. itself is self-sufficient for food, energy, and manufacturing, surely the motivation to keep spending time and effort maintaining the order will sink, too.
And I'm trying not to catastrophize here, but I keep asking for some reasonable solution that has demonstrated success in the past, and nobody has actually provided one.
So my priors would suggest that we have gotten used to being in an era of prosperity that is anomalous in the historical record, and the effort needed to maintain this prosperity could easily outstrip our capacity without some drastic intervention. Such as AGI.
This feels like a false dichotomy. Most people are having children; we just need to shift people toward having marginally more, not going from 0 to 3. Women's incentives as a group, if anything, go the other way: since the vast majority are having children, "punishing" defectors should be an easy sell if packaged right.
Yeah, but then you have the % of those that are born out of wedlock, or whose parents ultimately divorced, and the median age at first child is pretty damn high, which suggests that family formation is struggling in some ways.
I might believe we could solve the birth rate issue in aggregate by paying women to pump out kids, but that would have some foreseeable second-order effects that might be problematic on their own.
I think the birthrate issue is multidimensional. I believe the evidence that people WANT to have kids, but I can't ignore that so many are delaying the decision or finding themselves unable to achieve it, so I'm left confused (except, not really) as to why revealed preferences differ so much from stated preferences on this issue.
As tech advances, won't it take fewer people and less resources to "close the gap" to AI? Say Silicon Valley is on course to reach AGI in 20 years at current R&D rates. If, in 10 years, Silicon Valley has shrunk to half its current R&D rate, you can still hypothetically get to AGI; it would just take longer.
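The arithmetic in that comment can be sketched directly. The 20-year horizon, the slowdown at year 10, and the halving are the comment's assumptions, not forecasts; the model treats "progress toward AGI" as a fixed quantity that accrues at a rate:

```python
# Toy model: AGI requires a fixed amount of research progress, accrued at
# the current rate until a slowdown, then at a reduced rate afterwards.
# All parameters are hypothetical, taken from the comment's example.

def years_to_target(total_years_at_full_rate: float,
                    years_before_slowdown: float,
                    slowdown_factor: float) -> float:
    """Years to accumulate progress that would take `total_years_at_full_rate`
    at the current rate, if the rate is multiplied by `slowdown_factor`
    after `years_before_slowdown` years."""
    remaining = total_years_at_full_rate - years_before_slowdown
    return years_before_slowdown + remaining / slowdown_factor

# 20 years of needed progress; the R&D rate halves after year 10:
print(years_to_target(20, 10, 0.5))  # -> 30.0 years instead of 20
```

So under these assumptions a 50% R&D contraction halfway through delays AGI by a decade rather than preventing it, which is the comment's point: slowdowns stretch timelines, they don't zero them out.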
Right, but all AI development right now is predicated on the ready availability of high-end computer chips.
And there's currently a bare handful of companies in the world that design such chips, and functionally only one that manufactures them.
And the existence of that one company is predicated both on NOT being invaded by outside powers, and on there being a stable global trade order that can supply them with all of the extremely delicate inputs they need to create the chips.
This is the most advanced manufacturing capacity the human race currently has, and thus it is also particularly fragile.
These chips are at the very tip-top of a VERY tall infrastructure pyramid. If any of those earlier inputs is disrupted then these chips cannot be produced, at least not at scale.
In short, Taiwan cannot produce chips without a steady supply of food, energy, and refined materials from the rest of the world. And if Taiwan can't produce chips due to a breakdown in global trade, it is unclear who could step in and fill their shoes.
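The fragility of sitting at the tip of a tall pyramid can be put in rough numbers: if production depends on many independent critical inputs, the chance that all of them are available is the product of their individual availabilities. The input count and reliability figures below are invented for illustration:

```python
# Sketch of supply-pyramid fragility: availability of the final product is
# the product of the availabilities of its independent critical inputs.
# Input counts and probabilities are made-up illustrative numbers.

def availability(n_inputs: int, p_each: float) -> float:
    """Probability that every one of n independent critical inputs is
    available in a given period, each with probability p_each."""
    return p_each ** n_inputs

# 100 critical inputs, each 99% reliable in a stable trade order:
print(round(availability(100, 0.99), 3))
# The same 100 inputs at 95% reliability under trade disruption:
print(round(availability(100, 0.95), 4))
```

Independence is an oversimplification (real disruptions correlate), but the exponent does the damage either way: a small per-input reliability drop collapses the odds that the whole chain holds together, which is why the most advanced manufacturing is also the most fragile.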