Small-Scale Question Sunday for January 12, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

I have to disagree. All of this is ridiculous optimism. You can’t describe the mechanisms, the technology, the research routes. “AI will figure it out”. We have yet to even figure out whether AI trained on human reasoning can get much smarter than us (collectively). AI could automate 95% of human labor and still not even come close to reasonably extending the lifespan of affluent people in rich countries (presumably automated abundance would have a larger impact on the global lifespan, but I’m not talking about that). This is a particularly strange form of AI hyperoptimism (which even I, someone pretty e/acc, balk at) wherein the technology is essentially magic and all we need is a sufficiently advanced LLM and it will literally be able to derive, deduce and synthesize the sum of human knowledge to suddenly find mountains of undiscovered low-hanging fruit that no human being or team of researchers, scientists or capitalists ever even imagined, that will likely turn out to be as simple as some kind of cheap novel cocktail of existing drugs and supplements that we just didn’t realize was actually the key to eternity.

First, the "we can't describe the mechanisms" argument is peculiar. We couldn't describe the mechanisms of most breakthrough technologies before they existed. In 1900, you couldn't have described how digital computers would work. In 1950, you couldn't have detailed how CRISPR gene editing would function. The inability to specify exact mechanisms in advance isn't evidence against feasibility.

But more importantly, we do know many of the mechanisms of aging. We have the Hallmarks of Aging framework. We understand telomere attrition, mitochondrial dysfunction, cellular senescence, stem cell exhaustion, and epigenetic alterations. What we lack isn't theoretical understanding - it's the engineering capability to intervene effectively at scale.

If there's an AI winter around, it hasn't gotten particularly chilly yet. We can still get improved performance by throwing more compute and data at the problem. Most strikingly, the use of large amounts of synthetic data hasn't caused mode collapse, so we're already bootstrapping.

I think the economy hasn't even digested the full consequences of GPT-4, let alone more recent models. o1 and o3 might be remarkably expensive at the moment (which may not last, given the OOMs of cost reduction each model to date seems to experience within its lifetime), but they also demonstrate performance that, for more taxing problems, is worth the expense.
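To make the cost-reduction point concrete, here's a toy back-of-the-envelope calculation. The starting price and the one-OOM-per-year decay rate are assumptions for illustration only, not actual provider pricing:

```python
# Toy illustration: how quickly "remarkably expensive" inference stops being expensive
# if costs fall by roughly one order of magnitude per year.
# Both numbers below are assumptions, not real pricing data.

start_cost_per_query = 100.0   # hypothetical dollars for a hard o1/o3-style query today
ooms_per_year = 1              # assumed: cost drops 10x per year

for year in range(4):
    cost = start_cost_per_query / (10 ** (ooms_per_year * year))
    print(f"Year {year}: ~${cost:,.2f} per query")

# Year 0: ~$100.00
# Year 1: ~$10.00
# Year 2: ~$1.00
# Year 3: ~$0.10
```

The exact figures don't matter; the point is that anything on an order-of-magnitude-per-year cost curve stops being a luxury within a few years.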

We have yet to even figure out whether AI trained on human reasoning can get much smarter than us (collectively).

Take a moment to consider deeply what it even means to be asking that question. Implicitly, you seem to acknowledge that a given model can outperform most individual humans, and often in their core domains to boot. So now the goal-post has moved, and is moving fast enough to achieve escape velocity itself.

15 years back, getting an AI model to identify whether a picture had a bird in it was stunning. (Insert relevant XKCD).

We're also in the middle of a Renaissance in industrial robotics, so it's not like our models are stuck as disembodied yogis either.

Even if AGI only improved modestly, what do you think the implications of having an entity capable of doing knowledge work for far less than minimum wage 24/7 are? Mass unemployment, and probably a lot of economic growth. At the bare minimum, the latter means more money and resources to throw at problems we care about, even the ones we don't seem to care about as much as we should.
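As a rough sketch of what "far less than minimum wage" could look like (the model throughput and per-token price below are illustrative assumptions, not quotes from any provider):

```python
# Back-of-the-envelope: human knowledge worker vs. an always-on model instance.
# The model numbers are illustrative assumptions, not real provider pricing.

human_hourly_wage = 7.25              # US federal minimum wage, USD/hour
model_tokens_per_hour = 500_000       # assumed sustained output of one instance
model_cost_per_million_tokens = 2.00  # assumed USD per million output tokens

model_hourly_cost = model_tokens_per_hour / 1_000_000 * model_cost_per_million_tokens
print(f"Model: ~${model_hourly_cost:.2f}/hour, available 24/7")
print(f"Human: ${human_hourly_wage:.2f}/hour, for roughly 8 hours a day")
print(f"The model is ~{human_hourly_wage / model_hourly_cost:.0f}x cheaper per hour, before counting the extra 16 hours")
```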

Intelligence is powerful. We are still making AI more intelligent, and it's already at the point where it can solve PhD-level math problems and Terence Tao rates it as a mediocre grad student (mediocre in the eyes of arguably the most accomplished modern mathematician) - and that was a statement about an older model to boot.

This is a particularly strange form of AI hyperoptimism (which even I, someone pretty e/acc, balk at) wherein the technology is essentially magic and all we need is a sufficiently advanced LLM and it will literally be able to derive, deduce and synthesize the sum of human knowledge to suddenly find mountains of undiscovered low-hanging fruit that no human being or team of researchers, scientists or capitalists ever even imagined, that will likely turn out to be as simple as some kind of cheap novel cocktail of existing drugs and supplements that we just didn’t realize was actually the key to eternity

Sufficiently advanced technology is indistinguishable from magic. We're landing skyscrapers on their tails after circling the globe; most accounts of magic pale in comparison. Even fiction acknowledges "hard" versus "soft" magic systems, the former bounded by clearly stated laws, the latter doing whatever the author feels like today. I am positing, with reasonable confidence, that even ASI is limited by physics. The world today has more Witchcraft and Wizardry than was ever dreamt of by anyone burned at the stake, or by those doing the burning.

What unlocks more technology? Intelligence. What are we scaling up? Intelligence.

It would be more surprising if there were literally no low-hanging fruit. We make advances every year that turn out to follow from the implications of research and data collected decades ago, where nobody connected the dots until much later. The Efficient Market Hypothesis is not actually, literally true, and the Marketplace of Ideas has no analogue of it at all.

that will likely turn out to be as simple as some kind of cheap novel cocktail of existing drugs and supplements that we just didn’t realize was actually the key to eternity.

You're arguing with a strawman here. I make no such claims. It might well turn out that reversing aging is incredibly expensive and time-consuming even with Singularity tech (even if I think that's unlikely, I can't rule it out). If you told Turing that instantiating the machine god required etching quadrillions of runes on a few inches of silicon, he might balk at that ever happening, not having the luxury of knowing that Moore's law was around the corner. Besides, things that might be disconcertingly expensive for us might well not be so to a much richer and more advanced society.

We could start work on a Dyson Swarm today. It's not particularly hard to build a solar panel and put it in solar orbit. We might even create replicators that speed up the process (humans are von Neumann replicators, after all), and it doesn't take much of a leap in logic to think that AGI might let us do that far quicker than the tens of thousands of years it would otherwise take.
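To see why self-replication collapses the timescale, here's a toy doubling calculation; the panel count, build rate and doubling time are made-up numbers purely for illustration:

```python
import math

# Toy comparison: fixed industrial capacity vs. self-replicating builders.
# All numbers are made-up illustrative assumptions, not engineering estimates.

panels_needed = 1e12          # assumed collectors for an early swarm
fixed_rate_per_year = 1e8     # assumed panels/year from non-replicating industry
doubling_time_years = 1.0     # assumed replicator doubling time, starting from one unit

fixed_years = panels_needed / fixed_rate_per_year
replicator_years = math.log2(panels_needed) * doubling_time_years

print(f"Fixed industry: ~{fixed_years:,.0f} years")        # ~10,000 years
print(f"Self-replicators: ~{replicator_years:.0f} years")  # ~40 years
```

The exact numbers are beside the point: exponential growth in the builders turns a "tens of thousands of years" project into a "decades" one.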

Even if AGI only improved modestly, what do you think the implications of having an entity capable of doing knowledge work for far less than minimum wage 24/7 are? Mass unemployment, and probably a lot of economic growth.

Side point: have you come around to expecting universal basic income, then?

We could start work on a Dyson Swarm today.

Sure, but this is exactly the problem with your argument when you say:

We understand telomere attrition, mitochondrial dysfunction, cellular senescence, stem cell exhaustion, and epigenetic alterations. What we lack isn't theoretical understanding - it's the engineering capability to intervene effectively at scale.

“We understand the basics of how to terraform Mars to make it habitable to humans, and have done since the 1950s, probably before. What we lack isn’t theoretical understanding, it’s the engineering capability to intervene effectively at scale” is indeed a statement that makes complete sense. We don’t know yet, but reversing aging could easily be a ‘terraform Mars’ level problem.

Side point: have you come around to expecting universal basic income, then?

Expecting it if the Powers That Be are benevolent enough to want to maintain or improve the standard of living of the billions of people made obsolete? It seems like a necessity, since I consider it unlikely that baseline humans can be augmented to be competitive with AGI without massive subsidies, and the end result will likely be indistinguishable (I don't necessarily consider this a bad outcome).

Probably true, but not reliably so, and there might well be a period of severe pain along the way. It's well worth preparing for the worst-case scenario that isn't just instant death.

“We understand the basics of how to terraform Mars to make it habitable to humans, and have done since the 1950s, probably before. What we lack isn’t theoretical understanding, it’s the engineering capability to intervene effectively at scale” is indeed a statement that makes complete sense. We don’t know yet, but reversing aging could easily be a ‘terraform Mars’ level problem.

"Extremely difficult problems" encompasses a range of difficulties that extend all the way till literally impossible. I think solving aging is a $200 billion and twenty years problem (give or take a hundred billion or a decade) whereas terraforming Mars is, by most estimates, a $several trillion and a century problem.

I would be rather surprised if we didn't end up with anti-aging by 2050, and most of the probability mass I'd assign to that not happening lies in things like WW3, societal collapse or AI x-risk. In other words, I expect that dying from old age is unlikely for us, and that if we do die, it's because something else got us first.
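As a toy version of how that probability carve-up works (every number below is a made-up prior for illustration, not a forecast):

```python
# Toy decomposition of "if we do die, it's because something else got us first".
# All probabilities are made-up illustrative priors, not a forecast.
# The model treats a catastrophe as precluding anti-aging, per the argument above.

p_catastrophe = 0.20                      # assumed: WW3, collapse, AI x-risk before 2050
p_anti_aging_given_no_catastrophe = 0.85  # assumed: conditional on civilization staying on track

p_no_anti_aging = p_catastrophe + (1 - p_catastrophe) * (1 - p_anti_aging_given_no_catastrophe)
share_from_catastrophe = p_catastrophe / p_no_anti_aging

print(f"P(no anti-aging by 2050) ≈ {p_no_anti_aging:.2f}")                      # ≈ 0.32
print(f"Share of that risk due to catastrophe ≈ {share_from_catastrophe:.0%}")  # ≈ 63%
```

With priors like these, most of the ways you fail to get anti-aging are worlds where something else went badly wrong first, which is the claim above.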

If you google Bryan Johnson, you'll discover a very wealthy guy who turns his whole life into a mission. The goal of the mission is extending the mission. He eats seeds, berries, and protein compounds, all before like midday, then nothing, injects himself with various substances, sleeps a lot, takes various supplements (until he stops taking them), has weird waxy skin, and declares that he isn't going to die. I'd rather be me.