The most common response to "AI took my job" is "don't worry, soon the AI will take everyone's jobs, then we'll all have UBI and we won't have to work anymore." The basic thesis is that after the advent of AGI, we will enter a post-scarcity era. But we still have to deal with the fact that we live on a planet with a finite amount of space and a finite supply of physical resources, so it's hard to see how we could ever reach true post-scarcity. Why don't more people bring this up? Has anyone written about this before?
Let's say we're living in the post-scarcity era and I want a PlayStation 5. Machines do all the work now, so it should be a simple matter of going to the nearest AI terminal and asking it to whip me up a PlayStation 5, right? But what if I ask for BB(15) PlayStation 5s - an incomprehensibly large number, since the Busy Beaver function grows faster than any computable function? That's going to be a problem, because the machine could work until the heat death of the universe and still not complete the request. I don't even have to ask for an impossibly large number - I could just ask for a smaller but still very large number, one that is in principle achievable but would tie up most of the earth's manufacturing capacity for several decades. Obviously, if there are no limits on what a person can ask for, then the system will be highly vulnerable to abuse from bad actors who just want to watch the world burn. Even disregarding malicious attacks, the abundance of free goods will encourage people to reproduce more, which will put more and more strain on the planet's ability to provide.
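To make the failure mode concrete: any allocator would need some notion of a per-requester budget before it even starts building things. Here's a minimal sketch of such a sanity check in Python - the capacity figure, the fair-share fraction, and the `validate_request` function are all illustrative assumptions on my part, not a description of any real system:

```python
# Hypothetical sketch of a per-request sanity check an allocator might run.
# All names and numbers are illustrative assumptions, not a real API.

GLOBAL_UNITS_PER_YEAR = 20_000_000   # assumed planetary output for this good
FAIR_SHARE_FRACTION = 1e-6           # assumed cap: one millionth of capacity

def validate_request(quantity: int, horizon_years: float = 1.0) -> bool:
    """Reject requests that would monopolize manufacturing capacity.

    Returns True only if the request fits within the requester's assumed
    fair share of global output over the given horizon.
    """
    if quantity < 0:
        raise ValueError("quantity must be non-negative")
    budget = GLOBAL_UNITS_PER_YEAR * horizon_years * FAIR_SHARE_FRACTION
    return quantity <= budget

# A request for BB(15) consoles couldn't even be written down as a machine
# integer in practice; any finite cap rejects absurd requests trivially.
print(validate_request(1))        # True: one console is fine
print(validate_request(10**12))   # False: would swamp global production
```

Of course, the moment you introduce a cap like this, someone has to choose the cap - which is exactly the command-economy problem below.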
This leads us into the idea that a sort of command economy will be required - post-scarcity with an asterisk. Yes, you don't have to work anymore, but in exchange, there will have to be a centralized authority that will set rules on what you can get, and in what amounts, and when. Historically, command economies haven't worked out too well. They're ripe for political abuse and tend to serve the interests of the people who actually get to issue the commands.
I suppose the response to this is that the AI will decide how to allocate resources to everyone. Its decisions will be final and non-negotiable, and we will have to trust that it is wise and ethical. I'm not actually sure if such a thing is possible, though. Global resource distribution may simply remain a computationally intractable problem into the far future, in which case we would end up with a hybrid system where we would still have humans at the top, distributing the spoils of AI labor to the unwashed masses. I'm not sure if this is better or worse than a system where the AI was the sole arbiter of all decisions. I would prefer not to live in either world.
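For a flavor of why allocation could stay intractable: even a toy version of the problem - picking which requests to satisfy under a single shared capacity constraint - is the 0/1 knapsack problem, which is NP-hard, and brute force scales as 2^n in the number of requests. A small illustrative sketch, where the requests, costs, and welfare values are entirely made up:

```python
from itertools import combinations

# Toy model: each request has a resource cost and a (made-up) welfare value.
# Choosing the welfare-maximizing subset under a capacity budget is the
# 0/1 knapsack problem; brute force enumerates all 2^n subsets.
requests = [("housing", 40, 10), ("consoles", 5, 2), ("transit", 30, 8),
            ("medicine", 25, 9), ("luxuries", 20, 3)]
CAPACITY = 70

best_value, best_subset = 0, ()
for r in range(len(requests) + 1):
    for subset in combinations(requests, r):
        cost = sum(c for _, c, _ in subset)
        value = sum(v for _, _, v in subset)
        if cost <= CAPACITY and value > best_value:
            best_value, best_subset = value, subset

print(best_value, [name for name, _, _ in best_subset])
```

Real planetary allocation layers many coupled constraints, dynamics, and contested valuations on top of this, so the toy case is a floor on the difficulty, not an estimate of it.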
I don't put much stock in the idea that a superhuman AI will figure out how to permanently solve all problems of resource scarcity. No matter how smart it is, there are still physical limitations that can't be ignored.
TL;DR the singularity is more likely to produce the WEF vision of living in ze pod and eating ze bugs, rather than whatever prosaic Garden of Eden you're imagining.
No way. The environmentalists can't stop fracking, and even nuclear power is having a renaissance. We aren't going to accept a dismal Malthusian future like OP suggests out of a desire to leave all of the rest of the mass-energy of the universe pristine.
It's about control. The deep state bureaucrats, no matter what state they ostensibly serve - the people who actually matter - aren't going to just let some kooks run away and set up their own civilization out of reach somewhere.
That's insane. Letting people get out of your grasp and do what they want? Who knows what they'd get up to?
Besides, we haven't fixed poverty/suffering on Earth yet, and seeing as poverty can be endlessly redefined and suffering is unavoidable, that excuse will also be used.
This is simply not true. Some people are born with a rare genetic mutation that prevents them from feeling suffering (not to be confused with insensitivity to pain) and live their lives to the fullest, have children, etc. They still get the pain signal but not the associated suffering.
I didn't even have pain in mind, as it's really not relevant, nor a particularly big part of suffering.
Even so, with different genes you could go through life without ever feeling suffering. There is nothing unavoidable about suffering; it's simply a brain function that can be inhibited or suppressed.
That's extremely debatable, seeing as we currently can neither objectively gauge internal states of mind nor modify them.
I feel like you and I are talking about completely different universes.
By the time we are running out of resources on Planet Earth -- OP's premise -- we will long since have developed strong artificial general intelligence, which will be strongly superintelligent by human standards. Ideally we meatbag humans will long since have had our minds uploaded or otherwise emulated on a giant planetary datacenter that the AGI has built. At that point it is presumably the central singleton AGI that will be sending Von Neumann probes into the galaxy, the supercluster, and beyond in order to convert the mass-energy of the universe into an ever-expanding superintelligent hivemind that spans the light cone. Once the entire light cone has been harvested and optimized, and we have tried and failed with those resources at our disposal to avert or circumvent the heat death of the universe, I will be more sympathetic to the claim that there are no more resources left to harvest, and at that point the name of the game will be managing scarcity rather than looking for more. Until then, not so much.
But if you are imagining that, once we have exhausted all of the resources of Planet Earth and maximized our technological progress with them, we will still be meat-based human beings wandering around on our legs, communicating with our vocal cords, and manipulating our environment with our fingers and opposable thumbs, then we have very different understandings of what is possible with the resources at our disposal, and I agree that our two worldviews are unlikely to make similar predictions about the future.
I'm not imagining we're going to do any of that; our most likely fate is extinction and replacement by something else. In the middling-probability scenario where AI doesn't kill us, it's going to be used to cement the existing inverted-totalitarian nonsense so that it endures longer.