The most common response to "AI took my job" is "don't worry, soon the AI will take everyone's jobs, then we'll all have UBI and we won't have to work anymore." The basic thesis is that after the advent of AGI, we will enter a post-scarcity era. But we still have to deal with the fact that we live on a planet with a finite amount of space and a finite stock of physical resources, so it's hard to see how we could ever reach true post-scarcity. Why don't more people bring this up? Has anyone written about this before?
Let's say we're living in the post-scarcity era and I want a PlayStation 5. Machines do all the work now, so it should be a simple matter of going to the nearest AI terminal and asking it to whip me up a PlayStation 5, right? But what if I ask for BB(15) PlayStation 5s? That's going to be a problem, because the machine could work until the heat death of the universe and still not complete the request. I don't even have to ask for an impossibly large number - I could just ask for a smaller but still very large number, one that is in principle achievable but would tie up most of the earth's manufacturing capacity for several decades. Obviously, if there are no limits on what a person can ask for, then the system will be highly vulnerable to abuse from bad actors who just want to watch the world burn. Even disregarding malicious attacks, the abundance of free goods will encourage people to reproduce more, which will put more and more strain on the planet's ability to provide.
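To put rough numbers on the "large but still achievable" case, here is a back-of-envelope sketch. Both figures in it are made-up assumptions for illustration, not estimates from any source; the point is only that even a generously automated economy can be monopolized by one greedy request for decades, and BB(15) is unimaginably farther out than anything this arithmetic could ever reach.

```python
# Back-of-envelope sketch: how long a single "large but achievable" request
# would tie up manufacturing. Both constants below are hypothetical assumptions.
ANNUAL_GLOBAL_CAPACITY = 1_000_000_000   # assumed: consoles the robot economy can build per year
REQUEST_SIZE = 50_000_000_000            # assumed: one person's request, 50 billion consoles

years_tied_up = REQUEST_SIZE / ANNUAL_GLOBAL_CAPACITY
print(f"One request for {REQUEST_SIZE:,} units consumes "
      f"{years_tied_up:.0f} years of the entire assumed global capacity.")
# -> 50 years: physically possible in principle, but it starves everyone else's requests.
```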
This leads to the idea that a sort of command economy will be required - post-scarcity with an asterisk. Yes, you don't have to work anymore, but in exchange there will have to be a centralized authority that sets rules on what you can get, in what amounts, and when. Historically, command economies haven't worked out too well: they're ripe for political abuse and tend to serve the interests of the people who actually get to issue the commands.
I suppose the response to this is that the AI will decide how to allocate resources to everyone. Its decisions will be final and non-negotiable, and we will have to trust that it is wise and ethical. I'm not actually sure such a thing is possible, though. Global resource distribution may simply remain a computationally intractable problem into the far future, in which case we would end up with a hybrid system where humans are still at the top, distributing the spoils of AI labor to the unwashed masses. I'm not sure whether that is better or worse than a system where the AI is the sole arbiter of all decisions. I would prefer not to live in either world.
I don't put much stock in the idea that a superhuman AI will figure out how to permanently solve all problems of resource scarcity. No matter how smart it is, there are still physical limitations that can't be ignored.
TL;DR the singularity is more likely to produce the WEF vision of living in ze pod and eating ze bugs, rather than whatever prosaic Garden of Eden you're imagining.
Disclaimer: From previous conversations it's plain to see that you'd prefer AI to fail, and thus argue that it will result in a dystopia, and that you would prefer many outcomes commonly recognized as dystopias to an Eden-like AGI-powered success, even were it proven to be trivially achievable. I am similarly pessimistic about the likely outcomes, but confident that the progress can't be stopped by rhetoric, and oppositely biased in principle.
If we succeed in developing AGI, or in deploying roughly current-gen AIs to their logical conclusion, your scenario won't be a problem.
What you offer is a classic right-wing gotcha against Communism, one based on the potential limitlessness of human desire and on ahistorical thinking. Post-scarcity does not imply the logical impossibility of scarcity, only that we do not run into resource constraints under the consumption regimes that are currently pursued. An old Soviet joke:
– Izya, when Communism comes and everything becomes free, what will you get?
– You actually asking? A fighter jet, duh.
– What can you even use a fighter jet for!?
– Well, use your head. Suppose we here in Odessa learn that they've brought some salt to one store in Vorkuta. Now we can't do anything before it's all taken. And under Communism – I hop into my jet, fly there and get into the queue early!
Actual Communist theory is protected from this by the idea that its desired formation is the product of a long societal evolution resulting in a New Type of Man, who won't be inclined to troll the community with absurd indulgence (and that salt post-scarcity will come much earlier than fighter-jet post-scarcity). But in any case, AGI does not necessarily result in Communism, UBI isn't Communism either, and techno-optimism is informed more by supply-side economic thinking. Where Communist practitioners – especially the Soviets – believed very much in the flexibility of man and the all-surpassing power of indoctrination, and thought they could rush the NewMan-isation process by decree, supply-side thinking is the opposite.
It preaches that technology is mutable whereas society is fixed (a parallel with biology springs to mind immediately). We cannot cleverly reinvent the habits of the social animal that is man, not at any reasonable scale or to good effect; but we can grow the pie so much that even the bottom percentiles get their basic needs met and the more barbaric practices die out on their own. This belief is a product of disillusionment. It is also arguably self-serving when it comes from people around the top percentiles and their court economists, an addendum to their market gospel. But it seems they are the people closest to AGI.
Further, progress in this area is not rapid enough to provide a shock that would enable a deep rethinking of the social order. We will keep consuming new surplus, allocating it in accordance with the current economic process at every step. Structural changes will be slow. For a while, people will keep losing jobs; productivity will keep rising; entrenched IT corporations with «data moats», «data flywheels» and «compute self-sufficiency» will keep gobbling up talent and startups, marginalizing smaller players in more traditional niches, but not displacing state power. Accordingly, they will pay an increasing share of taxes; new welfare programs and bullshit jobs or «reskilling» programs or something will be created, and only after a long intermediate period will this plausibly coalesce into a single institutionalized UBI, which, necessarily, will also be scarce enough that nobody needs to worry about this zany stuff.
I also suspect that a two-tier economy will emerge, with toy money for the little people (UBI consumers and such), modeled after food stamps, gift cards and various game/store tokens; and Adult Money for professionals and their employers in the propertied class. This will neatly help in regulating access to dangerous things like computers that accept arbitrary code, network access points, scientific equipment and weapons.
And as others say, much of the economy will be virtual. Why buy a ton of PlayStations, or even just one really, when you can only play 24/7 at most? And you will be happy doing that, not even owning your state-provided full-body immersive VR set.
But it won't be abjectly tyrannical.
You face real threats even without the pod-bug routine and a weird Communist takeover.
(Incidentally: I tried to record this and transcribe it with Whisper, a model unexpectedly released by OpenAI a few days ago. Whisper is great, but it looks like my Bluetooth drivers were borked. Oh well, typing it is. For a little while more.)
Two quotes.
Sam Altman of OpenAI, now:
What is the new social contract? My guess is that the things that we’ll have to figure out are how we think about fairly distributing wealth, access to AGI systems, which will be the commodity of the realm, and governance, how we collectively decide what they can do, what they don’t do, things like that. And I think figuring out the answer to those questions is going to just be huge.
I’m optimistic that people will figure out how to spend their time and be very fulfilled. I think people worry about that in a little bit of a silly way. I’m sure what people do will be very different, but we always solve this problem. But I do think the concept of wealth and access and governance, those are all going to change, and how we address those will be huge.
[…] Yeah. So we run the largest UBI experiment in the world. We have a year and a quarter left in a five-year project. I don’t think that’s the only solution, but I think it’s a great thing to be doing. And I think we should have 10 more things like that that we try.
Marshall Brain, Manna – Two Views of Humanity’s Future:
“That’s what I wanted to ask about. If everything is free, then what’s to stop me from demanding a 100,000 foot house on a thousand acres of land and a driveway paved in gold bricks? It makes no sense, because obviously everyone cannot demand that. And how can anything be free? That is hard to believe in the first place.” I said.
“Everything is free AND everyone is equal.” Linda said. “That’s exactly how you phrased it, and you were right. You, Jacob, get equal access to the free resources, and so does everyone else. That’s done through a system of credits. You get a thousand credits every week and you can spend them in any way you like. So does everyone else. This catalog is designed to give you a taste of what you can buy with your credits. This is a small subset of the full catalog you will use once you arrive. You simply ask for something, the robots deliver it, and your account gets debited.”
“Let me show you.” said Cynthia. She opened her catalog to a page, and pointed to one of the pictures. It was clothing. “This is what I am wearing.” she said. “See – it is 6 credits. In a typical week I only spend about 70 or so credits on clothes. That’s why I like to wear something new every day.”
“The robots did manufacture Cynthia’s outfit for free. They took recycled resources, added energy and robotic labor and created what she is wearing. It cost nothing to make it. She paid credits simply to keep track of how many resources she is using.”
“Where did the energy come from?” I asked.
“The sun. The Australia Project is powered mostly by the sun and the wind, and the wind comes from the sun if you think about it.”
“Where did the robots come from?”
“The same place Cynthia’s outfit came from. It’s the same thing. Robots take recycled resources, add energy and robotic labor and make new robots. The robots are free, the energy is free, the resources are all completely recycled and we own them, so they are free. Everything is free.”
“The credits simply make sure that everyone gets equal access to the resources. There is a finite amount of power that can be generated on any given day, for example. Things like that. The credits make sure everyone gets an equal share of the total pool of resources.”
“Holy shit.” I said. [...] Page after page after page of products. There were thousands of different types of housing, for example. And they all seemed to fall in the range of 100 to 500 credits per week. Clothing cost nothing. Food cost nothing.
“I’m not getting this.” I said. “I’m not sure I could spend a thousand credits if this catalog is right.”
“Many people don’t spend a thousand credits.” she said. “If you are working on a project you might, but that’s about it.”
“So how do I earn the credits?” I asked.
“Earn?” Linda asked back.
“No no no…” said Cynthia.
“Do you give me a job? The reason I am here is because I have no job,” I said.
“No. You see, it’s all free. By being a shareholder, you already own your share of the resources. The robots make products from the free resources you and everyone else already owns. There is no forced labor like there is in America. You do what you want, and you get 1,000 credits per week. We are all on an endless vacation.”
I am not sure what this means.
Like OP, I also think that the development of AI capabilities will result in a dystopia, just not because of any silly problems inherent to automation and post-scarcity conditions. It's a much more trivial issue of political control: the current de-facto power structure evolving into a singleton once it has acquired tools of sufficient power. Other outcomes, utopian as well as libertarian, are technically if not politically feasible.
Just to note, the Marshall Brain link gives me a privacy error in Brave.
Expired certificate, you can just ignore it.