This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
What would be a good outcome for the automation of knowledge work?
Everyone's been talking a lot about both the downsizing of the federal government and the rapid improvement of LLM technology, such that the fake jobs are being cut at the same instant that more jobs are becoming, to some degree, fake. I don't necessarily think that the US government should be a bastion of fake jobs, especially Culture War ones, but at the same time I wonder if there's any end game people like Musk are working toward.
As far as I can tell:
Blue collar jobs are still largely intact. There’s about the same need as there ever was for tradesmen, handymen, construction workers, waste disposal, and so on. Most of the automation in those fields came from vehicles a century ago, and there doesn’t seem to be much of a push to leverage things like prefab construction all that much more. I personally like the new “3-D printed” extrusion style of architecture, but it doesn’t look like it actually saves all that much labor.
Pink collar: Childcare takes about the same amount of labor per child, but there are fewer children. Nursing is in demand, but surely healthcare can only take up so much of the economy. Surely? Retail continues to move online, and we continue to descend into slouchy sweatpants, parachute pants, and the oversized, androgynous look. I would personally like it if some of the excess labor went into actually fitted clothing, but haven’t seen any signs of this. Cleaning services seem to have more demand than supply, with an equilibrium of fewer things getting cleaned regularly than in the past, while continuing to be low in pay and prestige, so I’m anticipating more dirt, but little investment into fixing it.
Demand for performance-based work seems to be going down. It's just as good to listen to or watch a recording of the best person in a field as a live performance by someone less skilled. But were performers ever a large part of the economy?
Middle class office work, knowledge work, words, paperwork, emails: seems about to implode? How much of the economy is this? Google suggests about 12%. That seems like a lot, but nothing close to the 90% of farm work that was automated over the course of the 20th century. This article was interesting, about the role of jobs like secretary, typist, and admin assistant in the 20th century. I tried working as an assistant to an admin assistant a decade or so ago, and was physically filing paperwork, which even then was pretty outdated.
The larger problem seems to be status. What kinds of work should the middle class do, if not clerking and word-adjacent things? There seems to be near-infinite demand for service sorts of work – can we have an economy where the machines and a few others do all the civilizationally load-bearing work, while everyone else walks each other's dogs and picks up each other's food? My father thinks that there's less slack in many of these jobs than when he was younger. I'm not sure if that's true in general, or how to test it.
I don’t necessarily have a problem with a future where most people are doing and buying service work. The current trend of women all raising each other’s children and caring for each other’s elderly parents seems to not be working out very well, though.
A lot of white collar office work could have been automated a while ago; it just wasn't, for status reasons - bosses like having direct reports and customers like having actual people. Likewise, sales will always be there. Regulated professions may be playing solitaire while a computer works uninterrupted, but the board of CPAs will ensure work for their members, even if it's mostly to make sure there's someone to sue when it all goes wrong.
All in all I would estimate that a fairly small portion of work gets automated.
In his 1729 pamphlet A Modest Enquiry into the Nature and Necessity of a Paper Currency, Ben Franklin observed that true wealth consists of the ability to command labor. This framing positions wealth as almost inherently a positional good. That is not exactly true, but it is true that currently 100-inch flat screen TVs, infinite food, and dishwashers can't replace the benefit of human laborers doing your bidding. And clearly, it is impossible for everyone in aggregate to pay others to do more labor than they themselves are doing, as demand would be eternally greater than supply.
Humanoid robots have the potential to break this bottleneck for the first time in history. The true promise of industrial society - to eliminate the positional nature of wealth - will become a possibility. It will happen gradually, then all at once. But above all, it is happening soon, and 99% of people are not mentally prepared, or are in denial.
This will come for blue collar jobs pretty soon too.
Consider meat processing: parting out chicken or pork carcasses is something that’s hard to automate. Every carcass is slightly different, and the nature of the tasks makes it hard to build a machine that will do this with good enough accuracy and low enough waste.
Now, imagine we have robots with flexible arms like humans. Current AI tech solves the image recognition problem, so that the robot understands the carcass like a human does. It also solves explaining the purpose of the task, so that the robot understands the actual purpose of separating thighs or breasts, instead of just mindlessly following programmed moves. Lastly, it solves the reasoning part, so that the robot can plan the task independently for each carcass, and adjust to conditions as it proceeds.
All that remains is integrating these into one working system. This is by no means an easy task: it will still probably take years before the finished product is cheaper and better than an illegal immigrant. However, 5 years ago, the idea of training robots to part out chickens was complete science fiction.
Absolutely not. You think it's basically straightforward because you're human and you take your senses and capabilities for granted.
Imagine that you have to part out a chicken carcass but:
Unless you're planning to cut up the chicken with a circular saw, you also have to figure out how to analyse the structure of a carcass, and how the meat will react under manipulation. This data doesn't exist right now so you're going to have to train it on your own data, which means you need to find a way of obtaining and labelling that data.
EDIT: Sorry if this came across as harsh. I agree that we've gone from 'we have no idea how to approach this problem' to 'solving this is really REALLY hard'. Mostly what I want to say is that "Now, imagine we have robots with flexible arms like humans." is a much bigger deal than you think it is (and not theoretically solved as of now) and I think that training the relevant AI is much harder than you think it is.
I've butchered meat before - nowhere near at the level of a professional processor for Tyson, but enough to have the basics down.
You've correctly itemized many of these challenges. Still, when I see the primitive humanoid robots of today and apply our current rate of technological progress, these all seem eminently solvable very soon.
Likewise, I will bemoan missing the occasionally overstuffed Taco Bell burrito that the blazed-out-of-his-mind fast food worker serves me. But I'll appreciate that my order will be ready when I get there 100% of the time, and that the subtle flavor of racial animus will be missing.
To me the edge cases are going to be home services for a while longer yet, where tight spaces (the ability to suck in your gut) and ingenuity/hacking are going to require that human touch a little longer than food factories.
A humanoid robot is not the right tool for the job though -- what you want is a machine with sharp knives matching the number of joints on a chicken mounted to some kind of press, plus several hooks that can grab the carcass and align it appropriately. (The knives probably need to self-adjust too, depending on the size-consistency of your chickens.)
Machine vision probably helps with this some, but as others have said, "object segmentation" was a pretty solved problem years ago -- and there's no AI anywhere close to performing at the "I need you to cut this chicken apart at the joints, m'kay" level on the foreseeable horizon.
There's a reason why welding bots are not humanoid form -- humans are generalists, bots are not.
We're all speculating here. It's all going to depend on the timing and use cases. But imagine a factory that's sunk millions in capital for their human driven processing.
They can re-do all that with hyper-specialized machines, dozens of vendors, and the nightmare of IT/OT interactions (I'm doing a project on this right now in bottling, actually) - which they probably do every couple of decades.
Or they can wait for a humanoid robot with these capabilities and drop it almost completely in place.
Humanoid robots work with existing interfaces. With sufficient image recognition quality and human-like sensory capabilities, they're going to fit into way more jobs. Think of the difference in outlay between training a single humanoid robot to cut chicken legs (a task doable by illiterate illegal immigrants) and developing and deploying a hyper-specialized machine.
This would be more convincing if humanoid robots existed -- or LLMs were able to control them. If you ask an LLM "how do you break down a chicken?" it will probably give you a pretty good description that a human could follow -- this sort of thing is well represented in its training set. If you ask it for a program to activate the servos of a hypothetical knife-wielding humanoid robot such that a chicken in front of it will be disassembled, it will give you utter trash (if it doesn't demur).
It's a pretty good example of the difference between an intelligence and a language model, actually -- a language model can describe things; an AI can do things.
All that to say, if you want your chicken factory automated, waiting for a humanoid robot so you can drop it into place is not a very effective approach. Buying some machines from the Dutch would work much better.
LLMs != AI. Critical here - the models for understanding physical feedback while cutting aren't going to be built from scraping Reddit.
One thing I will concede is that these hyper-specialized machines are going to have other physical advantages. A humanoid robot will take up humanoid space. When you compare it to the automated cutting machines elsewhere in the thread, the latter have more throughput than a humanoid interface would even at superhuman speed.
Agreed!
(that means that there is no AI at all though -- and the sheer effort/$ being devoted to LLMs is if anything making it less likely that there will be anytime soon.)
I’ve done butchery before- you absolutely do not want this. You need a generalist robot at least to start with.
The Dutch company video somebody linked downthread shows it done with rotating knives and alignment guides, not robots at all -- which seems to work, and is not at all generalist.
The approach I'm imagining involves laying the carcass out flat on a cutting board, holding it with the robot hooks, and slicing off limbs based on the location of the joints as determined by AI(tm). Probably another stage for de-breasting is needed -- or the hooks could take another bite or something.
I don't really claim that this would work well; certainly not better than the machine in the video -- but it would work better than some non-existent humanoid robot attached to a non-existent AGI.
I didn’t say it will be easy. What you describe are real problems. However, they are not as insurmountable as AI was 10 years ago. 10 years ago, there was relatively little investment in touch sensors, because even if you perfected them, there was little you could do with them. Now it is different.
My point is that AI advancements allow us to leap over solving problems by designing tool paths and configuration spaces, and onto solving problems by telling a robot “we need you to cut chicken, look how it’s done and imitate”.
A LOT of stuff is gated behind advances in (imitation) reinforcement learning + real-time adaptation. Especially soft robotics - if you can learn and update the material's dynamics on the fly rather than trying to model them mathematically then I think many doors open.
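As a toy illustration of the imitation-learning piece of that (not anyone's actual pipeline), here's a minimal behavior-cloning sketch in PyTorch: fit a policy network to recorded (observation, action) pairs with a supervised loss. All dimensions and data below are placeholders.

```python
# Minimal behavior cloning: supervised regression from observations to demonstrated actions.
# Placeholder shapes and random "demonstrations" stand in for real logged data.
import torch
from torch import nn

obs_dim, act_dim = 64, 7  # e.g. proprioception + vision features -> 7-DoF arm targets (assumed)
policy = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, act_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

demo_obs = torch.randn(1024, obs_dim)   # stand-in for human demonstration observations
demo_act = torch.randn(1024, act_dim)   # stand-in for the actions the demonstrator took

for step in range(200):
    pred = policy(demo_obs)
    loss = nn.functional.mse_loss(pred, demo_act)   # imitate the demonstrator
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The real-time adaptation part described above (updating a model of the material's dynamics on the fly) is the genuinely hard piece and isn't captured by a sketch like this.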
This seems like the self-driving car redux. They improve, and they improve... and at some point they stop getting better because the remaining problems are intractable.
But the thing about self-driving cars is that they are already better than human drivers. There's just a huge wall of tradition preventing them from becoming much more widespread.
Meat processors aren't going to give a shit if robots can only attain 98% accuracy or something
Sure, but you can take a Waymo if you're in a city they service. It works to the point that self-driving cars are right now successfully driving themselves around a few cities.
With the caveat that these plateaus tend to be bottlenecked by specific problems. AI moves like a glacier - sometimes it sticks, sometimes you get lucky, the pressure shifts something, and then a thousand tons of ice move at once.
I agree 100% with all of this, but in the longer-term, robots will have other offsetting advantages.
Humans have 2 arms and 10 fingers. Robots might have 50 arms and 1000 fingers. Humans have 2 eyes in a fixed position. Robots might have hundreds of eyes, including flexible antennae that can look into crevices and small spaces. Robots can have wheels, they can have saws, they can have radar, sonar, lidar, flashlights, they can fly, they can be tiny, they can be huge, they can swarm with other robots. And they can be iterated on.
One other big challenge currently is latency. Those big reasoning & purpose models are not quick compared to the speeds industrial automation of that sort tends to run at.
Basic image recognition I'd say has gotten there, but not the rest of it.
5 years ago we already had robust image segmentation models based on labelled data (https://paperswithcode.com/sota/instance-segmentation-on-coco). Given the controlled lighting and camera angles on a factory floor, it's definitely a tractable problem with that period's technology.
Of course that period's AI would lack decision making and merely use vision as a mechanism to adjust the tool path with feedback. But processing chickens is quite mindless and mechanical, merely accounting for variation in the size and shape of the chicken. I don't see how modern humanlike AI will help here, when we end up training assembly line workers to be more mechanical.
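For a sense of how off-the-shelf that segmentation piece already is, here's a minimal sketch (assuming torchvision 0.13+) that runs a COCO-pretrained Mask R-CNN on a single frame. The filename is a placeholder, and a real deployment would have to fine-tune on its own labelled carcass images, since COCO has no such class; the masks and boxes are exactly the kind of feedback a fixed tool path could consume.

```python
# Run a pretrained instance-segmentation model on one (hypothetical) factory-floor frame.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = convert_image_dtype(read_image("frame_0001.jpg"), torch.float)  # placeholder image path
with torch.no_grad():
    (pred,) = model([img])  # list of images in, list of prediction dicts out

keep = pred["scores"] > 0.8            # keep confident detections
masks = pred["masks"][keep]            # (N, 1, H, W) soft masks, one per instance
boxes = pred["boxes"][keep]            # pixel-space boxes for a downstream tool-path adjustment
print(f"{len(boxes)} instances above threshold")
```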
I'd guess that building and food-proofing an industrial robot so it can function safely in the factory environment would be the hard part. Even if the software package were already perfected, I doubt you could build the robot cheaply enough to make it worth it.
I believe that things like iPhones are still assembled by hand, even though those are perfectly uniform and well suited to automation.
Isn't this a solved problem?
I'm sure I've seen footage of automated chicken processing in the Netherlands or Germany.
For example https://youtube.com/watch?v=QIciSPOm1h0
No AI required.
Yes, and here's a machine that does the plucking / beheading / de-footing, and claims to also do eviscerating though that's not in the video. All done without AI. There are still humans in the loop hanging up the carcasses onto the machine, so possibly the question is at what point would it become profitable to replace those with automation.
While the capabilities seem impressive, I can't help but notice the difference in quality between those two machines, and the questionable legitimacy of the demo video.
The second machine has simpler, non-moving parts that probably degrade the quality of the product, and its video has jump cuts and shows it moving pretty slowly. I can't believe that selling an expensive machine like that isn't worth paying an American a couple of bucks to read a script instead of using some shitty TTS engine.
That's interesting, I had thought they were farther from automating meat processing. That does sound like a terrible job, anyway.
In America, it's done by parolees (who will be arrested if they don't have a job) and illegals (who have no access to welfare). Law-abiding citizens would rather take welfare.
AI is already imploding the white collar world in other ways than just job replacement. Let me give an example.
AI BDR (business development representative) is one of the roles that most AI agent companies are rushing out because it’s (seemingly) low hanging fruit.
What is a BDR? It’s the lowest sales role that fields inbound requests and does outbound prospecting (cold calls, emails etc).
Cold calling used to be the best way to do outbound until three things happened: 1. email, 2. the decline of the office phone, and 3. robocalling plus smartphones with contact lists, which made answering unknown calls a scourge.
Now phones are just broken as a concept. I never pick up unknown numbers, and now I miss all sorts of important calls, like doctors' appointments.
So, emails. That worked for a while, but it's been an arms race of attention against spam. In the last 6 months it's broken completely. Why? Because there was a really short period of time when AI BDR was a superpower: human-like messaging, custom rather than templated, even personalized with company/contact research at scale.
But the pipe has already been clogged and it's ruined for everyone. In a world of perfect AI, every company that could maybe sell me something can send a handcrafted message to me every single day. That's millions of messages. No one's AI can get through the other's. Email marketing, on both sides of the equation, is over.
There’s no quick fix. AI being good didn’t improve outbound sales for the seller or recipient (except for a short period inside 2024).
It just broke it. AI didn’t replace jobs, it didn’t increase efficiency. It clogged a channel with so much junk it collapsed.
This will happen in other places.
Fair warning, zoomer take here. I seriously wonder why phone numbers even still exist, and especially why they exist with barely any real security, confidentiality, and authentication requirements. Companies use them to verify identities, people call them with personal information, but the system is set up with absolutely no reliable guarantee that who you're talking to is actually the person, and not some bot, spoofed number, or sim-swapped identity thief. And we've taken things like area codes and just destroyed the whole system.
They were great -- if expensive -- I'm told, in the days of Ma Bell. But now they seem like a bolted-on addition to our telecommunications system, which is founded on the internet. And outside of the US, people don't even use SMS!
WhatsApp and Telegram still rely on your phone number. They're just apps that send texts over data rather than as SMS - either because it's cheaper ($0.10/text adds up in countries that don't have unlimited texting) or because of features offered by those apps (e.g. the ability to create groups or whatever).
As a tracking / anti-spam measure, not because it's technically necessary.
I think AI will take most jobs, for sure. But I also think a lot more human interaction is going to have to take place in person from now on, because of how easy it will very soon be to imperceptibly fake even HD video calls.
I think the 'Star Wars future' is a best possible outcome here. The internet is ruined, in-person interaction is the norm, and AI is a lot of idiosyncratic robots who have to talk to each other and to the internet for us, because it's all so incomprehensible to an average human.
Thanks to a competency crisis in IT that's been outsourced to robots, nobody really programs anymore, but we can rely on AI/robots to do that for us. Simultaneously that also forces more things back into a more mechanical world and skillset as people spend more time interacting with their high tech environment materially rather than digitally / through pixels.
Interesting. My impression of advertising is that it was already substantially clogged, to the extent that it hardly matters whether an email is personal or not; in fact, a personalized message from a stranger is actually more suspicious than a normal advertisement - it's probably going to be some kind of scam.
I think my mobile phone company has some sort of spam filter, because the only unwanted calls I get are from politicians in a jurisdiction where I once registered to vote, so plausibly I opted into that.
Lately, I've found myself ignoring or marking as spam pretty much all business emails, and following them on social media instead. This is despite being the sort of person who reads blogs that are basically advertisements. I'll be annoyed when Google reviews, Amazon reviews, and Reddit posts get filled up even more with AI entries, but that was probably going to accelerate even without AI.
Yeah, this narrative seems completely false from my experience. Email has been an abysmal channel for years now. Spam filters killed off much of its efficacy like a decade ago. Gmail's "Promotions" tab annihilated mass-market email advertising well before LLMs.
It's still active in b2b sales/account management work.
It is, helped along by Microsoft's crappy junk mail filter
I should perhaps have added that if I compare email open rates/click through/whatever now versus 2/5/10 years ago in the companies I have worked for, they're pretty much the same. No evidence for any further decline in effectiveness yet
I do still get a lot of ad-type emails in my main inbox, but then, I haven't "trained" Gmail to move some types to Promotions.
A similar sort of arms race is happening in hiring. Resume -> run through LLM to fake effort ('personalize') for a billion jobs -> hiring managers run through LLMs to try to regain sanity in number of seemingly-effortful applications. It's a rather unfortunate Nash equilibrium.
Anecdotally I've seen a lot more 'have any friends that would be a decent fit for this job?' style hiring lately than I have in a long time. And I suspect that may be one of the major outcomes - more web-of-trust style communication.
This is easy to solve: just flip the script. Have the recruiters and hiring managers reach out to people. All you need is a job market clearing house, where job seekers advertise their interest and companies make the first move. The clearing house verifies the identity of each job seeker, to prevent the creation of multiple profiles, and charges companies a fee per contact, so that they don't spam people indiscriminately.
This works because the model has been very common in the tech industry. In my dozen-plus years in this industry, I only ever cold-sent my resume to one company, for an internship. I got that job, and from that point on it was always recruiters reaching out to me.
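A minimal sketch of the mechanics being proposed, purely to make the incentive structure concrete - every name, fee, and field here is hypothetical, not an existing service or API:

```python
# Toy model of a reverse job board: companies pay per contact, seekers are identity-verified.
from dataclasses import dataclass, field

CONTACT_FEE = 25.00  # assumed per-outreach fee, priced to make indiscriminate spam uneconomical

@dataclass
class Seeker:
    verified_id: str            # one verified identity -> one profile, no duplicates
    skills: list[str]
    open_to_contact: bool = True

@dataclass
class Company:
    name: str
    balance: float = 0.0
    contacts_made: list[str] = field(default_factory=list)

def contact(company: Company, seeker: Seeker) -> bool:
    """Company-initiated outreach: allowed only if the seeker opted in and the fee is paid."""
    if not seeker.open_to_contact or company.balance < CONTACT_FEE:
        return False
    company.balance -= CONTACT_FEE
    company.contacts_made.append(seeker.verified_id)
    return True
```

The interesting design question, as the replies below note, is keeping the operator's incentives aligned once the fees start flowing.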
There have been many many many attempts at such a clearing-house.
They all end up falling over sooner or later due to misaligned incentives. Sooner or later someone gets the "bright" idea of charging people for "premium" access. Which can kind of work for a while, until 'premium' turns into 'priority'. And sooner or later someone realizes "wait, we get more money charging month-to-month if people stay on our website instead of leaving because they got a job", and start arranging things to have near-misses as opposed to good fits.
Same cycle that happens in dating markets.
This only really works when there is an oversupply of jobs compared to "qualified" labour.
The purpose of a CV, personal letter, etc. is often more an attempt to pre-empt part of the interview process than to match credentials to job requirements (both of which are often inaccurate). The entry of AIs here makes things less like a clearing house, because you get less useful information before the interviews.
Does this really not exist yet?
LinkedIn. With all the associated pathologies. But you have to be established and have some skills to sell.
Maybe boomer advice will become relevant again, and we'll start having to approach potential employers in person.
For that, you'd have to break HR's stranglehold on the process. Which I'm strongly in favour of, by the way.
Here's one. Google image search is now littered with AI slop.
Another is Google ads. There's an apocalypse brewing here for companies that rely on ad spend for an inbound-driven pipeline. It's falling off a cliff.
Google regular search is also littered with AI slop.
Someone said in a lower thread that the internet will be the first casualty of AI, and I tend to agree.
The only real solution I'm aware of is some form of Universal Basic Income. In other words, if the economy explodes as human cognitive and physical labor is automated, then governments tax it and redistribute it.
This will likely prove unpopular with the people and entities being taxed on their newfound wealth, and it remains to be seen whether governments/democracies will listen to their anxious and unemployed populace over entrenched interests who now hold most of the money and power.
I don't think the likelihood of this happening is high enough for me to relax and take it for granted.
Even if UBI was a thing, that doesn't necessarily mean that inequality wouldn't be. The future uber-wealthy might well be the descendants of those who already had existing wealth, or at least shares in FAANG. I'd take this as acceptable if it meant I wouldn't starve to death.
Blue-collar work won't be safe for long either. We're seeing robotics finally take flight, there are commercial robo-taxis on the road, and cheap robo-dogs and even humanoids on the market. The software smarts are improving rapidly, and so is the hardware. Humans are going to end up squeezed every which way.
There are no reassuring answers or easy solutions, but at least hope isn't lost that we'll come out of this unemployed yet rich beyond our wildest dreams. It only takes a trivial share of the light cone to make billionaires of us all, assuming the current ones will deign to share.
UBI would be a new inflationary pressure, as it directly increases the money supply. Our Federal Reserve would need to treat it as another tool alongside interest rates - if they were to employ it effectively.
(I'm not sure how things work across the pond.)
Why do we need UBI? If AI ends up being as cheap, efficient, and transformative as people want to claim, it should drive down the price of all goods to near 0.
I think what's more likely to happen is that AI compute becomes an effective currency replacement. Rather than using fiat dollars, value would be denominated in the amount of AI runtime and energy it takes to complete a given workload. Assuming AI can replace all jobs and produce a quality of life better than any human machination can contrive, the human inputs for the production of goods and services should be 0.
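A back-of-the-envelope sketch of what "priced in AI runtime/energy" could mean; the numbers are placeholders, not a proposal:

```python
# Price a task by the energy its AI runtime consumes (kWh as the unit of account).
def task_energy_kwh(runtime_seconds: float, accelerator_power_kw: float) -> float:
    return runtime_seconds / 3600.0 * accelerator_power_kw

# e.g. 90 seconds on a hypothetical 1.2 kW accelerator
print(task_energy_kwh(90, 1.2))  # 0.03 kWh "charged" for the workload
```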
The value of goods is not based solely on labor -- that's Marxism. The value of goods is based on scarcity, which cannot be alleviated by AI (even with advanced robotic labor) for two reasons:
1. We live in a physical universe with physical limitations, on a single molten rock with limited, albeit abundant, natural resources. The price of the phone in your pocket is based in part on the physical materials used to assemble it, and any use of those has an opportunity cost. The glass in your iPhone can't be used in someone else's Android. So both the raw materials and the assembled goods have an inherent value because they are scarce and alienable.
2. Human consumption has a huge status component. Even if AI-powered robotics could produce any and all goods, human labor and artistry will still remain valuable, perhaps even more so, because of its scarcity. Inevitably there will be profit to be made in appealing to conspicuous consumption, and so profit there will be.
The income of most humans in such a scenario would also be nearly zero. Cognitive and physical labor would be entirely devalued.*
I'd expect anyone with even a modest amount invested to see it soar, and even savings would be elevated in terms of purchasing power.
The question is whether this will be enough.
*Even the absolute minimal human existence requires about a hundred watts of power and raw biological feedstock. You can't lower your wages below this without dying, and every dollar that could be spent on food and shelter would be much better spent elsewhere. A comfortable existence would be significantly more expensive. In the worst-case scenario, I think humans would be killed outright; slightly less bad but still awful would be us being outcompeted and left to die by an uncaring ASI; in less bad scenarios we'd be marginalized and unable to meaningfully exercise agency.
UBI I think has too many problems to work.
First of all, it's dependent on getting the money in the first place. It's probably pretty trivial to renounce citizenship and bugger off to a tax haven today, and given that "owning AI" doesn't require you to be in the country at all, there's nothing tying the guy who owns the company to the country the AI is in.
Second, keeping the UBI within reasonable limits is impossible. There will be millions of voters with hands out to collect UBI, and maybe 100 people paying for it. When the chance comes to vote on benefits and on taxing the owners to pay for them, the only vote that keeps a politician in power is "raise the payout!" Eventually this becomes unsustainable, as you tax 95% of the income of the three people doing anything productive to pay the millions who aren't.
Third, a population controlled by dependence on government handouts to survive is not free. You can get people to do anything you want if the alternative is “lol no money for you”. And this will be 99% of the population. That’s not something to get into lightly.
Say what you want about Andrew Yang, but his idea to tie UBI to a VAT might work. It doesn't matter where the wealthy are, if they want to buy or sell in the American market it pays into the system.
Maybe. VATs get tricky for items never sold.
If I spend most of my resources on researching and building a shiny, state-of-the-art automated widget-making system (that I will never sell) such that I am able to make and sell widgets dirt-cheap, how much VAT does each widget generate?
Answer: far less than before I built said system.
This sort of thing pops up a lot in tech - fabs, VLSI chips, and software development all often fit this pattern where most of the cost is internal and as such somewhat nebulous.
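A toy calculation of the effect being described (all numbers made up): VAT is levied on the sale, so when most of the cost becomes internal and the final price collapses, the tax collected per widget collapses with it.

```python
# Illustrative only: VAT per widget before and after internalizing production costs.
VAT_RATE = 0.10

price_before = 100.0   # widget built from purchased parts and services
price_after = 20.0     # same widget, made dirt-cheap by the in-house (never-sold) system

print(VAT_RATE * price_before)  # 10.0 collected per widget before
print(VAT_RATE * price_after)   # 2.0 collected per widget after
```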
As you build your shiny, state of the art system you are purchasing items from other businesses and those items will be taxed.
If you're just saying that, "As things get less expensive, VAT will decrease," then yes, that's true, but so will the amount of UBI needed to maintain a standard of living.
This may sound silly, but presuming we get superintelligent-but-completely-domesticated AI, a government could possibly just tax the AI itself. In this scenario, a government asks an AI to pay some tax based on the money it's earned from serving and working for people. Granted, this requires the AI to actually have meaningful access to the relevant pursestrings.
Yeah, if the AI is doing stuff inside the country, that's like "establishing a business presence". Then, you can just tax whichever entity to your heart's delight.
This highlights one of my favorite contradictions about the "AI will result in everyone starving because there are no jobs" doomerism. To make it work, you need some weird, strong separation between the AI-haves and the AI-have-nots. They're, like, totally isolated, and no trade is possible, because the AI-have-nots are supposedly worthless or something. Thus, why the AI-haves are supposedly buggering off to some island tax haven or something. But then, if there's literally no trade happening between these two groups, one has to ask, "Why wouldn't there be trade within the group of the AI-have-nots?" The only answer I can think of is that pretty much the vast majority of their wants and desires are already being fulfilled by some other mechanism. But that, of course, leaves us decidedly not in an "all the AI-have-nots are starving to death or something" situation.
The AI is in the Bahamas, and it's making decisions for a business in the USA. Who gets the tax money?
As for the AI have nots starving, this is how history has tended to work for most of human history. When a worker has no useful skills he gets laid off permanently, and either subsists on a dole or goes hungry. The Industrial Revolution was also a time of great poverty with thousands reduced to living in tiny tenement housing. The Victorian Era had people living underground as it was illegal to be homeless.
What's unprecedented here is the sheer scale of the problem. There's no reason to think that a government can permanently and sustainably put three quarters of the population on welfare and still function. Nor do I find it plausible that millions of people with no prospects of useful employment are going to thrive. We have historical examples of people in that situation, and none of them have produced utopian societies. Indian reservations are impoverished shitholes compared to the surrounding communities. So are ghettos. Rome created a huge underclass full of dysfunctional families with her dole. Turning all of America into a giant reservation where everyone lives on the dole is not going to create a flourishing society that makes hippy art. It's going to create poverty and corruption and dysfunction.
If the US wants that tax money? The US clearly and obviously can do this right now (and in many ways, they do), without an AI involved.
For many of the periods you're talking about, the vast majority of the population was actually doing subsistence farming. Obviously, this was not a nice life, especially given their level of tech, with even extremely rudimentary advances still on the horizon. They were much more at the mercy of things like weather patterns. While going off to work the land had downsides, it was an alternative. If a bunch of folks basically had to go off and do that, they could again trade with one another, coming from a baseline of ideas/tech that they could generate in-community that is significantly higher than what was possible at those times.
I think this scenario is a lower tech version of the paperclip maximizer: the AI haves simply don't value the well being of the have nots, and take up all the resources, desirable land, etc with their superior technology. In the extreme case think something like: these N acres could produce wheat for 100000 people, or they can be used to pasture grass-fed, free-range, spa lifestyle cows to produce one weekly meal of 10 aristocrats.
This seems very very unlikely, but not impossible, looking forward 30 years feels like a crapshoot right now!
This is the part where they're operating in your country, and so you can do stuff like taxing them. Unless it's also like the paperclip maximizer in that we assume any attempt whatsoever to do things like that results in them just casually killing you. My point is that then you have a different problem. You have an "AI-haves killing people" problem, not an "AI-have-nots just starve because they're unable to produce/consume sustenance-level calories (or stuff worth sustenance-level calories)" problem.
Similarly, if it's, like, China who gets AGI/ASI and they start conquering in order to take up all the resources, desirable land, etc., you don't have an "AI-have-nots just starve" problem. You have an AI warfare-and-killing-people problem.
That's basically the gist. The rich own all the resources just like they do now, but as the AI becomes more and more capable, the rest of people can't even offer their labor in return for resources because they can't compete.
They weren't ever a huge, load-bearing part of the economy, but the sector was a lot larger than it is now. Back before video and audio recording technology, if you wanted to listen to music or watch a play, someone had to do it live for you. That meant there were a lot more paid performers, and a lot more people skilled in music and the performing arts as a serious hobby. While the economic loss was relatively small compared to, say, the loss of manual labor, it does mean there are more people who feel unfulfilled because they will never be able to support themselves doing what they love. It's a psychological loss similar to the complaints you hear from artists about AI drawings.
Every man a project manager!
No, seriously - if AI trends continue, it might be good at writing memos, doing research, constructing arguments, finding citations, booking meetings, constructing presentations, drafting architectural plans, etc. If every office worker gets that capability at his fingertips, it (in theory) means that pretty much anyone who is decently literate and competent can then supervise loads of AIs doing loads of work - because AI ain't gonna prompt itself. Competition will keep the price down on AI, whereas if each man is suddenly 8x as productive he might be able to bring home a managerial salary.
I suspect things won't turn out quite this way (or at least not for a while) but hey it wouldn't be so bad an outcome.
I have project managers who work with me. I don't want to become one.
I do think AI has made programmers more productive, and that it is about to make them much more productive to the point where they might be essentially project managers.
What remains to be seen is what economic benefit that will have. For example, if there are 10x as many video games as there were before, do they create 10x the economic value? Of course not.
As the economy becomes more digital, it becomes a lot harder to quantify gains in GDP. In 1987, The Legend of Zelda might have cost $50. Today, for almost the same price, you can buy a Zelda game with far better graphics and a much longer story line. But does the consumer today get more enjoyment from 2025 Zelda than from 1987 Zelda? I don't think so. The hedonic treadmill is real.
Similarly, does the economy grow from making TikTok 20% more addictive? Does it grow from adding AI-generated thots to Instagram? Or from making AI girlfriends? A lot of the stuff that happens in software, maybe even most of it, is just not that important, or is even counterproductive.
Somewhat related...
The dream of reducing drudgery by offloading it to AI might fall flat too. AI will make it possible for a human lawyer to easily glean information from a 1000 page document. But it will also make it possible for that same human lawyer to produce a 100,000 page contract of dense legalese. Existing improvements in technology have seemingly only increased the demand for lawyers.
Video games are (mostly) saturated, although I think that AI can reduce the amount of manpower required to make a AAA game and therefore encourage experimentation and proliferation in ways we haven't seen since the 2000s.
More importantly, though, there are huge realms of software development that are mostly untouched because they're tedious and uninteresting to skilled, highly-paid software engineers. I think that AI-driven software development could vastly improve the quality and user experience for 99% of the software that ordinary people (not tech bros) use.
Anecdotally, I'm making good progress on some personal software projects now that I don't have to write all the tedious bits after work.
I won't consider video games saturated until developers can create faster than "content locusts" want to consume. Currently, games that provide a lot of playtime relative to developer time still have significant gaps between major releases. Path of Exile, for example, had approximately three minor releases and one major release per year. Given that most players play for 1-2 weeks after a release, this produces 4-8 weeks of player time per year of development time. If you're into a more niche genre, you might be looking at one or two good titles per decade.
Some people might argue that we already quietly hit this point well before the current AI craze. The struggles of the modern gaming industry and the indie scene are partly because it's (perceived as) hard to peel chronic Minecraft/Fortnite/COD/etc. players away from their comfort games.
That might be what people say but the real issue is that the games are mediocre trash. As soon as anything decent actually is released people flock to that game.
All this is (almost) only cope for bad developers.
Even if you're in a major genre like RPGs you might still only get a good game once every couple of years. That there is a sea of uninspired and boring shit out there doesn't really matter.
The only parts of the market that really are closing in on being saturated are the ones where the playtime is essentially infinite, like competitive multiplayer games.
What is happening is perhaps comparable to the book market. Does there being practically no barrier to entry mean that the market is saturated? No, it means the market for mediocre slop is saturated - slop of such low quality that the vast majority of prospective consumers have negative interest in it, or only use it as a sort of background noise to fill time. Some might even argue that there are fewer worthwhile books to read despite there being more words written than ever before.
Bingo.
I am slightly hopeful that 3D printing (and I guess 3D printing + AI) will get us to some good places in the tangible meatspace. I also suspect that, if Space Economy becomes real, there might be a lot of possibilities for material improvement (some of which might be tied to white collar type jobs, like Martian Rock Rover Supervisor).
Yes, and this is a horrible thought. I would be quite happy with a law banning contracts that cannot be meaningfully understood in 5 minutes. That's not per se a law banning 100,000 pages of dense legalese, but I should be able to read a contract in one sitting with no surprises. Same with a law.
(Lawyers will love this once they realize it means litigating over whether the fine print was adequately represented by the topline!)
This ties into an axiom of my political views. Give or take:
(Which then feeds forward into constitutions, hierarchical laws, kitchen-sink bills, etc, etc.)
Aha. I love this.
I expect Prompt Engineering will turn out to be the world's shortest lived career.
I think it'll turn out to be a skill, in the same way that 'Human Engineering' - collaboration with colleagues - is a skill. Envisioning and describing what you want in reasonably precise terms, then zeroing in on it as part of a conversation, is a skill that many people don't have. It's not going to be enough to sustain a career entirely on its own, but it's going to be a big boost for one.
Telling a computer what you want it to do with such clear terminology and logical consistency that it can't possibly fuck it up is just programming.
AI companies have a strong incentive to make prompting easy, and they already have; I recall the days of using the GPT-3 base model and trying to get it to do anything useful. Right now, the models are significantly smarter and are in fact quite proactive about asking clarifying questions and making useful suggestions the user hadn't considered. In the limit, this makes any prompting beyond formulating an initial request redundant.
We're not there yet, but we're close. Eventually the systems will just understand intent or outright demand clarification, and fancy prompting won't add much to the equation.
I want to switch to whatever programming language this is describing.
I can imagine a language where I just name classes, then write a bunch of short method declarations and invariants and postconditions, and finally the compiler/AI figures out what long complicated algorithms and data structures will satisfy everything most efficiently, but right now it's still just not enough to be merely clear and consistent.
This, though, is hard to argue with. It feels like skill with "prompting" is more like what you need to do to trick a language model into emulating AI, not something you'd need to do to the extent a model is actually AI.
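As a toy illustration of that declare-the-what, let-the-machine-find-the-how style, here's a minimal sketch using the z3-solver Python package (a constraint solver, not the imagined compiler/AI); the constraints are deliberately trivial and the point is only the shape of the workflow.

```python
# Declarative style: state invariants and postconditions, let the solver find concrete values.
from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()

s.add(x + y == 10)   # postcondition: the parts must sum to 10
s.add(x > y)         # invariant: x dominates y
s.add(x % 2 == 0)    # extra constraint: x is even

if s.check() == sat:
    print(s.model())  # e.g. x = 6, y = 4 -- we never said how to find them
```

Scaling this from toy constraints to "long complicated algorithms and data structures" is exactly the part that doesn't exist yet.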
Prolog and similar logic programming languages have provided this for a while.
Huh; thank you. Looking around, I think "The simplicity of Prolog" makes a compelling argument ... but on the other hand it's a little disconcerting not to find any Prolog examples in e.g. The Computer Language Benchmarks Game. If it were a new language I'd assume there was a chicken-and-egg problem here, where people shy away from interesting-but-unpopular languages for economic reasons and then those languages don't become popular ... but Prolog is older than I am, as old as C, with open source implementations decades old. What's the catch?
Probably the combination of being notoriously slow and requiring significantly more up-front design than other languages. EDIT: And being more associated with ivory tower academics than "hackers"...
First off, that's not what he is describing. Secondly, that is only part of being a programmer.
Programming is both a kind of general problem solving related to how software works and can work, and the technical skill of writing the actual code.
I assume the latter part will be the first to be mostly automated away, which will greatly increase productivity; when the other part is fully automated too, I'm not sure there will be any more white collar work at any level anywhere.
It's not what he was describing. It was my extension, a claim that sufficiently rigorous and exhaustive "prompt" programming is just regular programming.
I have written a few programs (they compiled! eventually...), and I am aware of all the other miscellaneous errata a competent programmer must keep in mind, like dependency trees, versioning, and accounting for spaghetti code and legacy code that will collapse if you sneeze at it wrong. That's what I meant and was gesturing to, taking all of that into account. I should consider myself lucky that I've never had to grapple with legacy code bases.
That's not quite what I meant. I'll see if I have enough time to respond tomorrow morning.
I didn't mean anything so stringent as programming. I only mean that reasonable clarity of thought and expression is a gift that many don't possess; the Motte is very wordcel-heavy and I think people forget this. The AI can only do so much.
My guess is that prompt engineering won't be a career per se at all, except possibly for artists.
2027 inside a Fortune 500 office:
"What if, hear me out, we just ask the AI which prompts to write and then lay off the Prompt Engineering division."
Isn't that how most AI assistants are structured? You ask the frontend LLM for a picture of a cat dressed as a ninja pirate, it asks the backend network using the whole "exquisite quality, trending on artstation" lingo.
If the AI is not only doing all the work but also deciding how to do it and what to do, then there is not really a need for humans in any part of the process, not even the C-suite.
Why would the government/military uphold the social contract of private ownership at that point? And after that, why not let the AI run the government and military as well? After all, the ones who don't will quickly get outcompeted.
"The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment."