This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
This sort of outcome is what makes it very, very difficult for me to take the AI doomerism seriously. Yes, we may get Paperclip Maximiser AGI, but I think it's much more likely to come about by "humans in notional charge think it will make them trillions and so follow blindly its advice" than "machine becomes agent and decides on its own goals". I have no belief in Fairy Godmother AGI that will make every single human on the planet (and that means every single human, not simply 'coastal cities PMC types') rich and happy forever and ever because it magically figured out workarounds to bypass the physical limits of the natural world to give us free energy and infinite resources.
Some people are going to get very, very rich off this; the rest of us? Survival, scrabbling, or the gig economy, same as right now.
The theorists of alignment were doomed from the start, since in reality it was never going to work out how they hoped. 'Let's write heartfelt letters about the dangers of AI and the need to slow down research in order to avert the danger to humanity' - yes, and who signed off on that letter? One Mr. Sam Altman. I read a claim in a news article that this kind of 'support' from corporations was all about positioning themselves to be first to market and making it difficult or impossible for smaller, newer start-ups to rival them, and not at all about the ostensible 'threat to humanity'.
And I think we see this working out in real time right now. The pro-safety faction within OpenAI moved against Altman, due (it is being speculated) to fears that he was too much on the "get a product out to market, and to heck with the cautious safety-first approach" train. This hit Microsoft's share price, and now Altman is back (for the moment, anyway) and it's a safe bet that OpenAI will now be moving ahead with enabling Microsoft to gain first mover advantage by having their pet AI widely available commercially.
OpenAI's real function, even if the idealists on the board didn't realise it, was to provide the necessary reassurance for the regulators and government: "yes indeed, we are ticking all the safety boxes, no worries!" That's why Altman scolded Toner for her paper; it didn't matter that it was only really read by nerds, it wasn't doing her job, which was to help sell OpenAI as 'the bestest, safest, no need for government interference while we develop the product'.
I'm not sure I follow your logic here.
You don't take AI doomerism seriously because you think that AI doom is likely but through a different path than the 'paperclip maximizer'? I'm pretty certain that the AI safety crowd are just as worried about manipulative oracle AIs as they are about mindless paperclip maximizers.
I think the problem is not machines, but people. And if people blindly put their trust in "the machine output must be correct" because they have visions of dollar signs, then we're screwed, but it's not the AI that decided to screw us, it's the people who were doing things in accordance with what the AI said.
Right now, OpenAI - which has its lovely charter about safety and so forth - is trashed. It wasn't their AI that suddenly woke up and became an agent with goals of its own that did it, it was good old-fashioned greed. Getting rid of Altman was supposed to slow down the adoption of unaligned or insufficiently aligned AI. It was perceived as costing money, and so money won in the end.
Sounds like you still agree with us doomers? We don't expect human greed / competitive pressures to go away any time soon, which is why we're worried about exactly the kinds of money-winning scenarios you propose.
How exactly are the coastal PMC types going to get rich in a way that doesn't enrich the rest of us?
If AGI can manufacture goods much more cheaply, then that means cheap goods for everyone. If AGI can provide services for zero to low cost, that means cheap or free services for everyone.
While there are situations where individuals can get rich at the expense of the masses through rent-seeking (I'm thinking someone like Carlos Slim monopolising Mexican telecoms) the overwhelming majority of billionaires got that way by providing something useful to the masses. Elon Musk sold luxury electric cars, Jeff Bezos provided an online retail experience far superior to anything that came before it, Steve Jobs sold consumer-friendly, well-designed electronics.
If Sam Altman ends up a trillionaire, how exactly could that leave the rest of us poorer?
Bezos, Musk, etc. have fortunes but that money is not making its way to me. I have even less reason to think that Altman as a trillionaire is going to mean the lake fisherman in Tanzania suddenly getting thousands of dollars extra per year as a wage. Cheap goods/services for everyone is a nice-sounding idea, but it relies on "I have enough money as disposable income to purchase those goods/services". If I lose my job because the company replaced me with AI, it doesn't matter how 'cheap' the next model iPhone is because now it's made by AI, I'm not going to be buying one.
To the degree that they hold cash, it is making its way to you about as much as any other cash; as for their wealth, which they created, they created it by generating lots of utility. I can order something online and have it the very same day. That's awesome. Thanks Jeff. You deserve all that money for doing something so awesome. You earned it! Thanks Elon for the cool cars and internet!
These are things people want (we know this because people pay them for these things).
If you've ever used Amazon then you have benefitted from Bezos' success. You've benefitted from the consumer surplus generated by Amazon's existence. Whether that is from cheaper goods, faster delivery, greater choice, more convenience, the fact that you've used the website demonstrates that you've derived value from doing so relative to what else was available. The same goes for any other company that you've ever interacted with.
And if everyone's jobs get replaced by AI without any financial recompense, then nobody will have any money to spend at the companies that did the job-replacing. They would need to compete with each other for what little purchasing power remains, which means lowering prices to near-zero. This is easy enough when your labour costs have been reduced to zero by the AI that took everyone's jobs.
AI represents a potential increase in productivity, and increasing productivity is literally what economic growth is. From the industrial revolution to now, increasing productivity is why we were able to escape the zero-sum world that existed before.
Whether it destroys the world is another thing, of course.
Just because there are no humans in the loop purchasing or selling goods and services doesn't mean that companies are out of luck, they'll merely sell to each other, with a fully automated economy akin to a Disneyland with no children.
Automated Tesla sells electric vehicles and batteries to companies providing transportation to automated mining companies that sell ores to automated refining and manufacturing companies that sell it to someone else. There doesn't need to be any humans involved anywhere, barring those who own a stake in such entities, and the loss of human purchasing power from automation will mean fuck-all.
Why would they need to sell goods to people with no purchasing power... or combat power, considering that being a soldier is also a job? Economics ends if there is no scarcity.
Why wouldn't they build giant theme parks for themselves, or clone and cater to themselves, or run off to explore space, or have fun in VR in a giant underground fortress guarded by robots? I believe that power corrupts, that absolute power corrupts absolutely. A world where one or a few men control all wealth and power is not going to be good for those without wealth or power.
I can think of a few ways.
Fast and constant inflation absorbing the productivity gains of technology into asset prices.
AI making society super productive, but a loaf of bread costing 10 bucks and only the richest being able to afford land. You'll own nothing and you'll be happy with UBI in exchange for guarantees of control, which is the model of Altman's other venture, Worldcoin.
A rising tide lifts all boats in a free market. We do not live in one.
If the current marginal cost of production for a loaf of bread is about $2 (just looked at the website of the closest grocery store to my current location), and AI makes society super productive, do you think the real marginal cost of production for a loaf of bread will be (a) Less than $2, (b) About $2, (c) Greater than $2, but less than $10, (d) About $10, or (e) Greater than $10?
If you chose one of (a-c), why do you think that the price of bread will not trend toward the marginal cost of production, as is the standard result in economics for goods like bread?
If you chose one of (b-e), why do you think that an across-the-board increase in productivity will not reduce the marginal cost of production of bread?
This appears to be pro-gold/pro-bitcoin. But in a lot of those graphs you can just as easily pick '81; then you have sinking interest rates as the nice correlation. The Fed ordered that assets be more expensive for 40 years, and people wonder why labour isn't getting its share.
I'm aware there's also that narrative going around; I just provided the Austrian side (which would actually agree with that assessment of the Fed's policy). But Marxists are also quick to point out that productivity gains go not to the workers but to capital. They point to different causes, but they too have a possible story for capitalism not lowering inequality, with or without using surplus value as a framework.
In any case I don't think the assumption that productivity gains make everyone wealthy necessarily is warranted.
This is moving the goalposts to a distant planet.
I've heard before the criticism of "AI doom is not certain, therefore we shouldn't worry about it". I've never heard before "One type of AI doom is less likely than another type, therefore we can't take people who worry seriously".
You're missing my point. AI doomerism is about a certain undesirable outcome. Meanwhile, in reality, the rug is being pulled out from under them in a way that has nothing to do with "what if we develop an AGI that becomes self-aware and recursively boosts its intelligence to god-tier levels, without being aligned with liberal West coast values?" but rather "what if you threaten our return on investment, that means we must act to stop you".
The power has now switched from "the people who worry about alignment" to "the people who can guarantee a product to market". So if we are going to be doomed by AGI, the alignment people were working on the wrong problem all along - they needed to be converting their "profit first" colleagues and investors to "safety first" and it's pretty clear they didn't see it coming until too late, and when they tried to avert that by firing Altman, now they're the ones being shown the door.