
Culture War Roundup for the week of November 14, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


None of that makes their claims wrong. Is this even a psychology thing? Things like 'existential risks to all life' do exist (for historical examples: atmosphere oxygenation "caused the extinction of many existing anaerobic species on Earth ... constituted a mass extinction", Chicxulub "a mass extinction of 75% of plant and animal species on Earth, including all non-avian dinosaurs", ice ages, the human population bottleneck), and technology makes it easy for us to do similarly bad things. Preventing such things is important, and it would be stupid to write them off. So this isn't even a psychological "hack", it's just ... something important that one can be wrong about. But it's no more worth ignoring than the doctor saying you need surgery or you'll die, just because faith healers say you need a protection spell or you'll die. That X-risk leads people to make difficult and complex decisions is good.

Yes, all predictions of the apocalypse up until this very second have been wrong.

Yes, the next one might be right.

No, it won't be, but it might.

Sooner or later, it's either the big ol' HDU or something else.

There is an absolutely unbroken line of apocalyptic dreamers who prefer their fantasy to dealing with the reality of the world. It goes back to the beginning of time when the first dipshit looked up at a shooting star and said "that means the gods are mad at us, we're all gonna die".

I'm doing a rationalist calculation. What are the odds that I meet the first Jeremiah in the history of the world to be right? I mean, a lot of people have lived a lot of lives, and most of them thought the world was going to end in their lifetime. I'm pretty hot shit, but I doubt I'm lucky enough to even be alive at the same time as that guy, much less live in the same country, speak the same language, share enough personality quirks with him to wind up in the same internet forums, etc. What are the odds?

Pretty goddamned good, since I've lived through fourteen to twenty apocalypses (apocalypsi? apocalyps'?) to date. What a life. Fire, flood, the return of Christ, acid rain, Y2K, nuclear war (several times!), the Macarena, Global Warming and now AI? Bring it on, I say.
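(As a hedged aside, one way to formalize the base-rate intuition above is Laplace's rule of succession: after N failed predictions and no successes, a naive estimate that the next prediction succeeds is 1/(N+2). A minimal sketch in Python follows; the tally of failed doomsdays is a made-up placeholder for illustration, not data.)

    # Illustrative sketch only: Laplace's rule of succession applied to
    # the base-rate argument about failed doomsday predictions.
    def laplace_next_success(successes: int, trials: int) -> float:
        """P(next trial succeeds) under a uniform prior: (s + 1) / (n + 2)."""
        return (successes + 1) / (trials + 2)

    failed_doomsdays = 20  # hypothetical tally ("fourteen to twenty apocalypses")
    print(laplace_next_success(0, failed_doomsdays))  # 1/22 ~ 0.045: small, but not zero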

I gave several examples of literal apocalypses though

Again, massive technological change really can make apocalypses possible. There clearly is a difference between 'god will fire lightning and boom everyone for not being religious enough' and 'billions of dollars and millions of man-hours of the smartest people in the world are being invested in AI, what if it works'.

There is an absolutely unbroken line of apocalyptic dreamers who prefer their fantasy to dealing with the reality of the world

Given that EA spends more time and money on malaria nets than AI risk, this is clearly not an accurate statement about them. More generally, that doesn't actually make AI risk false.

Even if EA and lesswrong were - entirely - irrational and religious cults around AI risk, that wouldn't make AI risk false. And there are stupid, illogical cults around AI - there was and still is a lot of popular scifi larp about "the singularity", "mind uploading", etc. This doesn't make the AI go away.

I gave several examples of literal apocalypses though

Yeah, but those occurred on the rough timescale of once every billion years, and all prior to anything anywhere near humanity existing. The ratio of apocalypse-level events humans have worried about to ones that have actually happened on a timescale that concerns us is therefore at least that high.

Technology has increased the rate of change of everything, though. A million years of hunter-gathering, 100k years of fire, 10k years of agriculture and civilization ... 300 years of industry, 60 years of computers... If AI doesn't happen, what does happen in the next 10k years, and why hasn't AI taken power from humans by then?

I got raised on "we must avoid nuclear war, if it ever happens it will doom humanity, never mind the hundreds of millions killed by the bombs, the nuclear winter afterwards will mean we all starve and freeze to death in the dark". Nuclear war was the existential risk of the 60s-80s. This animated film frightened the life out of a generation of kids.

Now I'm reading "eh, nuclear war isn't that bad, sure it will kill a lot of people but not everyone, it will not be the end of civilisation much less humanity, and even nuclear winter was exaggerated".

What changed about nuclear bombs in the meantime? Nothing, so far as I can see, but the attitude around that risk has changed. Now it's climate change that is the existential risk of our time. AI can just take its place in the queue of "This time for sure, says Chicken Little" about the sky falling.

Nuclear risk is actually one of the EA Big Four - AI, bio risk, climate change, and nuclear war. I have met far more people in EA who actually care about nuclear war than I have anywhere else.

AI is getting more salience because it's perceived as more neglected, and neglectedness is one of the key criteria for an EA cause area. Nuclear war is mainstream and nobody wants it, so it isn't as neglected as the other areas.

I'd like to note, though, that when I asked this question from the perspective you've mentioned, I got at least one reply to the effect of "you could still be boned." Nuclear war might not end all human civilization immediately, but even post-Cold-War media still tended to portray the post-nuclear world as pretty bleak even if there were still some people alive (after all, in such a world, the living might envy the dead). That is to say, even today, nuclear war could get pretty bad; we just have reasons/copium to believe the possibility space also contains scenarios that aren't "rubble and deserts everywhere."

Your argument goes like 'society was wrong and people lied about nuclear war, and used that to manipulate people - so that must be what's happening now'. Which ... sure, that can happen - it constantly happens - but the reason people even want to avoid x-risks is that it's important to avoid disasters when disasters exist, and disasters sometimes exist.

This is like saying: "You're worried about high crime? People in the 1900s were worried about racialized crime destroying society, and they were wrong, and racist. Therefore crime doesn't matter." You can't write off the entire idea of 'bad thing happening' because people misuse the idea!

What I'm saying is, this is not my first rodeo. If even one of the x-risks that were sure-fire, guaranteed gonna happen as prognosticated over the course of my life had actually happened, we'd have been disposed of by now.

So "Oh no AI is gonna doom us unless!" talk is nothing special. "But AI is different" - yeah, well, so were nuclear weapons. It's not AI that is the risk, it's humans. We are the greatest threat to ourselves.

It's not AI that is the risk, it's humans

This doesn't make sense; humans are going to create the AI. If humans create super-smart AI and it kills us all [hyperbole], that's still humans being a threat to ourselves.

Again, this is like saying "well, nobody I know has had their building collapse, so what do we need building codes for?" Or, if you insist on things that (at the time) didn't have examples - "meh, nobody's ever nuked another country / had a reactor meltdown before, what do we need nuclear strategy / regulation for?"

I gave several examples of literal apocalypses though

I did too. They all were considered possible, likely or certain by millions of geniuses in their day (except the silly ones, of course). They all had a reason why this time it was for real. They all happened, for some definition of "happen", and they all did not result in the end of the world, humanity, life or anything else so dire. I'm sure AI is dangerous. I'm sure we'll have some colossal fuckups with it that will probably damage something important. When this happens, the frenzy will begin in earnest. Timelines will be settled on, politics will change, a solution will be found, and we will learn to live with it, as we have with nuclear weapons.

Whichever generation of asshole eschatologists alive at the time will write a million books saying they averted the apocalypse. Ten seconds later, it will be something else, and everyone will forget about it. The End of the World is dead, long live the End of the World.

Maybe I'm wrong. Tell you what, if the world ends due to AI, I'll give you a million dollars.

Timelines will be settled on, politics will change, a solution will be found

Have you read any yud or lesswrong writing on AI safety? They put a lot of effort into addressing the exact concerns you've laid out, in a way that you don't seem to acknowledge. But leaving that aside - why, though? Why will the competent people find a solution? It's clear how we were able to find solutions to, e.g., nuclear weapons, religious conflict, etc. But AI will be - it's argued - not a simple mechanism we can out-think and coordinate around, but smart on its own. As a random example - what is the political solution to "AIs now control the global economy"? The AIs are going to be the ones "finding solutions", not "human politicians". You can't psychologize your way around a gun to your head, and no amount of "you're just scared, it's OK, the grownups will handle it" will physically stop an AI!

Yes, yes, very smart people disagree with me. Your argument is to handwave me at the vast canon of AI scribbling? I've read the big ones, and they are as unconvincing as they are hysterical. It's a very specific style, one I recognize well from my upbringing in a millennial faith-healing cult. It all sounds very convincing, if you haven't been down this road before.

Your argument is to handwave me at the vast canon of AI scribbling

What I mean is that ... claims like "we've solved lots of problems before, AI will be fine" and "there will be a disaster, we will notice it, and then politicians will solve it" are things that people have written dozens of essays debating. It's like talking to someone here about race and genetics and just saying "races aren't real. it's a distribution, not a category. and stereotyping is bigoted". Everyone here has heard that before, and hundreds of people have written hundreds of paragraphs about why it's false. Maybe it's still true in some way, but that point is best made by addressing those arguments, not just restating the claims.

I've read the big ones, and they are as unconvincing as they are hysterical.

Yeah, how so, precisely? Again, it'd be much more interesting to read about why yud's arguments are wrong than "hurr it's a cult, you are being manipulated, accept my social pressure instead of theirs".

The big danger is not AI, it's people who want to use AI to make a shit-ton of profit. The flip-side of the Fairy Godmother AI that will be so smart it will solve all the intractable problems that unaided humanity could never solve, and provide Fully Automated Luxury Gay Space Communism for all, is of course the Paperclip AI that will do away with us all (and that only if we're lucky, otherwise it's I Have No Mouth And I Must Scream for all).

(Side note here: congratulations, rationalists, you have managed to re-invent God and the Devil, Heaven and Hell, all over again even while wondering how anybody can possibly believe in religion's view of the afterlife).

So people who desperately want Fairy Godmother AI have to grapple with the possibility of Paperclip AI.

I don't accept the problem in those terms. I don't think we're ever going to get superhumanly smart AI that can be its own agent with its own goals any time soon, if at all.

What I do think we will get is 'good enough' AI that private enterprises and governments will try exploiting just for that tiny edge. If trading fortunes can be made and lost on microsecond decisions, why not use your patented money-tree AI to make nanosecond decisions? Why not use AI for the triage problems in social welfare and health care that insurance companies are currently handling with "ring up our hotline and some half-trained person running off a script will decide if you qualify to go see a doctor"?

We'll hand over decision making powers to dumb machines in order to make money, and we'll fuck ourselves up in the process. That's the risk, not a god-level intelligence AI deciding it wants to get rid of its monkey masters and tile the universe with NFTs.

Now, if you can solve the problem of "humans: we're still greedy, dumb monkeys fighting each other over who gets the bananas", then we won't have the problem of "uh-oh, we fucked up how our civilisation works". That's why I think "we must solve the problem of getting AI aligned with our morals/values!" is the wrong track to take; we humans won't even align with our own morals/values, and besides, "fuck over that guy so I can get more bananas" is completely compatible with how we act morally/express our values, so if the AI aligns with that, why be surprised at what results?

We'll hand over decision making powers to dumb machines in order to make money, and we'll fuck ourselves up in the process. That's the risk, not a god-level intelligence AI deciding it wants to get rid of its monkey masters and tile the universe with NFTs.

Yes, and handing over all the levers of society to very intelligent machines is bad. Why will the machines be dumb forever? Even if they are dumb at first, they'll become smarter, and quickly - see DL progress and the theory of computation.

I don't think we're ever going to get superhumanly smart AI that can be its own agent with its own goals

Yeah, this is the main issue! Why? Even ignoring object-level arguments - look how rapidly technology has advanced over the past 200 years; does that just ... stop?
