Culture War Roundup for the week of April 22, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I fully understand that it would be nearly impossible for humans to control a superintelligent AI. I just don't care much about it. I don't have any children. If humanity were destroyed by superintelligent AI, my attitude toward it would, aside from the obvious terror, also probably include some mirth. The lords of the known world, those who conquered all those other species, now destroyed by the same cold Darwinian logic of reality.

My point is that, while the Skynet scenario is definitely possible, the altruistic AI that loves humans scenario is also possible. There's no particular reason to think that a hyperintelligent AI would have the sort of incredibly hardwired "kill all opposition" motivation that we as humans have as a result of having evolved through billions of years of eat-or-be-eaten fighting. Of course AI, just like everything else in reality, is subject to natural selection, but there is no reason to think that AI would be subject to natural selection in a way that makes it violent in the ways that we humans are violent.

"the altruistic AI that loves humans scenario is also possible."

It is not realistically possible. It would be like firing a very powerful rocket into the air and having it land on a specific crater on the moon with no guidance system or understanding of orbital mechanics. Even if you try to "point" the rocket, it's just not going to happen.

You're thinking that AI might have some baseline similarity to human values that would make it benevolent by chance or by our design. I disagree. EY touches on why this is unlikely here:

https://intelligence.org/2016/03/02/john-horgan-interviews-eliezer-yudkowsky/

It's not a full explanation, but I have work I should be getting back to. If someone else wants to write more, then they can. There are probably some Robert Miles videos on why AI won't be benevolent by luck.

Here's one:

https://youtube.com/watch?v=ZeecOKBus3Q

I'm not going to watch it again to check, but it will probably answer some of your questions about why people think AI won't be benevolent through random chance (or why we aren't close to being skilled enough to make it benevolent by design rather than chance). Other videos on his channel may also be relevant.

It is not realistically possible. It would be like firing a very powerful rocket into the air and having it land on a specific crater on the moon with no guidance system or understanding of orbital mechanics. Even if you try to "point" the rocket, it's just not going to happen.

Oh bullshit. Intelligent agents co-align. That is, they modify themselves and one another to become more aligned with one another. It's not a single rocket that has to be perfectly aimed; it's a billion rockets with rubberbanding.