Culture War Roundup for the week of March 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Gary Marcus

Wait, what? Wasn't his shtick that GPT, DALL-E, etc. are very stupid and not worth much? That there is no genuine intelligence there because it cannot draw a horse riding an astronaut, or solve some simple logic puzzle? Now he is so concerned about the capabilities that he wants a moratorium? Is there a post somewhere where he explains why he got it so wrong?

Everything I have ever seen from Gary Marcus suggests to me that on the issue of AI he is simply on the side of "against": he doesn't like it, it's stupid, and it's also probably dangerous.

He has a series of notes on how he's right. E.g., AI risk ≠ AGI risk:

My beliefs have not in fact changed. I still don’t think large language models have much to do with superintelligence or artificial general intelligence [AGI]; I still think, with Yann LeCun, that LLMs are an “off-ramp” on the road to AGI. And my scenarios for doom are perhaps not the same as Hinton’s or Musk’s; theirs (from what I can tell) seem to center mainly around what happens if computers rapidly and radically self-improve themselves, which I don’t see as an immediate possibility.

But here’s the thing: although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don’t have to be superintelligent to create serious problems. I am not worried, immediately, about “AGI risk” (the risk of superintelligent machines beyond our control), in the near term I am worried about what I will call “MAI risk”—Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed—both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. …

Perhaps coupled with mass AI-generated propaganda, LLM-enhanced terrorism could in turn lead to nuclear war, or to the deliberate spread of pathogens worse than covid-19, etc. Many, many people could die; civilization could be utterly disrupted. Maybe humans would not literally be “wiped from the earth,” but things could get very bad indeed.

How likely is any of this? We have no earthly idea. My 1% number in the tweet was just a thought experiment. But it’s not 0%.

Hinton’s phrase — “it’s not inconceivable” — was exactly right, and I think it applies both to some of the long-term scenarios that people like Eliezer Yudkowsky have worried about, and some of the short-term scenarios that Europol and I have worried about.

Here's his first piece in this spirit that I've seen: Is it time to hit the pause button on AI?

An essay on technology and policy, co-authored with Canadian Member of Parliament Michelle Rempel Garner.

It's crushingly unsurprising that a chronic bullshitter like Marcus has grown concerned with misinformation and is teaming up with «policy» people to regulate this tech. Means don't matter. The real issue is control. For Marcus, the control of authority and academic prestige. For the people behind Canadian MPs, actual political power.

I don't understand what it is that you don't understand. The fact that Person X thinks that no good will come of Thing Y should increase the likelihood that X wants Y banned. It's Person Z, who thinks some good might indeed come of Y, who has reasons to not want Y banned.

This is literally true given the way you phrased it, but "no good will come" is not the same thing as "it won't work". It is possible to believe that something is bad because it won't work (if you think good would come from it working), or that something is bad because it will work (if you think that its working implies bad things).
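To make the distinction concrete, here is a minimal sketch in Python (purely illustrative; the `Belief` fields and the `no_good_will_come` predicate are labels I'm introducing, not anything from the thread). It shows how "it's stupid and won't work" and "it will work, and that's the danger" are different beliefs that both satisfy "no good will come of it":

```python
from dataclasses import dataclass

@dataclass
class Belief:
    will_work: bool        # does the believer expect the thing to function as advertised?
    outcome_if_works: int  # value the believer assigns if it works (+ good, - bad)

def no_good_will_come(b: Belief) -> bool:
    """True if the believer expects no good, for either of two distinct reasons:
    it won't work at all, or it will work and the outcome is bad."""
    return (not b.will_work) or (b.outcome_if_works < 0)

skeptic = Belief(will_work=False, outcome_if_works=+10)  # "it's stupid, it won't work"
doomer  = Belief(will_work=True,  outcome_if_works=-10)  # "it will work, and that's the problem"

# Both positions satisfy "no good will come" (and so, per the comment above,
# both give a reason to want the thing banned), yet they disagree about "it will work":
assert no_good_will_come(skeptic) and no_good_will_come(doomer)
assert skeptic.will_work != doomer.will_work
```

On this toy reading, both positions consistently support wanting a ban; they just disagree about which premise is doing the work.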