Culture War Roundup for the week of December 16, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

We can torture a computer terminal all day without causing the LLM it is connected to any distress. It's nonsense to talk about it having physical sensation. On the other hand (to address your question about the "meat-limit"), we can take a very simple organism, or one that likely does not have consciousness, and it will respond instantly if we torture it.

This seems less like a philosophically significant matter of classification and more like a mere difference in function. The organism is controlled by an intelligence optimized to maneuver a physical body through an environment, and part of that optimization includes reactions to external damage.

Well, so what? We could optimize an AI to maneuver a little robot around an unknown environment indefinitely without it being destroyed, and part of that optimization would probably involve timely reaction to the perception of damage. Then you could jab it with a hot poker and watch it spin around, or what have you.

But again, so what? Optimizing an AI toward steering a robot around the environment doesn't make it any smarter or fundamentally more real, at least not in my view.

> This seems less like a philosophically significant matter of classification and more like a mere difference in function.

Well sure. But I think we're less likely to reach good conclusions in philosophically significant matters of classification if we are confused about differences in function.

> We could optimize an AI to maneuver a little robot around an unknown environment indefinitely without it being destroyed, and part of that optimization would probably involve timely reaction to the perception of damage. Then you could jab it with a hot poker and watch it spin around, or what have you.

And while such a device might not have qualia, it makes more sense (to me, anyway) to say that such an entity has the ability to, e.g., touch or see than it does to say the same of an LLM.

> But again, so what? Optimizing an AI toward steering a robot around the environment doesn't make it any smarter or fundamentally more real, at least not in my view.

In my view, the computer guidance section of the AIM-54 Phoenix long-range air-to-air missile (first tested in 1966) is fundamentally "more real" than the smartest AGI ever invented but locked in an airgapped box, never interfacing with the outside world. The Phoenix made decisions that could kill you. An AI's intelligence is relevant because it has an impact on the real world, not because it happens to be intelligent.

But anyway, it's relevant right now because people are suggesting LLMs are conscious, or that they have solved the problem of consciousness. An LLM is not conscious, or if it is, its consciousness is a strange one with little bearing on our own, and it does not solve the question of qualia (or perception).

If you're asking whether it matters that an AI is conscious when it's guiding a missile system to kill me - yeah, I'd say that's mostly an intellectual curiosity at that point.