Culture War Roundup for the week of December 16, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Not to be that person, but how exactly is that different from a brain? I mean the brain itself feels nothing, the sensations are interpreted from data from the nerves, the brain doesn’t experience pain. So do you have the qualia of pain, and if so, how is what’s happening between your body and your brain different from an LLM taking in data from any sort of input? If I program the thing to avoid a certain input from a peripheral, how is that different from pain?
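
To make that comparison concrete, here is a deliberately minimal sketch (in Python) of "programming the thing to avoid a certain input from a peripheral." The names read_sensor, back_away, and PAIN_THRESHOLD are made up for illustration and don't refer to any real hardware API:

```python
import random

# Hypothetical stand-ins for a peripheral and an avoidance behavior; nothing
# here refers to a real device or library.
PAIN_THRESHOLD = 0.8  # arbitrary cutoff for a "damaging" signal


def read_sensor() -> float:
    """Pretend peripheral reporting a normalized signal in [0, 1]."""
    return random.random()


def back_away() -> None:
    """Whatever avoidance behavior the system performs."""
    print("withdrawing from the stimulus")


def control_loop(steps: int = 10) -> None:
    for _ in range(steps):
        if read_sensor() > PAIN_THRESHOLD:
            # Behaviorally this is withdrawal from a noxious stimulus; whether
            # it amounts to pain is exactly the question being asked above.
            back_away()


control_loop()
```

Whether you call that conditional "pain" or just "a threshold check" is the whole dispute.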

I think this is the big question about these intelligent agents. We seem to be pretty certain that current models don’t have consciousness or experience qualia, but I’m not sure that will always be true, nor can I think of a foolproof way to tell the difference between an intelligent robot that senses a broken arm and seeks help, and a human child seeking help for a skinned knee. Or a human experience of embarrassment for a wrong answer and an LLM given negative feedback and avoiding that negative feedback in the future.
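
For the LLM half of that comparison, here is a toy sketch of the behavioral pattern "given negative feedback, avoid it in the future." This is not how LLMs are actually trained; it's a bandit-style illustration with made-up names (ACTIONS, scores, give_feedback):

```python
import random
from collections import defaultdict

ACTIONS = ["answer_a", "answer_b"]
scores = defaultdict(float)  # learned preference per action, all start at 0.0


def choose_action() -> str:
    """Pick the currently highest-scoring action, breaking ties at random."""
    best = max(scores[a] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if scores[a] == best])


def give_feedback(action: str, reward: float, lr: float = 0.5) -> None:
    """Negative reward lowers the action's score, so it is avoided next time."""
    scores[action] += lr * reward


first = choose_action()
give_feedback(first, reward=-1.0)  # the "embarrassing" answer gets penalized
second = choose_action()           # now the other answer is preferred
print(first, "->", second)
```

From the outside, the penalized model and the embarrassed child both stop giving that answer; the question is whether anything inside the model is analogous to the feeling.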

I think it’s fundamentally important to get this right, because humans come to care about the welfare of things that experience consciousness in ways that we don’t for mere objects. At higher levels we grant them rights. I don’t know what the consequences of treating a conscious being as an object would be, but the historical examples, at least, seem pretty negative.

how exactly is that different from a brain? I mean the brain itself feels nothing, the sensations are interpreted from data from the nerves, the brain doesn’t experience pain

I experience pain. The qualia are what I experience. To what degree the brain does or doesn't experience pain is probably open to discussion (preferably by someone smarter than me). Obviously, if you cut my head off and extract my brain, it will no longer experience pain. But on the other hand, if you measured its behavior during that process - assuming your executioner was at least somewhat incompetent, anyway - you would see the brain change in response to the stimuli. And again, a rattlesnake (or rather the headless body of one) seems to experience pain without being conscious. I presume there's nothing actually experiencing anything, in the sense that the rattlesnake's head is detached from the body that is reacting to the pain, but I also presume an analysis of the body would show firing neurons, just as would be the case with my brain if you fumbled lopping my head off.

(Really, I think the entire idea we have where the brain is sort of separate from the human body is wrong, the brain is part of a contiguous whole, but that's an aside.)

how is what’s happening between your body and your brain different from an LLM taking in data from any sort of input

Well, it's fundamentally different because the brain is not a computer: neurons are more complex than bits, and the brain interfaces not only with electrical signals via neurons but also with hormones, so the types of data it receives are fundamentally different in nature - plus probably lots of other stuff I don't know. Look at it this way: supposing we were intelligent LLMs, and an alien spacecraft manned by organic humans crashed on our planet. We wouldn't be able to look at the brain and go "ah, OK, this is an organic binary computer, the neurons are bits, here's the memory core." We'd need to invent neuroscience (which is still pretty unclear on how the brain works) from the ground up to understand how the brain worked.

Or, for another analogy, compare the SCR-720 with the AN/APG-85. Both are radars that provide the pilot with data derived from reflected radar pulses. But the SCR-720 doesn't use software and is mechanically scanned, while the APG-85 is an electronically scanned array that uses software to interpret the return and present the data to the pilot. If you were familiar with the APG-85 and someone asked you to reverse-engineer a radar, you'd want to crack open the computer to access the software. But if you started there on an SCR-720 you'd be barking up the wrong tree.

Or a human experience of embarrassment for a wrong answer and an LLM given negative feedback and avoiding that negative feedback in the future.

I mean - I deny that an LLM can flush. So while an LLM and a human may both convey messages indicating distress and embarrassment, the LLM simply cannot physically have the human experience of embarrassment. Nor does it have any sort of stress hormone. Now, we know that, for humans, emotional regulation is tied up with hormonal regulation. It seems unlikely that anything without e.g. adrenaline (or bones or muscles or mortality) can experience fear like ours, for instance. We know that if you destroy the amygdala in a human, it's possible to largely obliterate their ability to feel fear, and that blocking the amygdala's ability to bind stress hormones reduces stress. An LLM has no amygdala and no stress hormones.

Grant, for the sake of argument, a subjective experience to a computer - its experience is probably one that is fundamentally alien to us.

I think it’s fundamentally important to get this right, because humans come to care about the welfare of things that experience consciousness in ways that we don’t for mere objects. At higher levels we grant them rights. I don’t know what the consequences of treating a conscious being as an object would be, but the historical examples, at least, seem pretty negative.

"Treating like an object" is I guess open to interpretation, but I think that animals generally are conscious and humans, as I understand it, wouldn't really exist today in anything like our current form if we didn't eat copious amounts of animals. So I would suggest the historical examples are on net not only positive but necessary, if by "treating like an object" you mean "utilizing."

However, just as the computer analogy is dangerous when reasoning about the brain, I think it's probably also dangerous to analogize LLMs to critters. Humans and all animals were created by the hand of a perfect God and/or the long and rigorous tutelage of natural selection. LLMs are being created by man, and it seems quite likely that they'll care about [functionally] anything we want them to, or nothing, if we prefer it that way. So they'll be selected for different and possibly far sillier things, and their relationship to us will be very different from that of any creature we coexist with. Domesticated creatures (cows, dogs, sheep, etc.) might be the closest analogy.

Of course, you see people trying to breed back aurochs, too.