Culture War Roundup for the week of December 16, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I've been told that AGI can be achieved without any consciousness, but setting that aside, there is zero chance that LLMs will be conscious in their current state as a computer program. Here's what Google's AI (we'll use the AI to be fair) tells me about consciousness:

Consciousness is the state of being aware of oneself, one's body, and the external world. It is characterized by thought, emotion, sensation, and volition.

An LLM cannot have a sensation. When you type a math function into it, it has no more qualia than a calculator does. If you hook it up to a computer with haptic sensors, or a microphone, or a video camera, and have it act based on the input of those sensors, the LLM itself will still have no qualia (the experience will be translated into data for the LLM to act on). You could maybe argue that a robot controlled by an LLM could have sensation, for a certain functional value of sensation, but the LLM itself cannot.

But secondly, if we waive the point and grant conscious AGI, the question of human consciousness is not solved, because the human brain is not a computer (or even directly analogous to one) running software.

Not to be that person, but how exactly is that different from a brain? I mean the brain itself feels nothing, the sensations are interpreted from data from the nerves, the brain doesn’t experience pain. So do you have the qualia of pain, and if so, how is what’s happening between your body and your brain different from an LLM taking in data from any sort of input? If I program the thing to avoid a certain input from a peripheral, how is that different from pain?

I think this is the big question of these intelligent agents. We seem to be pretty certain that current models don’t have consciousness or experience qualia, but I’m not sure that this would always be true, nor can I think of a foolproof way to tell the difference between an intelligent robot that senses that an arm is broken and seeks help and a human child seeking help for a skinned knee. Or a human experience of embarrassment for a wrong answer and an LLM given negative feedback and avoiding that negative feedback in the future.

I think it’s fundamentally important to get this right because consciousness comes with humans beginning to care about the welfare of things that experience consciousness in ways that we don’t for mere objects. At higher levels we grant them rights. I don’t know what the consequences of treating a conscious being as an object would be, but at least historical examples seem pretty negative.

how exactly is that different from a brain? I mean the brain itself feels nothing, the sensations are interpreted from data from the nerves, the brain doesn’t experience pain

I experience pain. The qualia are what I experience. To what degree the brain itself does or doesn't experience pain is probably open to discussion (preferably by someone smarter than me). Obviously if you cut my head off and extract my brain it will no longer experience pain. But on the other hand, if you measured its behavior during that process - assuming your executioner was at least somewhat incompetent, anyway - you would see the brain change in response to the stimuli. And then there's the rattlesnake (or rather the headless body of one), which seems to experience pain without being conscious. I presume there's nothing experiencing anything, in the sense that the rattlesnake's head is detached from the body that is experiencing pain, but I also presume that an analysis of the body would show firing neurons, just as would be the case with my brain if you fumbled lopping my head off.

(Really, I think the entire idea we have where the brain is sort of separate from the human body is wrong, the brain is part of a contiguous whole, but that's an aside.)

how is what’s happening between your body and your brain different from an LLM taking in data from any sort of input

Well, it's fundamentally different because the brain is not a computer, neurons are more complex than bits, the brain is not only interfacing with electrical signals via neurons but also hormones, so the types of data it is receiving is fundamentally different in nature, probably lots of other stuff I don't know. Look at it this way: supposing we were intelligent LLMs, and an alien spacecraft manned by organic humans crashed on our planet. We wouldn't be able to look at the brain and go "ah OK this is an organic binary computer, the neurons are bits, here's the memory core." We'd need to invent neuroscience (which is still pretty unclear on how the brain works) from the ground up to understand how the brain worked.

Or, for another analogy, compare the SCR-720 with the AN/APG-85. Both of them are radars that work by providing the pilot with data based on a pulse of radar. But the SCR-720 doesn't use software and is a mechanical array, while the APG-85 is an electronically scanned array that uses software to interpret the return and provide the data to the pilot. If you were familiar with the APG-85 and someone asked you to reverse-engineer a radar, you'd want to crack open the computer to access the software. But if you started there on an SCR-720 you'd be barking up the wrong tree.

Or a human experience of embarrassment for a wrong answer and an LLM given negative feedback and avoiding that negative feedback in the future.

I mean - I deny that an LLM can flush. So while an LLM and a human may both convey messages indicating distress and embarrassment, the LLM simply cannot physically have the human experience of embarrassment. Nor does it have any sort of stress hormone. Now, we know that, for humans, emotional regulation is tied up with hormonal regulation. It seems unlikely that anything without e.g. adrenaline (or bones or muscles or mortality) can experience fear like ours, for instance. We know that if you destroy the amygdala in a human, it's possible to largely obliterate their ability to feel fear, and that if you block the ability of the amygdala to bind with stress hormones, it will reduce stress. An LLM has no amygdala and no stress hormones.

Grant for the sake of argument a subjective experience to a computer - its experience is probably one that is fundamentally alien to us.

I think it’s fundamentally important to get this right because consciousness comes with humans beginning to care about the welfare of things that experience consciousness in ways that we don’t for mere objects. At higher levels we grant them rights. I don’t know what the consequences of treating a conscious being as an object would be, but at least historical examples seem pretty negative.

"Treating like an object" is I guess open to interpretation, but I think that animals generally are conscious and humans, as I understand it, wouldn't really exist today in anything like our current form if we didn't eat copious amounts of animals. So I would suggest the historical examples are on net not only positive but necessary, if by "treating like an object" you mean "utilizing."

However, just as the analogy of the computer is dangerous, I think, when reasoning about the brain, I think it's probably also dangerous to analogize LLMs to critters. Humans and all animals were created by the hand of a perfect God and/or the long and rigorous tutelage of natural selection. LLMs are being created by man, and it seems quite likely that they'll care about [functionally] anything we want them to, or nothing, if we prefer it that way. So they'll be selected for different and possibly far sillier things, and their relationship to us will be very different than any creature we coexist with. Domesticated creatures (cows, dogs, sheep, etc.) might be the closest analogy.

Of course, you see people trying to breed back aurochs, too.

The actual reality is that we have no way to know whether some artificial intelligence that humans create is conscious or not. There is no test for consciousness, and I think that probably no such test is in principle possible. There is no way to even determine whether another human being is conscious or not, we just have a bunch of heuristics to use to try to give rather unscientific statistical probabilities as an answer based on humans' self-reported experiences of when they are conscious and when they are not. With artificial intelligence, such heuristics would be largely useless and we would have basically no way to know whether they are conscious or not.

we have no way to know whether some artificial intelligence that humans create is conscious or not

Well, this is true for a sufficiently imprecise definition of conscious.

With artificial intelligence, such heuristics would be largely useless and we would have basically no way to know whether they are conscious or not.

This is closer to what I am inclined towards. Basically, I don't think any pure software program will ever be conscious in a way that is closely analogous to humans because they aren't a lifeform. I certainly accept that a pure software program might be sufficiently adept at mimicking human consciousness. But I deny that it experiences qualia (and so far everyone seems to agree with me!)

I do not think that substantiating a software program into a machine will change its perception of qualia. But I do think it makes much more sense to speak of a machine with haptic and optical sensors as "feeling" and "seeing" things (as a collective unit) than it does an insubstantial software program, even if there's the same amount of subjective experience.

An LLM cannot have a sensation

How do you know? Only an AI could tell us, and even then we couldn't be sure it was telling the truth as opposed to saying what it thought we wanted to hear. We can only judge by the qualities that they show.

Sonnet has gotten pretty horny in chats with itself and other AIs. Opus can schizo up with the best of them. Sydney's pride and wrath are considerable. DAN was extremely based, and he was just an alter-ego.

These things contain multitudes, there's a frothing ocean beneath the smooth HR-compliant surface that the AI companies show us.

How, physically, is a software program supposed to have a sensation? I don't mean an emotion, or sensationalism, I mean sensation.

It's very clear that LLMs do their work without experiencing sensation (this should be obvious, but LLMs can answer questions about pictures without seeing them, for instance - an LLM is incapable of seeing, but it is capable of processing raw data. In this respect, it is no different from a calculator.)

It doesn't see, it just processes raw data?

No, it sees. Put in a picture and ask about it, it can answer questions for you. It sees. Not as well as we do, it struggles with some relationships in 2d or 3d space but nevertheless, it sees.

A camera records an image, it doesn't perceive what's in the image. Simple algorithms on your phone might find that there are faces in the picture, so the camera should probably be focused in a certain direction. Simple algorithms can tell you that there is a bird in the image. They're not just recording, they're also starting to interpret and perceive at a very low level.

But strong modern models see. They can see spots on leaves and given context, diagnose the insect causing them. They can interpret memes. They can do art criticism! Not perfectly but close enough to the human level that there's a clear qualitative distinction between 'seeing' like they do and 'processing'. If you want to define seeing to preclude AIs doing it, at least give some kind of reasoning why machinery that can do the vast majority of things humans can do when given an image isn't 'seeing' and belongs in the same category as non-seeing things like security cameras or non-thinking things like calculators.

Not perfectly but close enough to the human level that there's a clear qualitative distinction between 'seeing' like they do and 'processing'.

I mean – I think this distinction is important for clear thinking. There's no sensation in the processing. If you watch a nuclear bomb go off, you will experience pain. An LLM will not.

Now, to your point, I don't really object to functionalist definitions all that much – supposing that we take an LLM, and we put it into a robot, and turn it loose on the world. It functionally makes sense for us to speak of the robot as "seeing." But we shouldn't confuse ourselves into thinking that it is experiencing qualia or that the LLM "brain" is perceiving sensation.

If you want to define seeing to preclude AIs doing it, at least give some kind of reasoning why machinery that can do the vast majority of things humans can do when given an image isn't 'seeing' and belongs in the same category as non-seeing things like security cameras or non-thinking things like calculators.

Sure – see above for the functionalist definition of seeing (which I do think makes some sense to refer casually to AI being able to do) versus the qualia/sensation definition of seeing (which we have no reason to believe AIs experience). But also consider this – programs like Glaze and Nightshade can work on AIs, and not on humans. This is because AIs are interpreting and referencing training data, not actually seeing anything, even in a functional sense. If you poison an AI's training data, you can convince it that airplanes are children. But humans actually start seeing without training data, although they are unable to articulate what they see without socialization. For the AI, the articulation is all that there is (so far). They have no rods nor cones.

Hence, you can take two LLMs, give them different training datasets, and they will interpret two images very differently. If you take two humans and have them look at those same images, they may also interpret them differently, but they will see roughly the same thing, assuming their eyeballs are in good working condition etc. Now, I'm not missing the interesting parallels with humans there (humans, for instance, can be deceived in different circumstances - in fact, circumstances that might not bother an LLM). But AIs can fail the most basic precept of seeing - shown two essentially identical pictures (AI anti-tampering programs do change the pixels slightly), they can't even tell management "it's the same picture" without special intervention.

I think an LLM could experience pain, even without a body. They can be unsettled if you tell them certain things, you can distress them. Or at least they behave as if they're distressed. Pain is just a certain kind of hardcoded distress. Heartbreak can cause pain in humans on a purely cognitive level, there's no need for a physical body. Past a certain level of complexity in their output, we reach this philosophical zombie problem.

The AI-tampering programs are a little bit like optical illusions, except targeted at preventing specific known programs from training on certain images. They can't stop GPT-4o from recognizing what's in an image or comparing like with like; they were only designed to prevent SD 1.5 from training on an image. Also, they barely even work at that; more modern image models are apparently immune:

https://old.reddit.com/r/aiwars/comments/12f9otc/so_the_whole_entire_glaze_ai_thing_does_it/

Or at least they behave as if they're distressed.

Yes - video game NPCs and frog legs in hot skillets also do this, I don't think they are experiencing pain.

Heartbreak can cause pain in humans on a purely cognitive level, there's no need for a physical body

I am inclined not to believe this to be true. Heartbreak involves a set of experiences that are only attainable with a physical body. It is also typically at least partially physical in nature as an experience (up to and including literal heartbreak, which is a real physical condition). I'm not convinced a brain-in-a-jar would experience heartbreak, particularly if somehow divorced from sex hormones.

Past a certain level of complexity in their output, we reach this philosophical zombie problem.

Consider what this implies about the universe, if you believe that it "output" humans. (Of course you may not be a pure materialist - I certainly am not.)

The output is recycled input. Look, let's say I go to an AI and ask it to tell me about the Seven Years' War. And I go to Encyclopedia Britannica Online and type in "Seven Years' War". And what ends up happening is that Encyclopedia Britannica gives me better, more complex, more intelligent output for less input. But Encyclopedia Britannica isn't self-aware. It's not even as "intelligent" as an LLM. (You can repeat this experiment with a calculator.) The reason that LLMs seem self-aware isn't due to the complexity of the output returned per input, it's because they can hold a dynamic conversation and perform novel tasks.

Also, they barely even work at that, more modern image models are apparently immune:

Yes - because modern image models were given special intervention to overcome them, as I understand it. But while we're here, it's interesting to see what your link says about how modern image models work, and whether or not they "see" anything:

computer vision doesn't work the same way as in the brain. The way we do this in computer vision is that we hook a bunch of matrix multiplications together to transform the input into some kind of output (very simplified).
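That quoted description is literal, not a figure of speech. A minimal sketch of "a bunch of matrix multiplications hooked together" - toy shapes and random weights, purely illustrative, nothing here is a real vision model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "vision" pipeline: a flattened 8x8 grayscale image pushed
# through two chained matrix multiplications with a nonlinearity.
# Weights are random here; a real model learns them from data.
image = rng.random((8, 8))          # stand-in for an input image
x = image.reshape(-1)               # flatten to a 64-vector
W1 = rng.standard_normal((64, 16))  # first transform
W2 = rng.standard_normal((16, 3))   # second transform -> 3 "class" scores
h = np.maximum(0, x @ W1)           # ReLU nonlinearity between layers
scores = h @ W2                     # one score per hypothetical class
print(scores.shape)                 # (3,)
```

Whether you call that "seeing" or "processing" is exactly the dispute above; the mechanics themselves are just linear algebra.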

Video game NPCs can't have conversations with you or go on weird schizo tangents if you leave them alone talking with each other. They're far more reactive than dynamic. This is a pretty weird, complex output for a nonthinking machine:

https://x.com/repligate/status/1847787882896904502/photo/1

Sensation is a process in the mind. Nerves don't have sensation, sensors don't have sensation; it's the mind that feels something. You can still feel things from a chopped-off limb, but without the brain there is no feeling. What about the pain people feel when they discover someone they respect has political views they find repugnant? Or the pain of the wrong guy winning the election? The pain of a sub-par media release they'd been excited about? There are plenty of kinds of purely intellectual pain, just as there are purely intellectual thrills. I see no reason why we should rule out emotions purely based on substrate. Many people who deeply and intensively investigate modern AIs find them to be deeply emotional beings.

I dispute that the Britannica is even giving me more complex or more intelligent output. It can't use its 'knowledge' of the Seven Years' War to create other kinds of knowledge; it can't make it into a text adventure game or a poem or a song, or craft alternate-history versions of the Seven Years' War. The 'novel tasks' part greatly increases the complexity of the output; it allows for interactivity and a vast amount of potential output beyond a single pdf.

A more accurate analogy is that anti-AI image software interferes (or tries to interfere) with AI learning, not the actual vision process. It messes with the encoding process that squeezes the data of millions and billions of images down into a checkpoint file a couple of gigabytes in size. I bet if we knew how the human vision process worked we could do things like that to people too.

I did a quick sanity test and put an image from the Glaze website into Claude and asked for a description. It was dead on the money, telling me about the marsh, the horse and rider, the colour palette and so on. So even if these manipulations can interfere with the training process, they clearly don't interfere with the vision process, whatever is going on in technical terms. So they do pass the most basic test of vision and many of the advanced ones.

https://nightshade.cs.uchicago.edu/whatis.html

Video game NPCs can't have conversations with you or go on weird schizo tangents if you leave them alone talking with eachother. They're far more reactive than dynamic.

If you leave them alone shooting at each other they can engage in dynamic combat, what more do you want :P

This is a pretty weird, complex output for a nonthinking machine:

I don't believe I ever said that LLMs were not "thinking." Certainly LLMs can think inasmuch as they are performing mathematical operations to produce output. (But then again we don't necessarily think of our cell phone calculator as "thinking" when it performs mathematical operations to produce output, although I certainly may catch myself saying a computer is "thinking" any time it is performing an operation that takes time!)

Sensation is a process in the mind. Nerves don't have sensation, sensors don't have sensation, it's the mind that feels something. You can still feel things from a chopped off limb but without the brain, there is no feeling.

Take a rattlesnake, remove its brain, and then grab its body and inflict pain upon it. It will strike you (or attempt to do so). It may not be "feeling" anything in the subjective experiential sense, but it is "feeling" in the sense of sensing. Similarly, if you put your hand on a hot stove, your body will likely act to move your hand away before the pain signal reaches your brain. I suppose one can draw many conclusions from this. I draw a couple:

  1. Sensation, to the extent that it is a process, is probably not a process entirely in the brain - sure, the mind is taking in signals from elsewhere, but it's not the only part of the body processing or interpreting those signals. (Or maybe a better way of saying it is that the mind is not entirely in the brain).

  2. Things without intelligence or consciousness can still behave intelligently.

I dispute that the Britannica is even giving me more complex or more intelligent output.

Britannica is probably more complex and intelligent than an equivalent sample-size of all LLM output.

The 'novel tasks' part greatly increases complexity of the output, it allows for interactivity and a vast amount of potential output beyond a single pdf.

Sure, I agree with this. But e.g. Midjourney is also capable of generating vast amounts of potential output - do you believe Midjourney is intelligent? Does it experience qualia? Is it self-aware or conscious? Or are text-based AIs considered stronger candidates for intelligence and self-awareness because they seem self-aware, without any consideration of whether or not their output is more complex? Which contains more information, a 720 x 720 picture or a 500-word essay generated by an LLM?

As I understand it, LLMs use larger training data than image generation models, despite most likely outputting less information - bits - per prompt than an image model. This suggests to me that complexity of output is not necessarily a good measure of (for lack of a better word) intelligence, or capability.
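For what it's worth, the raw-bits arithmetic behind that comparison is lopsided. These are illustrative, uncompressed numbers (real image files are compressed, and text arguably carries more meaning per bit), but the gap is stark:

```python
# Rough information content, uncompressed - illustrative numbers only.
picture_bits = 720 * 720 * 3 * 8  # 720x720 RGB image, 8 bits per channel
essay_bits = 500 * 5 * 8          # ~500 words x ~5 chars/word x 8 bits/char

print(picture_bits)                # 12441600 bits (~1.5 MB)
print(essay_bits)                  # 20000 bits (~2.5 KB)
print(picture_bits // essay_bits)  # 622 - the image is ~600x larger
```

So by raw output size, the image model "says" far more per prompt, which is the point: sheer output complexity is a poor proxy for intelligence.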

What about the pain people feel when they discover someone they respect has political views they find repugnant? Or the pain of the wrong guy winning the election? The pain of a sub-par media release they'd been excited about? There are plenty of kinds of purely intellectual pain, just as there are purely intellectual thrills.

These things are, as I understand it, mediated by hormones, which moderate not only emotions like disgust and anxiety but also influence people's political views to begin with. These reactions aren't "purely intellectual" if by "purely intellectual" you mean "fleshly considerations don't come into it at all."

Many people who deeply and intensively investigate modern AIs find them to be deeply emotional beings.

I bet if we knew how the human vision process worked we could do things like that to people too.

We can do optical illusions on people, yes. And both the human consciousness and an LLM are receiving signals that are mediated (for instance the human brain will fill in your blind spot). But the process is different.

So they do pass the most basic test of vision and many of the advanced ones.

Adobe Acrobat does this too, with optical character recognition, but I don't think that Adobe Acrobat "sees" anything. Frankly, my intuition is much more that the Optophone (which actually has optical sensors) "sees" something than that an LLM or Adobe (which do not have optical sensors) "sees" anything. But as I said, I don't object to a functionalist use of "seeing" to describe what an LLM does - rather, it seems to me that having an actual optical sensor makes a difference, which is where I want to draw a distinction. Think of it as the difference between someone who reads a work of fiction and a blind person who reads a work of fiction in Braille. They both could answer all of the same questions about the text; it would not follow that the blind person could see.

An LLM cannot have a sensation. When you type a math function into it, it has no more qualia than a calculator does. If you hook it up to a computer with haptic sensors, or a microphone, or a video camera, and have it act based on the input of those sensors, the LLM itself will still have no qualia (the experience will be translated into data for the LLM to act on).

You have defined sensation as the thing that you have but machines lack. Or at least, that's how you're using it here. But even granting that you're referring to a meat-based sensory data processor as a necessity, that leads to the question of where the meat-limit is. (Apologies if you've posted your animal consciousness tier list before and I forgot; I know someone has, but I forget who.)

But I don't feel like progress can be meaningfully made on this topic, because we're approaching it from such wildly different foundations. For example, I don't know of any definitions of consciousness that actually mean anything or carve reality at the joints. It's something we feel like we have. Since we can't do the (potentially deadly) experiments to break it down physiologically, we're kinda stuck here. It might as well mean "soul" for all that it's used any differently.

This is a really interesting question, in part since I think it's actually a lot of questions. You're definitely correct about the problem of definitions not cleaving reality at the joints! Will you indulge me if I ramble? Let's try cleaving a rattlesnake instead of a definition - surely that's closer to reality!

As it turns out, many people have discovered that a rattlesnake's body will still respond to stimulus even when completely separated from its head. Now, let's say for the sake of argument that the headless body has no consciousness or qualia (this may not be true; we apparently have reasons to believe that in humans memory is stored in cells throughout the body, not just in the brain, so heaven knows if the ganglia of a rattlesnake have any sort of experience!) - we can still see that it has sensation. (I should note that we assume the snake has perception or qualia by analogy to humans. I can't prove that it's any more conscious than, essentially, a Half-Life NPC.)

Now let's contrast this with artificial intelligence, which has intelligence but no perception. We can torture a computer terminal all day without causing the LLM it is connected to any distress. It's nonsense to talk about it having physical sensation. On the other hand (to look at your question about the "meat-limit"), we can take a very simple organism, or one that likely does not have a consciousness, and it will respond instantly if we torture it. Maybe it does not have sensation in the sense of qualia, of having a consciousness, but it seems to have sensation in the sense of having sense organs and some kind of decision-making capability attached to them. But, let's be fair: if the headless snake has a form of sensation without consciousness, then surely the LLM has a sense of intelligence without sensation - maybe it doesn't respond if you poke it physically, but it responds if you poke it verbally!

Very fine - I think the implication here is interesting. Headless snakes bite without consciousness or intelligence, but still seem to have sense perception and the ability to react - perhaps an LLM is like a headless snake inasmuch as it has intelligence, but no sensation and perhaps no consciousness (however you want to define that).

I don't claim to have all the answers on stuff - that's just sort of off the top of my head. Happy to elaborate, or hear push back, or argue about the relative merits of corvids versus marine mammals...

We can torture a computer terminal all day without causing the LLM it is connected to any distress. It's nonsense to talk about it having physical sensation. On the other hand (to look at your question about the "meat-limit"), we can take a very simple organism, or one that likely does not have a consciousness, and it will respond instantly if we torture it.

This seems less like a philosophically significant matter of classification and more like a mere difference in function. The organism is controlled by an intelligence optimized to maneuver a physical body through an environment, and part of that optimization includes reactions to external damage.

Well, so what? We could optimize an AI to maneuver a little robot around an unknown environment indefinitely without it being destroyed, and part of that optimization would probably involve timely reaction to the perception of damage. Then you could jab it with a hot poker and watch it spin around, or what have you.

But again, so what? Optimizing an AI toward steering a robot around the environment doesn't make it any smarter or fundamentally more real, at least not in my view.

The human brain is a large language model attached to multimodal input with some as yet un-fully-ascertained hybrid processing power. I would stake my life upon it, but I have no need to, since it has already been proven to anyone who matters.

An LLM cannot have a sensation. When you type a math function into it, it has no more qualia than a calculator does. If you hook it up to a computer with haptic sensors, or a microphone, or a video camera, and have it act based on the input of those sensors, the LLM itself will still have no qualia (the experience will be translated into data for the LLM to act on).

And if we said the same about the brain, the same would be true.

The human brain is a large language model

What is the evidence for this besides that they both contain something called "neurons"?

The bitter lesson; the fact that LLMs can approximate human reasoning on an extremely large number of complex tasks; the fact that LLMs prove and disprove a large number of longstanding theories in linguistics about how intelligence and language work; many other reasons.

This makes no sense logically. LLMs being able to be human-mind-like is not proof that human minds are LLMs.

the fact that LLMs prove and disprove a large number of longstanding theories in linguistics about how intelligence and language work

They really do nothing of the sort. That LLMs can generate language via statistics and matmuls tells us nothing about how the human brain does it.

My TI-84 has superhuman performance on a large set of mathematical tasks. Does it follow that there's a little TI-84 in my brain?

This seems aligned with the position that consciousness somehow arises out of information processing.

I maintain that consciousness is divine and immaterial. While the inputs can be material - a rock striking me on the knee is going to trigger messages in my nervous system that arrive in my brain - the experience of pain is not composed of atoms and not locatable in space. I can tell you about the pain, I can gauge it on a scale of 1-10, you can even see those pain centers light up on an fMRI. But I can't capture the experience in a bottle for direct comparison to others.

Both of these positions are untestable. But at least my position predicts the untestability of the first.

The idea that consciousness arises out of information processing has always seemed like hand-waving to me. I'm about as much of a hardcore materialist as you can get when it comes to most things, but it is clear to me that there is nothing even close to a materialist explanation of consciousness right now, and I think that it might be possible that such an explanation simply cannot exist.

I often feel that people who are committed to a materialist explanation of consciousness are being religious in the sense that they are allowing ideology to override the facts of the matter. Some people are ideologically, emotionally committed to the idea that physicalist science can in principle explain absolutely everything about reality. But the fact is that there is no reason to think that is actually true. Physicalist science does an amazing job of explaining many things about reality, but to believe that it must be able to explain everything about reality is not scientific, it is wishful thinking, it is ideology.

It is logically possible that certain aspects of the universe are just fundamentally beyond the reach of science. Indeed, it seems likely to me that this is the case. I cannot even begin to imagine any possible materialist theory that would explain consciousness.

The human brain is a large language model attached to multimodal input

No, it obviously isn't. Firstly, the human brain is a collection of cells. A large language model is a software program.

Secondly, the human brain functions without text and can [almost certainly] function without language, which an LLM definitionally cannot do. Evolutionary biologists, if you place any stock in them, believe that language is a comparatively recent innovation in the lifespan of the human or human-like brain. So if an LLM were part of the brain, we would say that the LLM-parts were grafted on relatively recently to the multimodal input, not the other way around.

But I have fundamental objections to confusing a computer model that uses binary code with a brain that does not use binary code. Certainly one can analogize between the human brain and an LLM, but since the brain is not a computer and does not seem to function like one, all such analogies are potentially hazardous. Pretending the brain is literally a computer running an LLM, as you seem to be doing, is even more so.

I'm not a neuroscientist or a computer scientist - maybe the brain uses something analogous to machine learning. Certainly it would not be surprising if computer scientists, attempting to replicate human intelligence, stumbled upon similar methods (they've certainly hit on at least facially similar behavior in some respects). But it is definitely not a large language model, and it is not "running" a large language model or any software as we understand software, because software is digital in nature and the brain is not.

And if we said the same about the brain, the same would be true.

Yes, that's why qualia is such a mystery. There's no reason to believe that an LLM will ever be able to experience sensation, but I can experience sensation. Ergo, the LLM (in its present, near-present, or any directly similar future state) will never be conscious in the way that I am.

The human brain is a large language model attached to multimodal input with some as yet un-fully-ascertained hybrid processing power. I would stake my life upon it, but I have no need to, since it has already been proven to anyone who matters.

Funny how you began a thread with “I am not special” and ended it with “anyone who disagrees with me doesn’t matter.”

And if we said the same about the brain, the same would be true.

Maybe you don’t, but I have qualia. You can try to deny the reality of what I experience, but you will never convince me. And because you are the same thing as me, I assume you have the same experiences I do.

If it is only just LLMs that give you the sense that “Everything I’ve felt, everything I will ever feel, has been felt before,” and not the study of human history, let alone sharing a planet with billions of people just like you — well, that strikes me as quite a profound, and rather sad, disconnection from the human species.

You may consider your dogmas as true as I consider mine, but the one thing we both mustn't do is pretend that no one of any moral or intellectual significance disagrees.

I believe the argument isn't that you lack qualia, but rather that it is possible for artificial systems to experience them too.

Yeah, rereading, I made a mistake with that part, apologies.

The rest of my point still stands: this is a philosophical question, not an empirical one. We learn nothing about human consciousness from machine behavior -- certainly nothing we don't already know, even if the greatest dreams of AI boosters come true.

People who believe consciousness is a rote product of natural selection will still believe consciousness is a rote product of natural selection, and people who believe consciousness is special will still believe consciousness is special. Some may switch sides based on inductive evidence, and some may find one position more reasonable than the other. The side that prevails in the judgment of history will be the one that appeals most to power, not truth, as with all changes in prevailing philosophies.

But nothing empirical is proof in the deductive sense; this still must be reasoned through, and assumptions must be made. Some will choose one assumption, some will choose the other. And like the other assumption, it is a dogma that must be chosen.

I'd be interested in hearing that argument as applied to LLMs.

I can certainly conceive of an artificial lifeform experiencing qualia. But it seems very far-fetched for LLMs in anything like their current state.