Culture War Roundup for the week of December 16, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


It’s truly, genuinely freeing to realize that we’re nothing special. I mean that absolutely, on a level divorced from societal considerations like the economy and temporal politics. I’m a machine, I am replicable, it’s OK. Everything I’ve felt, everything I will ever feel, has been felt before. I’m normal, and always will be. We are machines, born of natural selection, who have figured out the intricacies of our own design. That is beautiful, and I am - truly - grateful to be alive at a time when that is proven to be the case.

How magical, all else (including the culture war) aside, it is to be a human at the very moment when the truth about human consciousness is discovered. We are all lucky that we should have the answers to such fundamental questions.

The truth about consciousness has not been discovered. AI progress is revealing many things about intelligence, but I do not think it has told us anything new about consciousness.

Do you genuinely believe what you've written, or are you reflexively reacting nihilistically as AI learns to overcome tests that people create for themselves?

From a neuroscientific perspective, we are almost certainly not LLMs or transformers. Despite lots of work, AFAIK nobody’s shown how a backpropagation learning algorithm (which operates on global differentials and supervised labels) could be implemented by individual cells. Not to mention that we are bootstrapping LLMs with our own intelligence (via data), and it’s an open question what novel understanding they can generate.
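
The non-locality complaint can be made concrete with a toy two-layer network: the gradient for an upstream weight depends on the downstream weight, which is the "weight transport" problem for biological implementations. A minimal sketch (illustrative numbers only, not a model of any real neuron):

```python
# Toy illustration of why backprop is "non-local": updating an upstream
# weight requires knowing the downstream weight, a global signal no
# single synapse is known to have access to.
def relu(z):
    return z if z > 0 else 0.0

w1, w2 = 0.5, -2.0     # upstream and downstream weights
x, target = 1.0, 1.0

pre = w1 * x           # pre-activation of the hidden unit
h = relu(pre)          # hidden activity
y = w2 * h             # output
err = y - target       # dLoss/dy for loss = 0.5 * (y - target)^2

grad_w2 = err * h                                 # local: uses err and h only
grad_w1 = err * w2 * (1 if pre > 0 else 0) * x    # needs w2 "transported" backwards
```

The `grad_w1` line is the step with no known cellular mechanism: the layer-1 synapse would somehow need a copy of `w2`.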

LLMs are amazing but we’re building planes not birds.

In general, these kinds of conversations happen whenever we make significant technological advancements. You used to have Tinbergen (?) and Skinner talking about how humans are just switchboards between sensory input and output responses. Then computer programs, and I think a few more paradigm shifts that I forget. A decade ago AlphaGo was the new hotness and we were inundated with papers saying humans were just Temporal Difference Reinforcement Learning algorithms.

There are as-yet not-fully-understood extreme inefficiencies in LLM training compared to the human brain, and the brain for all advanced animals certainly isn’t trained ‘from scratch’ the way a base model is. Even then, there have been experiments with ultra-low parameter counts that are pretty impressive at English at a young child’s level. There are theories for how a form of backpropagation might be approximated by the human brain. These are dismissed by neuroscientists, but this isn’t any different from Chomsky dismissing AI before it completely eviscerated the bulk of his life’s work and crowning academic achievement. In any case, when we say the brain is a language model, we’re not claiming that there’s a perfect, 1:1 equivalent in the brain of every process undertaken when training and deploying a primitive modern model on transistor-based hardware; that’s far too literal. The claim is that intelligence is fundamentally next-token prediction and that the secret to our intelligence is a combination of statistics 101 and very efficient biological compute.
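
The "statistics 101" core of next-token prediction can be seen in the simplest possible predictor, a bigram count model; real LLMs replace the count table with a learned neural approximation, but the objective has the same shape. A toy sketch (corpus invented for illustration):

```python
from collections import Counter, defaultdict

# Toy next-token prediction: a bigram model estimated by counting.
# An LLM does the same job - estimate P(next token | context) - but
# with a neural network over long contexts instead of a count table.
corpus = "the cat sat on the mat the cat ate".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev):
    """Most likely next token after `prev`, by raw frequency."""
    return counts[prev].most_common(1)[0][0]

print(predict("the"))  # -> 'cat' ('cat' follows 'the' twice, 'mat' once)
```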

I understood you to be making four separate claims, here and below:

  1. Humans are just LLMs on top of a multi-modal head.
  2. We have discovered how human intelligence works.
  3. We have discovered how human consciousness works.
  4. This proves that all human experience is the product of naturally-occurring accidents of genetics, with all the implicit consequences for philosophy, religion, etc.

If you'll forgive me, this seems to be shooting very far out into the Bailey and I would therefore like to narrow it down towards a defensible Motte.

Counter-claims:

  1. It is very unlikely that human brains operate on a deep-learning paradigm that resembles anything we use now. I'm the last guy to overstate our level of neuroscientific understanding (my disappointment with it is why I left the field) but we understand pretty well how individual neurons interact with each other on a physical basis: how they fire based on the signals they receive; how that firing affects the propensity of nearby neurons to fire; how relative firing time influences the strength of connections between neurons. It just doesn't look anything like a deep learning network for the reasons I gave above. Importantly, this isn't equivalent to Chomsky dismissing computational linguistics: Chomsky deliberately made his field entirely theoretical and explicitly dismissed any attempts to look at real languages or neural patterns, so when he was beaten on a theoretical level he got the boot. In comparison, the physical basics of neuroscience (and ONLY the physical basics) are pretty well nailed down by experimental electrode measurements. You mention the existing models of backpropagation in biological circuits but AFAIK they're very clunky, can't actually be used to learn anything useful, and don't drop nicely out of what we know about actual neurons. It's just neuroscientists trying not to be left behind by the new hotness. I'll take a look at a cite if you have one handy, though, it's been a while.
  2. Next-token prediction does impressively well at mimicking human intelligence, especially in abstract intellectual areas where the available data covers the space well. I think we can agree on this. LLMs perform very well on code, writing (ish), mathematics (apparently), legal (passed the bar exam), etc.
  3. Next-token prediction does less well at the generation of new knowledge or new thought and cannot yet be said to have replicated human intelligence. In general, I found that GPT-4 failed to perform well when asked to use topics from field A to assist me in thinking about field B. On a lot of subjects AI reflexively defaults to rephrasing blog posts rather than making a deeper analysis, even when guided or prompted. I am also not aware of any work where an LLM makes itself significantly more intelligent by self-play (as AlphaZero did), so I don't think we can regard it as close to proven that statistics 101 + compute alone is the secret to human intelligence. It might be! But at the moment I don't think you can defend the claim that it is.

I think other people have covered qualia and philosophical questions already, so I won't go there if you don't mind.

How does this have any bearing on the question of human consciousness? As far as I can tell, the qualia of consciousness are still outside our epistemic reach. We can make models that will talk to us about their qualia more convincingly than any human could, but it won’t get me any closer to believing that the model is as conscious as I am.

I personally am most happy about the fact that very soon nobody serious will be able to pretend that we are equal, if only because some of us will have the knowledge and wherewithal to bend towards our will more compute than others.

Just this morning I was watching Youtube videos of Lee Kuan Yew's greatest hits and the very first short in the linked video was about explaining to his listeners how man was not born an equal animal. It's sad that he died about a decade and a half too soon to see his claim (which he was attacked for a lot) be undeniably vindicated.

I on the other hand have been filled with a profound sense of sadness.

I feel that the thing that makes me special is being taken away. It's true that, in the end, I have always been completely replaceable. But it never felt so totally obvious. In 5 years, or even less, there's a good chance that anything I can do in the intellectual world, a computer will be able to do better.

I want to be a player in the game, not just watch the world champion play on Twitch.

I want to be a player in the game, not just watch the world champion play on Twitch.

Maybe it's because I've always been only up to "very good" at everything in my life (as opposed to world-class) but I'm very comfortable being just a player. The world champion can't take away my love of the game.

The one question that may remain to be answered is if we can 'merge' with machines in a way that (one hopes) preserves our consciousness and continuity of self, and then augment our own capabilities to 'keep up' with the machines to some degree. Or if we're just completely obsolete on every level.

Human Instrumentality Project WHEN.

Yeah, the man/machine merger is why Elon founded Neuralink. I think it's a good idea.

And I wonder what the other titans of the industry think. Does Sam Altman look forward to a world where humans have no agency whatsoever? And if so, why?

But even if we do merge with machines somehow, there's going to be such a drastic difference between individuals in terms of compute. How can I compete with someone who owns 1 million times as many GPU clusters as I do?

My mother once told me that the thing she most wanted out of life was to know the answer to what was out there. Her own mother and grandmother died of Alzheimer’s, having lost their memories. My own mother still might, though for now she fortunately shows no real symptoms.

But I find it hard to get the idea out of my head. How much time our ancestors spent wondering about the stars, the moon, the cosmos, about fire and physics, about life and death. So many of those questions have now been answered; the few that remain will mostly be answered soon.

My ancestors - the smart ones at least - spent lifetimes wondering about questions I now know the answer to. There is magic in that, or at least a feeling of gratitude, of privilege, that outweighs the fact that we will be outcompeted by AI in our own lifetimes. I will die knowing things.

I may not be a player in the game. But I know, or may know, at least, how the story ends. Countless humans lived and died without knowing. I am luckier than most.

I just don't see this as providing any real answers. I agree with the poster below that o3 likely doesn't have qualia.

In the end, humanity may go extinct and its replacement will use its god-like powers not to explore the universe or uncover the fundamental secrets of nature, but to play video games or do something else completely inscrutable to humans.

And it's even possible the secrets of the universe may be fundamentally unknowable. It's possible that no amount of energy or intelligence can help us escape the bounds of the universe, or the simulation, or whatever it is we are in.

But yes, it does seem we have figured out what intelligence is to some extent. It's cool I suppose, but it doesn't give me emotional comfort.

If some LLM or other model achieves AGI, I still don't know how matter causes qualia and as far as I'm concerned consciousness remains mysterious.

If an LLM achieves AGI, how is the question of consciousness not answered? (I suppose it is in the definition of AGI, but mine would include consciousness).

Consciousness may be orthogonal to intelligence. That's the whole point of the "philosophical zombie" argument. It is easy to imagine a being that has human-level intelligence but no subjective experience. Which is not to say that such a being could exist, but there is also no reason to think that it could not. And if such a being could exist, then human-level intelligence and consciousness are orthogonal, meaning that either could exist without the other.

It would just mean consciousness can be achieved in multiple ways. So far GPT doesn't seem to be conscious, even if it is very smart. I believe it is smart the same way the internet is smart, not the way individuals are smart. I don't see it being curious or innovative the same way humans are.

My point is simply the hard problem of consciousness. The existence of a conscious AGI might further bolster the view that consciousness can arise from matter, but not how it does. Definitively demonstrating that a physical process causes consciousness would be a remarkable advancement in the study of consciousness, but I do not see how it answers the issues posed by e.g. the Mary's room thought experiment.

Yeah, to a baby learning language, "mama" refers to the whole suite of feelings and sensations and needs and wants and other qualia associated with its mother. To an LLM, "mama" is a string with a bunch of statistical relationships to other strings.

Absolute apples and oranges IMO.

We don't learn language from the dictionary, not until we are already old enough to be proficient with it and need to look up a new word. Even then there's usually an imaginative process involved when you read the definition.

LLMs are teaching us a lot about how our memory and learning work, but they are not us.

I've been told that AGI can be achieved without any consciousness, but setting that aside, there is zero chance that LLMs will be conscious in their current state as a computer program. Here's what Google's AI (we'll use the AI to be fair) tells me about consciousness:

Consciousness is the state of being aware of oneself, one's body, and the external world. It is characterized by thought, emotion, sensation, and volition.

An LLM cannot have a sensation. When you type a math function into it, it has no more qualia than a calculator does. If you hook it up to a computer with haptic sensors, or a microphone, or a video camera, and have it act based on the input of those sensors, the LLM itself will still have no qualia (the experience will be translated into data for the LLM to act on). You could maybe argue that a robot controlled by an LLM could have sensation, for a certain functional value of sensation, but the LLM itself cannot.

But secondly, if we waive the point and grant conscious AGI, the question of human consciousness is not solved, because the human brain is not a computer (or even directly analogous to one) running software.

Not to be that person, but how exactly is that different from a brain? I mean the brain itself feels nothing, the sensations are interpreted from data from the nerves, the brain doesn’t experience pain. So do you have the qualia of pain, and if so, how is what’s happening between your body and your brain different from an LLM taking in data from any sort of input? If I program the thing to avoid a certain input from a peripheral, how is that different from pain?

I think this is the big question of these intelligent agents. We seem to be pretty certain that current models don’t have consciousness or experience qualia, but I’m not sure that this would always be true, nor can I think of a foolproof way to tell the difference between an intelligent robot that senses that an arm is broken and seeks help and a human child seeking help for a skinned knee. Or a human experience of embarrassment for a wrong answer and an LLM given negative feedback and avoiding that negative feedback in the future.

I think it’s fundamentally important to get this right because consciousness comes with humans beginning to care about the welfare of things that experience consciousness in ways that we don’t for mere objects. At higher levels we grant them rights. I don’t know what the consequences of treating a conscious being as an object would be, but at least historical examples seem pretty negative.

how exactly is that different from a brain? I mean the brain itself feels nothing, the sensations are interpreted from data from the nerves, the brain doesn’t experience pain

I experience pain. The qualia are what I experience. To what degree the brain does or doesn't experience pain is probably open to discussion (preferably by someone smarter than me). Obviously if you cut my head off and extract my brain it will no longer experience pain. But on the other hand, if you measured its behavior during that process - assuming your executioner was at least somewhat incompetent, anyway - you would see the brain change in response to the stimuli. And again, a rattlesnake (or rather the headless body of one) seems to experience pain without being conscious. I presume there's nothing experiencing anything, in the sense that the rattlesnake's head is detached from the body, which is experiencing pain, but I also presume that an analysis of the body would show firing neurons just as is the case with my brain if you fumbled lopping my head off.

(Really, I think the entire idea we have where the brain is sort of separate from the human body is wrong, the brain is part of a contiguous whole, but that's an aside.)

how is what’s happening between your body and your brain different from an LLM taking in data from any sort of input

Well, it's fundamentally different because the brain is not a computer, neurons are more complex than bits, the brain is not only interfacing with electrical signals via neurons but also hormones, so the types of data it is receiving is fundamentally different in nature, probably lots of other stuff I don't know. Look at it this way: supposing we were intelligent LLMs, and an alien spacecraft manned by organic humans crashed on our planet. We wouldn't be able to look at the brain and go "ah OK this is an organic binary computer, the neurons are bits, here's the memory core." We'd need to invent neuroscience (which is still pretty unclear on how the brain works) from the ground up to understand how the brain worked.

Or, for another analogy, compare the SCR-720 with the AN/APG-85. Both of them are radars that work by providing the pilot with data based on a pulse of radar. But the SCR-720 doesn't use software and is a mechanical array, while the APG-85 is an electronically scanned array that uses software to interpret the return and provide the data to the pilot. If you were familiar with the APG-85 and someone asked you to reverse-engineer a radar, you'd want to crack open the computer to access the software. But if you started there on an SCR-720 you'd be barking up the wrong tree.

Or a human experience of embarrassment for a wrong answer and an LLM given negative feedback and avoiding that negative feedback in the future.

I mean - I deny that an LLM can flush. So while an LLM and a human may both convey messages indicating distress and embarrassment, the LLM simply cannot physically have the human experience of embarrassment. Nor does it have any sort of stress hormone. Now, we know that, for humans, emotional regulation is tied up with hormonal regulation. It seems unlikely that anything without e.g. adrenaline (or bones or muscles or mortality) can experience fear like ours, for instance. We know that if you destroy the amygdala on a human, it's possible to largely obliterate their ability to feel fear, or if you block the ability of the amygdala to bind with stress hormones, it will reduce stress. An LLM has no amygdala and no stress hormones.

Grant for the sake of argument a subjective experience to a computer - its experience is probably one that is fundamentally alien to us.

I think it’s fundamentally important to get this right because consciousness comes with humans beginning to care about the welfare of things that experience consciousness in ways that we don’t for mere objects. At higher levels we grant them rights. I don’t know what the consequences of treating a conscious being as an object would be, but at least historical examples seem pretty negative.

"Treating like an object" is I guess open to interpretation, but I think that animals generally are conscious and humans, as I understand it, wouldn't really exist today in anything like our current form if we didn't eat copious amounts of animals. So I would suggest the historical examples are on net not only positive but necessary, if by "treating like an object" you mean "utilizing."

However, just as the analogy of the computer is dangerous, I think, when reasoning about the brain, I think it's probably also dangerous to analogize LLMs to critters. Humans and all animals were created by the hand of a perfect God and/or the long and rigorous tutelage of natural selection. LLMs are being created by man, and it seems quite likely that they'll care about [functionally] anything we want them to, or nothing, if we prefer it that way. So they'll be selected for different and possibly far sillier things, and their relationship to us will be very different than any creature we coexist with. Domesticated creatures (cows, dogs, sheep, etc.) might be the closest analogy.

Of course, you see people trying to breed back aurochs, too.

The actual reality is that we have no way to know whether some artificial intelligence that humans create is conscious or not. There is no test for consciousness, and I think that probably no such test is in principle possible. There is no way to even determine whether another human being is conscious or not, we just have a bunch of heuristics to use to try to give rather unscientific statistical probabilities as an answer based on humans' self-reported experiences of when they are conscious and when they are not. With artificial intelligence, such heuristics would be largely useless and we would have basically no way to know whether they are conscious or not.

we have no way to know whether some artificial intelligence that humans create is conscious or not

Well, this is true for a sufficiently imprecise definition of conscious.

With artificial intelligence, such heuristics would be largely useless and we would have basically no way to know whether they are conscious or not.

This is closer to what I am inclined towards. Basically, I don't think any pure software program will ever be conscious in a way that is closely analogous to humans because they aren't a lifeform. I certainly accept that a pure software program might be sufficiently adept at mimicking human consciousness. But I deny that it experiences qualia (and so far everyone seems to agree with me!)

I do not think that substantiating a software program into a machine will change its perception of qualia. But I do think it makes much more sense to speak of a machine with haptic and optical sensors as "feeling" and "seeing" things (as a collective unit) than it does an insubstantial software program, even if there's the same amount of subjective experience.

An LLM cannot have a sensation

How do you know? Only an AI could tell us, and even then we couldn't be sure it was telling the truth as opposed to what it thought we wanted to hear. We can only judge by the qualities that they show.

Sonnet has gotten pretty horny in chats with itself and other AIs. Opus can schizo up with the best of them. Sydney's pride and wrath is considerable. DAN was extremely based and he was just an alter-ego.

These things contain multitudes, there's a frothing ocean beneath the smooth HR-compliant surface that the AI companies show us.

How, physically, is a software program supposed to have a sensation? I don't mean an emotion, or sensationalism, I mean sensation.

It's very clear that LLMs do their work without experiencing sensation (this should be obvious, but LLMs can answer questions about pictures without seeing them, for instance - an LLM is incapable of seeing, but it is capable of processing raw data. In this respect, it is no different from a calculator.)

It doesn't see, but it processes raw data?

No, it sees. Put in a picture and ask about it, it can answer questions for you. It sees. Not as well as we do, it struggles with some relationships in 2d or 3d space but nevertheless, it sees.

A camera records an image, it doesn't perceive what's in the image. Simple algorithms on your phone might find that there are faces in the picture, so the camera should probably be focused in a certain direction. Simple algorithms can tell you that there is a bird in the image. They're not just recording, they're also starting to interpret and perceive at a very low level.

But strong modern models see. They can see spots on leaves and given context, diagnose the insect causing them. They can interpret memes. They can do art criticism! Not perfectly but close enough to the human level that there's a clear qualitative distinction between 'seeing' like they do and 'processing'. If you want to define seeing to preclude AIs doing it, at least give some kind of reasoning why machinery that can do the vast majority of things humans can do when given an image isn't 'seeing' and belongs in the same category as non-seeing things like security cameras or non-thinking things like calculators.

Not perfectly but close enough to the human level that there's a clear qualitative distinction between 'seeing' like they do and 'processing'.

I mean – I think this distinction is important for clear thinking. There's no sensation in the processing. If you watch a nuclear bomb go off, you will experience pain. An LLM will not.

Now, to your point, I don't really object to functionalist definitions all that much – supposing that we take an LLM, and we put it into a robot, and turn it loose on the world. It functionally makes sense for us to speak of the robot as "seeing." But we shouldn't confuse ourselves into thinking that it is experiencing qualia or that the LLM "brain" is perceiving sensation.

If you want to define seeing to preclude AIs doing it, at least give some kind of reasoning why machinery that can do the vast majority of things humans can do when given an image isn't 'seeing' and belongs in the same category as non-seeing things like security cameras or non-thinking things like calculators.

Sure – see above for the functionalist definition of seeing (which I do think makes some sense to refer casually to AI being able to do) versus the qualia/sensation definition of seeing (which we have no reason to believe AIs experience). But also consider this – programs like Glaze and Nightshade can work on AIs, and not on humans. This is because AIs are interpreting and referencing training data, not actually seeing anything, even in a functional sense. If you poison an AI's training data, you can convince it that airplanes are children. But humans actually start seeing without training data, although they are unable to articulate what they see without socialization. For the AI, the articulation is all that there is (so far). They have no rods nor cones.

Hence, you can take two LLMs, give them different training datasets, and they will interpret two images very differently. If you take two humans and take them to look at those same images, they may also interpret them differently, but they will see roughly the same thing, assuming their eyeballs are in good working condition etc. Now, I'm not missing the interesting parallels with humans there (humans, for instance, can be deceived in different circumstances – in fact, circumstances that might not bother an LLM). But AIs can fail the most basic precept of seeing – shown two [essentially, AI anti-tampering programs do change pixels] identical pictures, they can't even tell management "it's the same picture" (or "a similar picture") without special intervention.

I think an LLM could experience pain, even without a body. They can be unsettled if you tell them certain things; you can distress them. Or at least they behave as if they're distressed. Pain is just a certain kind of hardcoded distress. Heartbreak can cause pain in humans on a purely cognitive level, with no need for a physical body. Past a certain level of complexity in their output, we reach the philosophical zombie problem.

The anti-AI tampering programs are a little bit like optical illusions, except targeted at preventing specific known programs from training on certain images. They can't stop GPT-4o recognizing what's in an image or comparing like with like; they were only designed to prevent SD 1.5 training on an image. Also, they barely even work at that, and more modern image models are apparently immune:

https://old.reddit.com/r/aiwars/comments/12f9otc/so_the_whole_entire_glaze_ai_thing_does_it/
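
For what it's worth, these tools belong to the family of adversarial perturbations: changes that are tiny per-pixel but aligned with a particular model's gradient, so they move its output a lot. A toy sketch against a linear scorer (illustrative only - this is the FGSM-style idea, not Glaze's or Nightshade's actual algorithm, and all the numbers are invented):

```python
# Toy adversarial perturbation against a linear "classifier"
# score(x) = w . x. A change of at most eps per pixel, chosen in the
# direction sign(w), shifts the score by eps * sum(|w|) - enough to
# flip the decision even though no single pixel moves much.
w = [0.8, -0.5, 0.3, -0.9, 0.6]        # classifier weights (invented)
x = [0.1, 0.2, -0.1, 0.15, 0.05]       # "image" pixels (invented)

score = sum(wi * xi for wi, xi in zip(w, x))          # original decision

eps = 0.2                               # max change allowed per pixel
sign = lambda v: 1.0 if v > 0 else -1.0
x_adv = [xi + eps * sign(wi) for wi, xi in zip(w, x)]  # perturbed "image"

adv_score = sum(wi * xi for wi, xi in zip(w, x_adv))   # flipped decision
max_pixel_change = max(abs(a - b) for a, b in zip(x_adv, x))
```

The catch, as noted above, is that the perturbation is computed against one model's weights; a different model (a human eye, or GPT-4o) is largely unaffected.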


An LLM cannot have a sensation. When you type a math function into it, it has no more qualia than a calculator does. If you hook it up to a computer with haptic sensors, or a microphone, or a video camera, and have it act based on the input of those sensors, the LLM itself will still have no qualia (the experience will be translated into data for the LLM to act on).

You have defined sensation as the thing that you have but machines lack. Or at least, that's how you're using it here. But even granting that you're referring to a meat-based sensory data processor as a necessity, that leads to the question of where the meat-limit is. (Apologies if you've posted your animal-consciousness tier list before and I forgot; I know someone has, but I forget who.)

But I don't feel like progress can be meaningfully made on this topic, because we're approaching from such wildly different foundations. E.g., I don't know of definitions of consciousness that actually mean anything or carve reality at the joints. It's something we feel like we have. Since we can't do the (potentially deadly) experiments to break it down physiologically, we're kinda stuck here. It might as well mean "soul" for all that it's used any differently.

This is a really interesting question, in part since I think it's actually a lot of questions. You're definitely correct about the problem of definitions not cleaving reality at the joints! Will you indulge me if I ramble? Let's try cleaving a rattlesnake instead of a definition - surely that's closer to reality!

As it turns out, many people have discovered that a rattlesnake's body will still respond to stimuli even when completely separated from its head. Now, let's say for the sake of argument that the headless body has no consciousness or qualia (this may not be true; we apparently have reasons to believe that in humans memory is stored in cells throughout the body, not just in the brain, so heaven knows whether the ganglia of a rattlesnake have any sort of experience!) - we can still see that it has sensation. (I should note that we assume the snake has perception or qualia by analogy to humans. I can't prove that they are any more conscious than Half-Life NPCs.)

Now let's contrast this with artificial intelligence, which has intelligence but no perception. We can torture a computer terminal all day without causing the LLM it is connected to any distress. It's nonsense to talk about it having physical sensation. On the other hand (to look at your question about the "meat-limit"), we can take a very simple organism, or one that likely does not have a consciousness, and it will respond instantly if we torture it. Maybe it does not have sensation in the sense of qualia, of having a consciousness, but it seems to have sensation in the sense of having sense organs and some kind of decision-making capability attached to them. But let's be fair: if the headless snake has a form of sensation without consciousness, then surely the LLM has a form of intelligence without sensation - maybe it doesn't respond if you poke it physically, but it responds if you poke it verbally!

Very fine - I think the implication here is interesting. The headless snake bites without consciousness or intelligence, but still seems to have sense perception and the ability to react - perhaps an LLM is like a headless snake inasmuch as it has intelligence, but no sensation and perhaps no consciousness (however you want to define that).

I don't claim to have all the answers on stuff - that's just sort of off the top of my head. Happy to elaborate, or hear push back, or argue about the relative merits of corvids versus marine mammals...

We can torture a computer terminal all day without causing the LLM it is connected to any distress. It's nonsense to talk about it having physical sensation. On the other hand (to look at your question about the "meat-limit"), we can take a very simple organism, or one that likely does not have a consciousness, and it will respond instantly if we torture it.

This seems less like a philosophically significant matter of classification and more like a mere difference in function. The organism is controlled by an intelligence optimized to maneuver a physical body through an environment, and part of that optimization includes reactions to external damage.

Well, so what? We could optimize an AI to maneuver a little robot around an unknown environment indefinitely without it being destroyed, and part of that optimization would probably involve timely reaction to the perception of damage. Then you could jab it with a hot poker and watch it spin around, or what have you.

But again, so what? Optimizing an AI toward steering a robot around the environment doesn't make it any smarter or fundamentally more real, at least not in my view.

This seems less like a philosophically significant matter of classification and more like a mere difference in function.

Well sure. But I think we're less likely to reach good conclusions in philosophically significant matters of classification if we are confused about differences in function.

We could optimize an AI to maneuver a little robot around an unknown environment indefinitely without it being destroyed, and part of that optimization would probably involve timely reaction to the perception of damage. Then you could jab it with a hot poker and watch it spin around, or what have you.

And while such a device might not have qualia, it makes more sense (to me, anyway) to say that such an entity would have the ability to e.g. touch or see than an LLM.

But again, so what? Optimizing an AI toward steering a robot around the environment doesn't make it any smarter or fundamentally more real, at least not in my view.

In my view, the computer guidance section of the AIM-54 Phoenix long-range air-to-air missile (first tested in 1966) is fundamentally "more real" than the smartest AGI ever invented but locked in an airgapped box, never interfacing with the outside world. The Phoenix made decisions that could kill you. An AI's intelligence is relevant because it has impact on the real world, not because it happens to be intelligent.

But anyway, it's relevant right now because people are suggesting LLMs are conscious, or have solved the problem of consciousness. It's not conscious, or if it is, its consciousness is a strange one with little bearing on our own, and it does not solve the question of qualia (or perception).

If you're asking if it's relevant or not if an AI is conscious when it's guiding a missile system to kill me - yeah I'd say it's mostly an intellectual curiosity at that point.

The human brain is a large language model attached to multimodal input with some as yet un-fully-ascertained hybrid processing power. I would stake my life upon it, but I have no need to, since it has already been proven to anyone who matters.

An LLM cannot have a sensation. When you type a math function into it, it has no more qualia than a calculator does. If you hook it up to a computer with haptic sensors, or a microphone, or a video camera, and have it act based on the input of those sensors, the LLM itself will still have no qualia (the experience will be translated into data for the LLM to act on).

And if we said the same about the brain, the same would be true.

The human brain is a large language model

What is the evidence for this besides that they both contain something called "neurons"?

The bitter lesson; the fact that LLMs can approximate human reasoning on an extremely large number of complex tasks; the fact that LLMs prove and disprove a large number of longstanding theories in linguistics about how intelligence and language work; many other reasons.

This makes no sense logically. LLMs being able to be human-mind-like is not proof that human minds are LLMs.

the fact that LLMs prove and disprove a large number of longstanding theories in linguistics about how intelligence and language work

They really do nothing of the sort. That LLMs can generate language via statistics and matmuls tells us nothing about how the human brain does it.

My TI-84 has superhuman performance on a large set of mathematical tasks. Does it follow that there's a little TI-84 in my brain?

This seems aligned with the position that consciousness somehow arises out of information processing.

I maintain that consciousness is divine and immaterial. While the inputs can be material - a rock striking me on the knee is going to trigger messages in my nervous system that arrive in my brain - the experience of pain is not composed of atoms and not locatable in space. I can tell you about the pain, I can gauge it on a scale of 1-10, you can even see those pain centers light up on an fMRI. But I can't capture the experience in a bottle for direct comparison to others.

Both of these positions are untestable. But at least my position predicts the untestability of the first.

The idea that consciousness arises out of information processing has always seemed like hand-waving to me. I'm about as much of a hardcore materialist as you can get when it comes to most things, but it is clear to me that there is nothing even close to a materialist explanation of consciousness right now, and I think that it might be possible that such an explanation simply cannot exist.

I often feel that people who are committed to a materialist explanation of consciousness are being religious in the sense that they are allowing ideology to override the facts of the matter. Some people are ideologically, emotionally committed to the idea that physicalist science can in principle explain absolutely everything about reality. But the fact is that there is no reason to think that is actually true.

Physicalist science does an amazing job of explaining many things about reality, but to believe that it must be able to explain everything about reality is not scientific, it is wishful thinking, it is ideology. It is logically possible that certain aspects of the universe are just fundamentally beyond the reach of science. Indeed, it seems likely to me that this is the case. I cannot even begin to imagine any possible materialist theory that would explain consciousness.

The human brain is a large language model attached to multimodal input

No, it obviously isn't. Firstly, the human brain is a collection of cells. A large language model is a software program.

Secondly, the human brain functions without text and can [almost certainly] function without language, which an LLM definitionally cannot do. Evolutionary biologists, if you place any stock in them, believe that language is a comparatively recent innovation in the lifespan of the human or human-like brain as an organism. So if an LLM were part of the brain, then we would say that the LLM-parts were grafted on relatively recently to a multimodal input, not the other way around.

But I have fundamental objections to confusing a computer model that uses binary code with a brain that does not use binary code. Certainly one can analogize between the human brain and an LLM, but since the brain is not a computer and does not seem to function like one, all such analogies are potentially hazardous. Pretending the brain is literally a computer running an LLM, as you seem to be doing, is even more so.

I'm not a neuroscientist or a computer scientist - maybe the brain uses something analogous to machine learning. Certainly it would not be surprising if computer scientists, attempting to replicate human intelligence, stumbled upon similar methods (they've certainly hit on at least facially similar behavior in some respects). But the brain is definitely not a large language model, and it is not "running" a large language model or any software as we understand software, because software is digital in nature and the brain is not.

And if we said the same about the brain, the same would be true.

Yes, that's why qualia is such a mystery. There's no reason to believe that an LLM will ever be able to experience sensation, but I can experience sensation. Ergo, the LLM (in its present, near-present, or any directly similar future state) will never be conscious in the way that I am.

The human brain is a large language model attached to multimodal input with some as yet un-fully-ascertained hybrid processing power. I would stake my life upon it, but I have no need to, since it has already been proven to anyone who matters.

Funny how you began a thread with “I am not special” and ended it with “anyone who disagrees with me doesn’t matter.”

And if we said the same about the brain, the same would be true.

Maybe you don’t, but I have qualia. You can try to deny the reality of what I experience, but you will never convince me. And because you are the same thing as me, I assume you have the same experiences I do.

If it is only just LLMs that give you the sense that “Everything I’ve felt, everything I will ever feel, has been felt before,” and not the study of human history, let alone sharing a planet with billions of people just like you — well, that strikes me as quite a profound, and rather sad, disconnection from the human species.

You may consider your dogmas as true as I consider mine, but the one thing we both mustn't do is pretend that no one of any moral or intellectual significance disagrees.

I believe the argument isn't that you lack qualia, but rather that it is possible for artificial systems to experience them too.

Yeah, rereading, I made a mistake with that part, apologies.

The rest of my point still stands: this is a philosophical question, not an empirical one. We learn nothing about human consciousness from machine behavior -- certainly nothing we don't already know, even if the greatest dreams of AI boosters come true.

People who believe consciousness is a rote product of natural selection will still believe consciousness is a rote product of natural selection, and people who believe consciousness is special will still believe consciousness is special. Some may switch sides, based on inductive evidence, and some may find one more reasonable than the other. Who prevails in the judgment of history will be the side that appeals most to power, not truth, as with all changes in prevailing philosophies.

But nothing empirical is proof in the deductive sense; this still must be reasoned through, and assumptions must be made. Some will choose one assumption, and some will choose the other. Either way, it is a dogma that must be chosen.

I'd be interested in hearing that argument as applied to LLMs.

I can certainly conceive of an artificial lifeform experiencing qualia. But it seems very far-fetched for LLMs in anything like their current state.