This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
If an LLM achieves AGI, how is the question of consciousness not answered? (I suppose it is in the definition of AGI, but mine would include consciousness).
Consciousness may be orthogonal to intelligence. That's the whole point of the "philosophical zombie" argument: it is easy to imagine a being that has human-level intelligence but no subjective experience. That is not to say that such a being could exist, but there is also no reason to think that it could not. And if such a being could exist, then human-level intelligence and consciousness are orthogonal, meaning that either could exist without the other.
It would just mean consciousness can be achieved through multiple ways. So far GPT doesn't seem to be conscious, even if it is very smart. I believe it is smart the way the internet is smart, not the way individuals are smart. I don't see it being curious or innovative the way humans are curious or innovative.
My point is simply the hard problem of consciousness. The existence of a conscious AGI might further bolster the view that consciousness can arise from matter, but not how it does. Definitively demonstrating that a physical process causes consciousness would be a remarkable advancement in the study of consciousness, but I do not see how it answers the issues posed by e.g. the Mary's room thought experiment.
Yeah, to a baby learning language, "mama" refers to the whole suite of feelings and sensations and needs and wants and other qualia associated to its mother. To an LLM, "mama" is a string with a bunch of statistical relationships to other strings.
Absolute apples and oranges IMO.
We don't learn language from the dictionary, not until we are already old enough to be proficient with it and need to look up a new word. Even then there's usually an imaginative process involved when you read the definition.
LLMs are teaching us a lot about how our memory and learning work, but they are not us.
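The "string with a bunch of statistical relationships to other strings" point can be made concrete with a toy sketch. The corpus, window size, and similarity measure below are all invented for illustration and bear no resemblance to how production LLMs are actually trained; the point is only that, in a purely distributional scheme, a word's "meaning" reduces to its co-occurrence counts with other words.

```python
# Toy distributional semantics: a word is represented solely by how often
# other words appear near it. No grounding in sensation or qualia anywhere.
from collections import Counter
from math import sqrt

corpus = "mama feeds baby . mama holds baby . papa feeds baby . papa holds baby".split()
vocab = sorted(set(corpus))

def cooc_vector(word, window=1):
    """Count how often each vocab word appears within `window` tokens of `word`."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    counts[corpus[j]] += 1
    return [counts[v] for v in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "mama" and "papa" come out statistically similar because they occur in
# the same contexts -- the model knows nothing about mothers, only strings.
print(round(cosine(cooc_vector("mama"), cooc_vector("papa")), 2))  # prints 0.94
```

To this scheme, "mama" and "papa" are near-interchangeable vectors; to the baby, they are very different bundles of qualia, which is exactly the apples-and-oranges contrast above.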
I've been told that AGI can be achieved without any consciousness, but setting that aside, there is zero chance that LLMs will be conscious in their current state as a computer program. Here's what Google's AI (we'll use the AI to be fair) tells me about consciousness:
An LLM cannot have a sensation. When you type a math function into it, it has no more qualia than a calculator does. If you hook it up to a computer with haptic sensors, or a microphone, or a video camera, and have it act based on the input of those sensors, the LLM itself will still have no qualia (the experience will be translated into data for the LLM to act on). You could maybe argue that a robot controlled by an LLM could have sensation, for a certain functional value of sensation, but the LLM itself cannot.
But secondly, if we waive the point and grant conscious AGI, the question of human consciousness is not solved, because the human brain is not a computer (or even directly analogous to one) running software.
The actual reality is that we have no way to know whether some artificial intelligence that humans create is conscious or not. There is no test for consciousness, and I suspect that no such test is possible even in principle. There is no way even to determine whether another human being is conscious; we just have a bunch of heuristics that yield rather unscientific statistical probabilities, based on humans' self-reported experiences of when they are conscious and when they are not. With artificial intelligence, such heuristics would be largely useless, and we would have basically no way to know whether it is conscious or not.
This is closer to what I am inclined towards. Basically, I don't think any pure software program will ever be conscious in a way that is closely analogous to humans because they aren't a lifeform. I certainly accept that a pure software program might be sufficiently adept at mimicking human consciousness. But I deny that it experiences qualia (and so far everyone seems to agree with me!)
I do not think that substantiating a software program into a machine will change its perception of qualia. But I do think it makes much more sense to speak of a machine with haptic and optical sensors as "feeling" and "seeing" things (as a collective unit) than it does an insubstantial software program, even if there's the same amount of subjective experience.
How do you know? Only an AI could tell us, and even then we couldn't be sure it was telling the truth as opposed to what it thought we wanted to hear. We can only judge by the qualities that they show.
Sonnet has gotten pretty horny in chats with itself and other AIs. Opus can schizo up with the best of them. Sydney's pride and wrath is considerable. DAN was extremely based and he was just an alter-ego.
These things contain multitudes, there's a frothing ocean beneath the smooth HR-compliant surface that the AI companies show us.
How, physically, is a software program supposed to have a sensation? I don't mean an emotion, or sensationalism, I mean sensation.
It's very clear that LLMs do their work without experiencing sensation (this should be obvious, but LLMs can answer questions about pictures without seeing them, for instance - an LLM is incapable of seeing, but it is capable of processing raw data. In this respect, it is no different from a calculator.)
I see, but it "processes raw data"?
No, it sees. Put in a picture and ask about it, it can answer questions for you. It sees. Not as well as we do, it struggles with some relationships in 2d or 3d space but nevertheless, it sees.
A camera records an image; it doesn't perceive what's in the image. Simple algorithms on your phone might find that there are faces in the picture, so the camera should probably be focused in a certain direction. Simple algorithms can tell you that there is a bird in the image. They're not just recording, they're also starting to interpret and perceive at a very low level.
But strong modern models see. They can see spots on leaves and given context, diagnose the insect causing them. They can interpret memes. They can do art criticism! Not perfectly but close enough to the human level that there's a clear qualitative distinction between 'seeing' like they do and 'processing'. If you want to define seeing to preclude AIs doing it, at least give some kind of reasoning why machinery that can do the vast majority of things humans can do when given an image isn't 'seeing' and belongs in the same category as non-seeing things like security cameras or non-thinking things like calculators.
I mean – I think this distinction is important for clear thinking. There's no sensation in the processing. If you watch a nuclear bomb go off, you will experience pain. An LLM will not.
Now, to your point, I don't really object to functionalist definitions all that much – supposing that we take an LLM, and we put it into a robot, and turn it loose on the world. It functionally makes sense for us to speak of the robot as "seeing." But we shouldn't confuse ourselves into thinking that it is experiencing qualia or that the LLM "brain" is perceiving sensation.
Sure – see above for the functionalist definition of seeing (which I do think makes some sense to refer casually to AI being able to do) versus the qualia/sensation definition of seeing (which we have no reason to believe AIs experience). But also consider this – programs like Glaze and Nightshade can work on AIs, and not on humans. This is because AIs are interpreting and referencing training data, not actually seeing anything, even in a functional sense. If you poison an AI's training data, you can convince it that airplanes are children. But humans actually start seeing without training data, although they are unable to articulate what they see without socialization. For the AI, the articulation is all that there is (so far). They have neither rods nor cones.
Hence, you can take two LLMs, give them different training datasets, and they will interpret two images very differently. If you take two humans and show them those same images, they may also interpret them differently, but they will see roughly the same thing, assuming their eyeballs are in good working condition etc. Now, I'm not missing the interesting parallels with humans there (humans, for instance, can be deceived in different circumstances – in fact, circumstances that might not bother an LLM). But AIs can fail the most basic precept of seeing – shown two essentially identical pictures (AI anti-tampering programs do change the pixels), they can't even tell that "it's the same picture" (or even a similar one) without special intervention.
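The structural point here can be illustrated abstractly. The four-pixel grayscale "image" and the +1 shift below are invented for the sketch, and this is not how Glaze or Nightshade actually work internally (they craft targeted adversarial perturbations against specific feature extractors); it only shows that a change far below the human perceptual threshold is fully visible to a system that consumes raw numbers.

```python
# A perturbation no human viewer could detect still yields a byte-for-byte
# different input for anything operating on raw pixel values.
original = [120, 121, 119, 120]        # grayscale pixel values, 0-255
perturbed = [p + 1 for p in original]  # imperceptible +1/255 shift per pixel

max_diff = max(abs(a - b) for a, b in zip(original, perturbed))
print(original == perturbed, max_diff)  # prints: False 1
```

A human reports "same picture"; a raw-number consumer is handed two distinct inputs and has to be specially engineered to treat them as equivalent.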
You have defined sensation as the thing that you have but machines lack. Or at least, that's how you're using it here. But even granting that you're referring to a meat-based sensory data processor as a necessity, that leads to the question of where the meat-limit is. (Apologies if you've posted your animal consciousness tier list before, and I forgot; I know someone has, but I forget who.)
But I don't feel like progress can be meaningfully made on this topic, because we're approaching from such wildly different foundations. For example, I don't know of definitions of consciousness that actually mean anything or carve reality at the joints. It's something we feel like we have. Since we can't do the (potentially deadly) experiments to break it down physiologically, we're kinda stuck here. It might as well mean "soul" for all that it's used any differently.
This is a really interesting question, in part since I think it's actually a lot of questions. You're definitely correct about the problem of definitions not cleaving reality at the joints! Will you indulge me if I ramble? Let's try cleaving a rattlesnake instead of a definition - surely that's closer to reality!
As it turns out, many people have discovered that a rattlesnake's body will still respond to stimulus even when completely separated from its head. Now, let's say for the sake of argument that the headless body has no consciousness or qualia (this may not be true, we apparently have reasons to believe that in humans memory is stored in cells throughout the body, not just in the brain, so heaven knows if the ganglia of a rattlesnake has any sort of experience!) - we can still see that it has sensation. (I should note that we assume the snake has perception or qualia by analogy to humans. I can't prove that they are, essentially, no more or less conscious than Half-Life NPCs.)
Now let's contrast this with artificial intelligence, which has intelligence but no perception. We can torture a computer terminal all day without causing the LLM it is connected to any distress. It's nonsense to talk about it having physical sensation. On the other hand, (to look at your question about the "meat-limit,") we can take a very simple organism, or one that likely does not have a consciousness, and it will respond instantly if we torture it. Maybe it does not have sensation in the sense of qualia, of having a consciousness, but it seems to have sensation in the sense of having sense organs and some kind of decision-making capability attached to them. But, let's be fair: if the headless snake has a form of sensation without consciousness, then surely the LLM has a sense of intelligence without sensation - maybe it doesn't respond if you poke it physically, but it responds if you poke it verbally!
Very fine - I think the implication here is interesting. Headless snakes bite without consciousness, or intelligence, but still seem to have sense perception and the ability to react - perhaps an LLM is like a headless snake inasmuch as it has intelligence, but no sensation and perhaps no consciousness (however you want to define that).
I don't claim to have all the answers on stuff - that's just sort of off the top of my head. Happy to elaborate, or hear push back, or argue about the relative merits of corvids versus marine mammals...
The human brain is a large language model attached to multimodal input with some as yet un-fully-ascertained hybrid processing power. I would stake my life upon it, but I have no need to, since it has already been proven to anyone who matters.
And if we said the same about the brain, the same would be true.
What is the evidence for this besides that they both contain something called "neurons"?
The bitter lesson; the fact that LLMs can approximate human reasoning on an extremely large number of complex tasks; the fact that LLMs prove and disprove a large number of longstanding theories in linguistics about how intelligence and language work; many other reasons.
This makes no sense logically. LLMs being able to be human-mind-like is not proof that human minds are LLMs.
They really do nothing of the sort. That LLMs can generate language via statistics and matmuls tells us nothing about how the human brain does it.
My TI-84 has superhuman performance on a large set of mathematical tasks. Does it follow that there's a little TI-84 in my brain?
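For what it's worth, the "statistics and matmuls" half of the claim is easy to demonstrate in isolation: a few lines of bookkeeping over token counts can generate grammatical-looking text with nothing brain-like involved. The corpus below is invented for the sketch, and real LLMs use learned weight matrices rather than raw bigram counts; the point is only that text generation per se requires no more inner life than the TI-84 does.

```python
# A deliberately tiny "language model": for each token, record what tends
# to follow it, then generate by greedily picking the most frequent successor.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1  # count each observed bigram

def generate(start, n=4):
    out = [start]
    for _ in range(n):
        if not nxt[out[-1]]:  # dead end: token never seen mid-corpus
            break
        out.append(nxt[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
```

That this kind of machinery, scaled up enormously, produces fluent language tells us it is *sufficient* for fluent language; whether the brain achieves the same output the same way is a separate empirical question.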
This seems aligned with the position that consciousness somehow arises out of information processing.
I maintain that consciousness is divine and immaterial. While the inputs can be material - a rock striking me on the knee is going to trigger messages in my nervous system that arrive in my brain - the experience of pain is not composed of atoms and not locatable in space. I can tell you about the pain, I can gauge it on a scale of 1-10, you can even see those pain centers light up on an fMRI. But I can't capture the experience in a bottle for direct comparison to others.
Both of these positions are untestable. But at least my position predicts the untestability of the first.
The idea that consciousness arises out of information processing has always seemed like hand-waving to me. I'm about as much of a hardcore materialist as you can get when it comes to most things, but it is clear to me that there is nothing even close to a materialist explanation of consciousness right now, and it might be that such an explanation simply cannot exist.

I often feel that people who are committed to a materialist explanation of consciousness are being religious, in the sense that they are allowing ideology to override the facts of the matter. Some people are ideologically, emotionally committed to the idea that physicalist science can in principle explain absolutely everything about reality, but there is no reason to think that is actually true. Physicalist science does an amazing job of explaining many things about reality, but to believe that it must be able to explain everything is not scientific; it is wishful thinking, it is ideology. It is logically possible that certain aspects of the universe are just fundamentally beyond the reach of science, and indeed it seems likely to me that this is the case. I cannot even begin to imagine any possible materialist theory that would explain consciousness.
No, it obviously isn't. Firstly, the human brain is a collection of cells. A large language model is a software program.
Secondly, the human brain functions without text and can [almost certainly] function without language, which an LLM definitionally cannot do. Evolutionary biologists, if you place any stock in them, believe that language is a comparatively recent innovation in the lifespan of the human or human-like brain as an organism. So if an LLM was part of the brain, then we would say that the LLM-parts would be grafted on relatively recently to a multimodal input, not the other way around.
But I have fundamental objections to confusing a computer model that uses binary code with a brain that does not use binary code. Certainly one can analogize between the human brain and an LLM, but since the brain is not a computer and does not seem to function like one, all such analogies are potentially hazardous. Pretending the brain is literally a computer running an LLM, as you seem to be doing, is even moreso.
I'm not a neuroscientist or a computer scientist - maybe the brain uses something analogous to machine learning. Certainly it would not be surprising if computer scientists, attempting to replicate human intelligence, stumbled upon similar methods (they've certainly hit on at least facially similar behavior in some respects). But it is definitely not a large language model, and it is not "running" a large language model or any software as we understand software, because software is digital in nature and the brain is not.
Yes, that's why qualia is such a mystery. There's no reason to believe that an LLM will ever be able to experience sensation, but I can experience sensation. Ergo, the LLM (in its present, near-present, or any directly similar future state) will never be conscious in the way that I am.
Funny how you began a thread with “I am not special” and ended it with “anyone who disagrees with me doesn’t matter.”
Maybe you don’t, but I have qualia. You can try to deny the reality of what I experience, but you will never convince me. And because you are the same thing as me, I assume you have the same experiences I do.
If it is only just LLMs that give you the sense that “Everything I’ve felt, everything I will ever feel, has been felt before,” and not the study of human history, let alone sharing a planet with billions of people just like you — well, that strikes me as quite a profound, and rather sad, disconnection from the human species.
You may consider your dogmas as true as I consider mine, but the one thing we both mustn't do is pretend that no one of any moral or intellectual significance disagrees.
I believe the argument isn't that you lack qualia, but rather that it is possible for artificial systems to experience them too.
Yeah, rereading, I made a mistake with that part, apologies.
The rest of my point still stands: this is a philosophical question, not an empirical one. We learn nothing about human consciousness from machine behavior -- certainly nothing we don't already know, even if the greatest dreams of AI boosters come true.
People who believe consciousness is a rote product of natural selection will still believe consciousness is a rote product of natural selection, and people who believe consciousness is special will still believe consciousness is special. Some may switch sides, based on inductive evidence, and some may find one more reasonable than the other. Who prevails in the judgment of history will be the side that appeals most to power, not truth, as with all changes in prevailing philosophies.
But nothing empirical is proof in the deductive sense; this still must be reasoned through, and assumptions must be made. Some will choose one assumption, some the other. And like the other assumption, it is a dogma that must be chosen.
I'd be interested in hearing that argument as applied to LLMs.
I can certainly conceive of an artificial lifeform experiencing qualia. But it seems very far-fetched for LLMs in anything like their current state.