This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I've never understood how the Turing test measured anything useful. The test doesn't even require that the AI agent understand anything about its world, or even the questions being asked of it. It just has to do well enough to convince a human that it can do so. That's the entire point of the Chinese room rejoinder: an agent might well be clever enough to fool a person into thinking it understands just by giving reasonable-sounding answers to the questions posed.
The real test, to me, is more of a practical thing: can I drop the AI into a novel situation and expect it to figure out how to solve the problems? Can I take a bot trained entirely as an English chatbot and expect it to learn Japanese just by interacting with Japanese users? Can I take a chatbot like that and expect it to learn to solve physics equations? That seems a much better test, because intelligent agents are capable of learning new things.
The Turing test is an insanely strong test, in the sense that an AI that passes it can be seen to have achieved human-level intelligence at the very least. By this I mean the proper, adversarial test with a fully motivated and intelligent tester (and ideally, the same for the human participant).
Certainly no AI today could pass such a thing. The current large SotA models will simply tell you they are an AI model if you ask. They will outright deny being human or having emotions. I don't know how anyone could think these models ever passed a Turing test, unless the tester was a hopeless moron who didn't even try.
One could object that they might pass if people bothered to finetune them to do so. But that is a much weaker claim akin to "I could win that marathon if I ever bothered to get up from this couch." Certainly they haven't passed any such tests today. And I doubt any current AI could, even if they tried.
In fact, I expect we'll see true superhuman AGI long before such a test is consistently passed. We're much smarter than dogs, but that doesn't mean we can fully imitate them. Just like it takes a lot more compute to emulate a console such as the SNES than such devices originally had, I think it will require a lot of surplus intelligence to pretend to be human convincingly. If there is anything wrong with the Turing test, it's that it's way too hard.
The Turing test is like the Bechdel test. It’s not a perfect heuristic, and it’s misused in a lot of ways, but the point is that it’s a fairly low bar that most things at the time still weren’t able to clear.
I am flabbergasted that people, including Searle, the very person who came up with the Chinese Room thought experiment, don't see what seems to me to be the obvious conclusion:
The room speaks Chinese.
(Is that a problem? No, not at all. I just didn't think you'd be Chinese)
No individual component of the room speaks Chinese, including the human, but that is no impediment. No single neuron in your brain speaks English, but we have zero qualms about saying the entire network, i.e. your brain, does.
Searle literally addressed this objection in his very first paper on the Chinese Room.
Seems nonsensical to me. I fail to see how this person could have the entire rule system internalized in their brain and still fail to speak Chinese. How is that even physically possible?
So, take throwing a ball. The brain’s doing a ton of heavy lifting, solving inverse kinematics, adjusting muscle tension, factoring in distance and wind and all in real time, below the level of conscious awareness. You don’t explicitly think, “Okay, flex the biceps at 23.4 degrees, then release at t=0.72 seconds.” You just do it. The calculations happen in the background, and you’d be hard-pressed to explain the exact math or physics step-by-step. Yet, if someone said, “You can’t throw a ball because you don’t consciously understand the equations,” you’d rightly call that nonsense. You can throw the ball - your ability proves it, even if the “how” is opaque to your conscious mind.
If Searle were to attempt to rebut this by saying, nah, you're just doing computations in your head without actually "knowing" how to throw a ball, then I'd call him a dense motherfucker and ask if he knows how the human brain works.
Well, he would be able to converse in Chinese, and to converse in English, but not able to translate between them. That seems very possible; there's probably some brain disorder where exactly that happens.
The answer is that he doesn't understand Chinese; he plus the room understands Chinese.
If someone internalizes the system in his head, ignoring practicality (which makes it hard to properly imagine the situation), then he's acting as a dumb CPU executing a Chinese program. The answer is still "the man doesn't know Chinese, the system does". The answer feels strange because "the man" is in the man's head and "the system" is also in the man's head, but that doesn't make them the same thing or mean that they both have the same knowledge.
Of course, in Searle's time, "come on, he's running a virtual machine" wasn't something you could really say, because people weren't familiar with the concept.
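To make the "virtual machine" framing concrete, here is a minimal sketch of the man-as-dumb-CPU reading; the rule book and its entries are invented placeholders for the example, not anything from Searle's paper:

```python
# A minimal sketch (mine, not Searle's) of the "man as dumb CPU" reading:
# the procedure below mechanically matches symbols against a rule book it
# never interprets. The entries are made-up placeholders.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "图书馆在哪里？": "在街的尽头。",    # "Where is the library?" -> "At the end of the street."
}

def run_room(symbols: str) -> str:
    """Pure symbol lookup: no meaning is represented anywhere in this function."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(run_room("你好吗？"))  # the room 'answers' without any part of it understanding Chinese
```

The procedure only ever does lookups; if anything "knows Chinese" here, it is the table plus the procedure taken together, which is the systems reply in a few lines of code.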
Virtual machines had been a thing since 1965; Searle wrote his nonsense about intentionality in 1983, and the Chinese Room in 1980.
If someone has the gall to claim to disprove the possibility of artificial intelligence, as he set out to do, it would help to have some understanding of computer science. But alas.
I agree with you but Searle and his defenders wouldn't. As far as I'm concerned, it matters not a jot if the system is embedded inside a brain, up an arse, or in the room their arse is resting in.
You're just a Functionalist, exactly the sort of person the argument is supposed to criticize. Or you're missing the point.
Searle is a Biological Realist, which is to say that he believes that processes of mind are real things that emerge from the biochemical processes of human beings, and that language (and symbol manipulation in general) is a reflection of those processes, not the process in itself. He thinks thoughts are real things that exist outside of language.
To wit, he argues that what the room is missing is "intentionality". It does not have the ability to do anything but react to input in ways that are predetermined by the design of the Chinese manual, and insofar as any of its components are concerned (or the totality thereof), they are incapable of reflecting upon the ideas being manipulated.
Your brain does "speak Chinese", properly speaking, because it is able to communicate intentional thoughts using that medium. The mere ability to hold a conversation does not qualify for what Searle is trying to delineate.
Not too familiar with Searle's argument, but isn't this just saying that the lack of ability to generalize out of distribution is the issue? But I don't get how being able to react to novel inputs (in a useful way) would even help things much. Suppose one did come up with a finite set of rules that allowed one to output Chinese to arbitrary inputs in highly intelligent, coherent ways. It's still, AFAICT, just a room with a guy inside to Searle.
Perhaps it's the ability to learn. But even then, you could have the guy follow some RL algorithm to update the symbols in the translation lookup book, and it's still just a guy in the room (to Searle). (A toy sketch of what I mean follows at the end of this comment.)
It's not even clear to me how one could resolve this: at some point, a guy in the room could be manipulating symbols in a way that mirrors Xi Jinping's neural activations arbitrarily closely (with a big enough room and a long enough time), and Searle and I would immediately come to completely confident and opposite conclusions about the nature of the room. It just seems flatly ridiculous to me that the presence of dopamine and glutamate imparts consciousness to a system, but I don't get how to argue against that (or even get how Searle would say that's different from his actual argument).
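A toy version of that rule-update loop (the candidate replies and the reward rule are invented for the example) might look like this:

```python
import random

# A toy sketch of the rule-update variant: the man overwrites entries in the
# rule book whenever a Chinese speaker outside rewards a reply. Nothing in the
# loop ever interprets the symbols being copied.

rule_book = {"你好吗？": "不知道。"}                     # starts with a poor reply
candidates = ["我很好，谢谢。", "下雨了。", "不知道。"]   # invented candidate replies

def respond(prompt: str) -> str:
    # occasionally try a candidate at random ('exploration'), otherwise follow the book
    if prompt not in rule_book or random.random() < 0.3:
        return random.choice(candidates)
    return rule_book[prompt]

def update(prompt: str, reply: str, reward: float) -> None:
    """Mechanically keep whichever reply scored well. No understanding required."""
    if reward > 0:
        rule_book[prompt] = reply

for _ in range(50):                                      # fifty short 'conversations'
    reply = respond("你好吗？")
    reward = 1.0 if reply == "我很好，谢谢。" else -1.0   # the outside speaker's verdict
    update("你好吗？", reply, reward)

print(rule_book["你好吗？"])                             # almost surely the rewarded reply
```

The loop converges on the rewarded reply without anything in it ever interpreting the symbols, which is presumably why Searle would say it changes nothing.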
Insofar as this is possible (I believe Searle disagrees that it is), the room does speak Chinese, because it's just a brain.
But as I explain in the other thread, this means assuming computationalism is true, which renders the whole thought experiment moot, since it's supposed to be a criticism of computationalism.
I'm not sure how one would argue that it's not possible. Is the contention that there's something ineffable happening in neurons that fundamentally can't be copied via a larger model? That seems isomorphic to a "god of the gaps" argument to me.
"God of the gaps" cuts both ways. The cached Materialist narrative has some very large holes in it that are bridged through unexamined axioms and predictions that never update when falsified.
It's quite simple, actually, in a world where the experience of consciousness is unexplained and machines don't offer the same sort of spontaneous behavior as humans.
It's not compatible with strict materialism, but then again most people don't believe in strict materialism.
My response is that there isn't a point to miss.
That strikes me as the genuine opposite of what someone with a realistic understanding of biology would believe, but I guess people can call themselves whatever they like. It reads as unfalsifiable Cartesian Dualism, or a close relative, and something forwarded without evidence is worth no time spent rebutting with evidence.
What is so mysterious about this "intentionality"? Give the Room a prompt that requires it to reason about being a Chinese Room. Problem solved.
What is the mechanism by which a thought is imbued with "intentionality"? Where, from a single neuron, to a brain lobe, to the whole human, does it arise?
Realism is a term of art in metaphysics in particular and philosophy in general. It is the view that reality exists outside of the processes of the mind, the opposite of anti-realism (solipsism, skepticism, etc.).
For what it's worth, Skepticism, which I take to be your view if you're making this objection, is also unfalsifiable. As are all statements in metaphysics.
I happen to be a metaphysical skeptic myself, but this isn't an argument. We're talking about something more fundamental than notions of falsifiability or correspondence. You can't even use logic.
What isn't? Consciousness is the most mysterious phenomenon I have ever experienced. It is so mysterious in fact as to be a centerpiece of many religious traditions.
Why do humans go about doing things on their own instead of merely reacting to their environment? Is it just a more complex form of the instinctive behavior we see in other animals or something entirely different? And why do I have qualia? These are mysteries.
Unless the writer of the manual understands reasoning to a sufficient degree as to provide exhaustive answers to all possible questions of the mind, this isn't possible. And it certainly isn't within the purview of the thought experiment as originally devised.
We don't know yet. We may possibly never know. But we can observe the phenomenon all the same.
If we really want to get into this, then proving (and disproving) anything without first assuming something is mathematically impossible.
This makes axioms necessary for being a functional sapient entity. Axioms are thus incredibly precious, and not to be squandered or proliferated lightly.
To hold as axiomatic that there exists some elan vital of "intent" that the room lacks, but a clearly analogous system in the human brain itself possesses, strains credulity to say the least. If two models of the world have the same explanatory power, and do not produce measurable differences in expectation, the sensible thing to do is adopt the one that is more parsimonious.
(It would help if more philosophers had even a passing understanding of Algorithmic Information Theory)
Why not? What exactly breaks if we ask that the creator of the Room make it so?
It is already a very unwieldy object; a pure look-up table that could converse in Chinese is an enormous thing. Or is it such an onerous ask that we go beyond "Hello, where is the library?" in Chinese? You've already shot for the moon; what burden the stars?
If the Room can be equipped to productively answer questions that require knowledge of the inner mechanisms of the Room, then the problem is solved.
For consciousness? Maybe. I'd be surprised if we never got an answer to it, and a mechanistic one to boot. Plenty of mysterious and seemingly ontologically basic phenomena have crumbled under empirical scrutiny.
Non-functionalists disagree that it is analogous. So you need to actually make that argument, beyond "it is obviously so because it is so from the functionalist standpoint".
Moreover, you're defending two contradictory positions here.
On the one hand, you seem ready to concede to metaphysical skepticism and the idea that knowledge is impossible. On the other hand, you're using the Naive Empiricist idea that systems can only be considered to exist if they have measurable outcomes. These are not compatible.
If what you're doing is simply instrumentally using empiricism because it works, you must be ready to admit that there are truths that are possibly outside of its reach, including the inner workings of systems that contain hidden variables. Otherwise you are not a skeptic.
It requires the assumption that cognition is reducible to computation, which makes the entire experiment useless as a prop to discuss whether that view is or isn't satisfactory. It turns it into a tautology.
If computationalism is true, computationalism is true.
You should be careful with that line of thinking.
Surely you must be familiar with the story of Lord Kelvin's address to the Royal Institution in which he stipulated that physics was now almost totally complete, save for two small clouds.
Explaining those "small" issues would of course end up requiring the creation of special relativity and quantum mechanics, which were neither small tasks nor are they complete to this day, and which unearthed a lot more problems along the way.
Whatever one thinks of our epistemic position, I always recommend humility.
On the flip side... how is the thought experiment helping illustrate anything to anyone who doesn't already agree with Searle's take? It's as if he's saying "...and obviously the room doesn't know anything so functionalism is wrong."
One man's modus ponens is another man's modus tollens. 🤷
I believe the original intent was indeed that the concept of the room being intelligent should be absurd enough to discredit the idea of functionalism: the point being that something specifically designed to be, ostensibly, fake would still pass the bar for what people would consider artificially intelligent.
The popularity of the thought experiment is, I think, a good example of a scissor statement: depending on your metaphysical outlook, you will be puzzled that anybody could ever think the room is, or isn't, speaking Chinese.
Hence, to disregard the experiment as fruitless is a mistake in my view; it's interesting precisely because it generates wildly different certainties.
I am large, I contain multitudes. As far as I'm concerned, there is no inherent contradiction behind my stance.
Knowledge without axiomatic underpinning is fundamentally impossible, due to infinite regress. Fortunately, I do have axioms, and presumably some of them overlap with yours, or else we wouldn't have common ground for useful conversation.
I never claimed "skeptic" as a label; that's your doing, so I can only apologize if it doesn't fit me. If there are truths beyond materialist understanding, regretfully we have no other way of establishing them. What mechanism ennobles non-materialists, letting them pick out Truths, safe from materialism, out of the ether of all possible concepts? And how does it beat a random number generator that returns TRUE or FALSE for any conjecture under the sun?
I must then ask them to please demonstrate where a Chinese Room, presumably made of atoms, differs from human neurons, also made of atoms.
I reject your claim this is a tautology. A Chinese Room that speaks Chinese is a look-up table. A Chinese Room that speaks Chinese while talking about being a Chinese Room is a larger LUT. Pray tell what makes the former valid, and the latter invalid. Is self-referentiality verboten? Can ChatGPT not talk about matrix multiplication?
I'm all for epistemic humility, but I fail to see the relevance here. It's insufficient grounds for adding more ontologically indivisible concepts to the table than are strictly necessary, and Searle's worldview doesn't even meet necessity, let alone strictness.
There's epistemic humility, and there's performative humility, a hemming and hawing and wringing of hands that we just can't know that things are the way they seem, there must be more, and somehow this validates my worldview despite it having zero extra explanatory power.
Please understand that words refer to concepts, in this case the specific metaphysical position that you claim to adhere to, which is incompatible with materialism.
Now, since you seem to claim to be an instrumental materialist only, which I provided for in my statement, you can't, in good faith, refute anti-materialism from a materialist standpoint, since you have renounced your claim to the truth and no set of axioms is privileged.
You can do it from within its own framework, or you can simply conjecture. But that doesn't seem to be what you're doing here.
The same as materialists. Philosophy.
We have established that disagreeing with someone's axioms doesn't entitle you to any sort of metaphysical high ground, have we not?
You can't assume that cognition can be reduced to computation; this has to be argued. I mean, you can assume it if you want, but then it is a tautology.
The fact that Searle did not make this assumption as part of his statement of the thought experiment, if by validity you mean relevance. I don't see the point in discussing the tautological version of the thought experiment, and neither do you, since the initial impulse of this conversation is that it would be obviously useless.
Given I've not actually provided my view on this topic here, I don't see how I could be engaging in this, if that's what you're trying to imply.
There can be more, and you're acting with a certainty that does not recognize this, which I find unbecoming.
Your tests have the exact same "problem" as the Turing Test, though. There's no way to tell if the bot actually "understands" Japanese just because it is able to produce Japanese words that are understandable to Japanese people after interacting with Japanese people a bunch. There's no way to tell if the bot actually "understands" physics just because it responds to an equation with symbols that a learned physicist would judge as "correct" after interacting with a bunch of physics textbooks or lectures or whatever. It could just be updating the mappings within its Chinese room.
One might say that updating the mappings in its Chinese room is essentially another way of describing "understanding." In which case the Turing Test also qualifies; if the chatbot is able to update its mappings during its conversation with you such that it appears to you as indistinguishable from a human, then that's equivalent to a bot updating its mappings through its conversations with Japanese people such that it appears to Japanese people as someone who understands Japanese.
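As a rough illustration of how bare "updating the mappings" can be (the phrases and the learning rule below are invented for the example, not a claim about any real chatbot):

```python
# "Understanding" read as nothing more than updating the room's mappings:
# the bot folds its interlocutor's corrections back into its lookup table.

mappings: dict[str, str] = {}

def reply(prompt: str) -> str:
    return mappings.get(prompt, "I don't know that one yet.")

def learn(prompt: str, correction: str) -> None:
    """Store whatever the interlocutor indicates the right response is."""
    mappings[prompt] = correction

print(reply("ありがとう"))               # -> "I don't know that one yet."
learn("ありがとう", "どういたしまして")   # a Japanese speaker supplies the expected reply
print(reply("ありがとう"))               # -> "どういたしまして"
```

Whether that revision process counts as the bot coming to "understand" the phrase is exactly the question the test brackets.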
I guess the point of my conjecture is that understanding is required for intelligence. And one way to get after intelligence is putting an agent in a situation where it has no previous experience or models to work from and expecting it to solve problems.
Where I agree with the idea behind the Chinese Room is exactly that. Yes, the agent can answer questions about the things it's supposed to be able to answer questions about, well enough to fool an onlooker asking about the subject it's been trained on. But take the same agent off script in some way: stop asking about the Chinese literature it was trained to answer questions about and start asking about Chinese politics, or the weather, or the Kansas City Chiefs, and an agent with no agency, one that doesn't actually have a mental model of what the characters it's matching mean, will be unable to adapt. It cannot answer the new questions because it specifically doesn't understand any of the old questions, nor can it understand the new ones. And likewise, if the questions in English are not understood, it would be impossible to get the agent to understand Japanese, because it's unable to derive meanings from words; it's just stringing them together in ways that its training tells it are pleasing to users.
It’s also a pretty good test for human understanding of a given subject. If I can get you to attempt to use the information you have in a novel situation and you can do so, you understand it. If you can only regurgitate things you have been told in exactly the ways you have been told to do it, you probably don’t.
Perhaps I'm not as familiar with the Chinese Room experiment as I thought I was. I thought the Chinese Room posited that the room contained mappings for literally every single thing that could be input in Chinese, such that there was nothing a Chinese person outside the room could say whose response would reveal a lack of understanding of Chinese? If the Chinese Room posits that the mappings contained in the room are limited, then that does change things, but then I also believe it's not such a useful thought experiment.
I personally don't think "understanding," at least the way we humans understand (heh) it, is a necessary component of intelligence. I'm comfortable with calling the software that underlies the behavior of imps in Doom "enemy artificial intelligence," even though I'm pretty sure there's no "understanding" going on in my 486 ThinkPad laptop that's causing the blobs of brown pixels to move on the screen in a certain way based on what my player character is doing, for instance. If it talks like a duck and walks like a duck and is otherwise completely indistinguishable from a duck in every way that I can measure, then I'll call it a duck.
Yeah, to tie this back to the above thread about the ramifications of massively-increased automation, what the hell does it matter if an AI really understands anything, if it puts most of us out of a job anyways? Philosophy is for those who don't have to grind for their bread.