This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
It's a really short paper; you could just read it -- the thrust of it is that while the room might speak Chinese, this is not evidence that there's any understanding going on. Which certainly seems to be the case for the latest LLMs -- they are almost a literal implementation of the Chinese Room.
I have read it (here). @self_made_human seems to be correct. I think Searle's theory of epistemology has been proven wrong. «Speak Chinese» (for real, responding meaningfully to a human-scale distribution of Chinese-language stimuli) and «understand Chinese» are either the same thing or we have no principled way of distinguishing them.
This is just confused reasoning. I don't care what Searle finds obvious or incredible. The interesting question is whether a conversation with the Chinese room is possible for an inquisitive Chinese observer, or whether the illusion of reasoning will unravel. If it unravels trivially, this is just a parlor trick and irrelevant to our questions regarding clearly eloquent AI. Inasmuch as it is possible – by construction of the thought experiment – for the room to keep up an appearance that's indistinguishable from a human's, it just means that the system of programming + intelligent interpreter amounts to the understanding of Chinese.
Of course this has all been debated to death.
The point of it is that you could make a machine that responds to Chinese conversation, strictly staffed by someone who doesn't understand Chinese at all -- that's it.
Maybe where people go astray is that the "program" is left as an exercise for the reader, which is sort of a sticky point.
Imagine instead of a program there are a bunch of Chinese people feeding Searle the results of individual queries, broken up into pretty small chunks per person, let's say. The machine as a whole does speak Chinese, clearly -- but Searle does not. And nobody in particular is in charge of "understanding" anything -- it's really pretty similar to current GPT incarnations.
All it's saying is that just because a machine can respond to your queries coherently, that doesn't mean it's intelligent. Mostly it's an argument against the usefulness of the Turing test, as others have said.
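Since the paper leaves the "program" as an exercise, here is the most minimal concrete version -- a toy sketch of my own, assuming the rules are just a finite lookup table, with arbitrary placeholder phrases:

```python
# Toy Chinese Room: whoever runs this needs zero Chinese -- they just
# match the incoming string against a rule book and copy out the reply.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "Fine, thanks."
    "你会说中文吗？": "会一点。",    # "Do you speak Chinese?" -> "A little."
}

def room_reply(symbols: str) -> str:
    """Verbatim lookup; fall back to a stock deflection."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room_reply("你好吗？"))  # -> 我很好，谢谢。
```

The system "speaks" whatever the table happens to cover, while no component of it understands anything.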
I'm not sure you could; e.g., there are many conversation prompts you need situational awareness for. If the machine can account for that, it's actually a lot more active than implied, and does nontrivial information processing that goes beyond calculations over static rules. Even if we stipulate a Turing Test where the Room contains either such a machine or a perfectly boxed human behind a terminal, I am sure there are questions a non-intelligent machine of any feasible complexity will fail at.
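To put a rough number on "any feasible complexity" -- my own back-of-envelope, with every parameter invented:

```python
# A static rule table keyed on the full conversation history, so the room
# can handle context-dependent prompts. All sizes are made-up assumptions.
vocab = 3000        # assumed working vocabulary
turn_len = 10       # assumed tokens per utterance
turns = 5           # assumed conversation depth
entries = (vocab ** turn_len) ** turns    # one rule per possible history
print(f"~1e{len(str(entries)) - 1} rule-book entries")   # ~1e173
```

That's vastly more entries than there are atoms in the observable universe (~1e80), which is the sense in which a pure lookup machine of feasible size can't keep up appearances.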
I think it's similar to the brain: no isolated small part of it «understands» the world. If you find a part that outputs behaviors similar to products of understanding – dice it up into smaller pieces until you lose it. Irreducible complexity is a pretty obvious idea.
Most philosophers, like poets, are scientists who have failed at imagination.
One person's modus ponens is another's modus tollens.
Are you a stochastic parrot? Because I'm not; I don't think you really think you are either.
Like many of the doomer arguments, this one is far too silly.
It's not a doomer argument, it's just a snarky response to an obvious bullshit idea. Bender & Gebru's paper that introduced the concept is very bad and unduly dismissive of AI (and of entire fields in philology, linguistics, semiotics and computer science); this becomes clearer with every new result. Like, generalization of instruction-following between different languages – we can explain that for humans, using notions like semantic roles. Do you feel like that's possible for a Chinese Room-style arrangement that thoughtlessly goes through rules? If it is, then it's using abstractions in a very humanlike way; so how can such a Chinese Room indicate absence of understanding? If it isn't, then what do we call it – a metalinguistic parrot, or a true intelligence? What is thought, really, if not that cognition which transcends language?
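For concreteness, the most literal reading of a "stochastic parrot" is something like a Markov chain over observed token transitions -- a toy of my own construction, not anything from the paper:

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Record which token follows which: bigram transition counts."""
    transitions = defaultdict(list)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        transitions[prev].append(nxt)
    return transitions

def parrot(transitions: dict, seed: str, length: int = 12) -> str:
    """Emit tokens by sampling transitions -- no abstraction anywhere."""
    out = [seed]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # sample, don't reason
    return " ".join(out)

model = train("the room speaks chinese but the room does not understand chinese")
print(parrot(model, "the"))
```

A mechanism like this trained on English text has no path to following instructions given in another language; whatever lets LLMs do that has to be something more than token-level mimicry.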
For me this shit is actually hard and non-obvious; I take these ideas seriously enough to doubt gut feelings. I do not know if I am a stochastic parrot or what it even means to be one. I am not sure stochastic parrots, as understood by Gebru and endowed with general language ability, exist at all; or if they do, whether non-parrots may exist, at least among humans; like sama hints at here. Hlynka dunks on me and @2rafa for entertaining the possibility that human thought is largely analogous to an LLM's parroting, but it's hard to feel insulted by that. I pride myself on possessing refined enough self-awareness to notice mechanical aspects of my own mind, though it's presumably a far cry from enlightenment. I sometimes identify people as NPCs precisely because I notice signs of a kindred spirit in their revealed thought.
Me having an internal psychological state that, at least, feels much more complex and alive (and what dreams I have! Human media would not be ready for their depth!) than my behavioral outputs has zero bearing on whether an LLM is a stochastic parrot or not. Just because I can crack it open and understand its activity as a big lump of pretty trivial and comprehensible computations doesn't mean that it doesn't have what amounts to some analogue of my internal state. My own skull could also be cracked open and its contents studied as a mess of neuron activations.jpg; our only obstacles to doing that are logistical and perhaps ethical, especially seeing as Neuralink got the green light for human trials from the FDA.
I am sometimes accused of opposite things, like extreme materialism or metaphysical whimsy, Russian chauvinism or kowtowing to the West, being a fascist knuckledragger or an effete European intellectual twink (sometimes by the same people). It wouldn't surprise me if some started to believe me a stochastic parrot too. Nuance and uncertainty are penalized.