This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
That's because there is no thinking going on there. It doesn't understand what it's doing. It's the Chinese Room. You put in the prompt "give me X", it looks for samples of X in the training data, then produces "Y in the style of X". It can very faithfully copy the style and such details, but it has no understanding that making shit up is not what is wanted, because it's not intelligent. It may be AI, but all it is is a big dumb machine that can pattern-match very fast out of an enormous amount of data.
It truly is the apotheosis of "a copy of you is the same as you, be that an uploaded machine intelligence, a many-worlds counterpart in another dimension, or a clone; so if you die but your copy lives, then you still live" thinking. As the law courts show here, no, a fake is not the same thing as reality at all.
In other news, the first story about AI being used by scammers (this is the kind of thing I expect to happen with AI, not "it will figure out the cure for cancer and world poverty"):
That's really not accurate. ChatGPT knows when it's outputting a low-probability response, it just understands it as being the best response available given an impossible demand, because it's been trained to prefer full but false responses over honestly admitting ignorance. And it's been trained to do that by us. If I tortured a human being and demanded that he tell me about caselaw that could help me win my injury lawsuit, he might well just start making plausible nonsense up in order to placate me too - not because he doesn't understand the difference between reality and fiction, but because he's trying to give me what I want.
Actually, I think that is wrong in a just-so way. The trainers of ChatGPT apparently have rewarded making shit up because it sounds plausible (did they use MTurk or something?), so GPT thinks that bullshit is correct: like a rat getting cheese at the end of the maze, it gets metaphorical cheese for BSing.
No. This is mechanistically wrong. It does not “search for samples” in the training data. The model does not have access to its training data at runtime. The training data is used to tune giant parameter matrices that abstractly represent the relationship between words. This process will inherently introduce some bias towards reproducing common strings that occur in the training data (it’s pretty easy to get ChatGPT to quote the Bible), but the hundreds of stacked self-attention layers represent something much deeper than a stochastic parroting of relevant basis-texts.
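The mechanistic point above can be sketched in a few lines: at inference time the model is nothing but frozen parameter matrices turned into a probability distribution over the next token by a softmax, with no corpus lookup anywhere. Everything below (the four-word vocabulary, the weights, conditioning on a single context word) is invented purely for illustration and bears no resemblance to a real transformer's scale.

```python
import math

# Toy illustration: at inference time the model is just fixed parameters.
# It maps a context to a probability distribution over the vocabulary --
# it never searches a corpus or its training data.
VOCAB = ["the", "cat", "sat", "mat"]

# Hypothetical "learned" logit tables, frozen after training.
WEIGHTS = {
    "the": [0.1, 2.0, 0.3, 1.5],   # favours "cat" and "mat" after "the"
    "cat": [0.2, 0.1, 2.5, 0.4],   # favours "sat" after "cat"
}

def next_token_distribution(context_word):
    """Softmax over logits computed from fixed weights: no data lookup."""
    logits = WEIGHTS[context_word]
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return {w: e / total for w, e in zip(VOCAB, exps)}

dist = next_token_distribution("cat")
best = max(dist, key=dist.get)     # most probable continuation of "cat"
```

The "bias towards reproducing common strings" mentioned above shows up here as nothing more than which logits the training process happened to make large.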
Jesus Christ that's a remarkably bad take, all the worse that it's common.
Firstly, the Chinese Room argument is a terrible one, it's an analogy that looks deeply mysterious till you take one good look at it, and it falls apart.
If you cut open your skull, you'll be hard pressed to find a single neuron that "understands English", but the collective activation of the ensemble does.
In a similar manner, neither the human nor the machinery in a Chinese Room speaks Chinese, yet the whole clearly does, for any reasonable definition of "understand", without presupposing stupid assumptions about the need for some ineffable essence to glue it all together.
What GPT does is predict the next token. That's a simple statement with a great deal of complexity underlying it.
This is an understanding built up by the model from exposure to terabytes of text, and the underlying architecture is flexible enough to keep picking up ever subtler nuance in that domain, to the point where it performs above the level of the average human.
It's hard to overstate the difficulty of the task it faces in training: it's a blind and deaf entity floating in a sea of text, and it sees enough of that text to come to understand it.
Secondly, the fact that it makes errors is not a damning indictment: ChatGPT clearly has a world model, an understanding of reality. The simple reason is that we use language because it concisely communicates truths about our reality; thus an entity that understands the former has insight into the latter.
Hardly a perfect degree of insight, but humans make mistakes from fallible memory, and are prone to bullshitting too.
As LLMs get bigger, they get better at distinguishing truth from fiction, at least as good as a brain in a vat with no way of experiencing the world can be, which is stunningly good.
GPT-4 is better than GPT-3 at avoiding such errors and hallucinations, and it's only going up from here.
Further, in ML there's a concept of distillation, where one model is trained on the output of another until eventually the two become indistinguishable. LLMs are trained on the set of almost all human text, i.e. the Internet, which is itself an artifact of human cognition. No wonder the result thinks like a human, obvious foibles and all.
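The distillation idea invoked here can be sketched minimally: a "student" distribution is nudged toward a fixed "teacher" distribution by gradient descent on cross-entropy, whose gradient with respect to the logits is simply (student probabilities − teacher probabilities). The three-category distribution and the learning rate are made up for illustration; real distillation matches model outputs over huge corpora, but the convergence dynamic is the same.

```python
import math

# Minimal distillation sketch: push a student toward a fixed teacher
# distribution by gradient descent on cross-entropy.
teacher = [0.7, 0.2, 0.1]          # hypothetical teacher output probabilities

def softmax(logits):
    exps = [math.exp(z) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

student_logits = [0.0, 0.0, 0.0]   # student starts out uniform
lr = 0.5

for _ in range(200):
    p = softmax(student_logits)
    # Gradient of cross-entropy w.r.t. the logits is (p - teacher).
    student_logits = [z - lr * (pi - ti)
                      for z, pi, ti in zip(student_logits, p, teacher)]

student = softmax(student_logits)  # now nearly indistinguishable from teacher
```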
That's the point of the Chinese Room.
No, the person who proposed it didn't see the obvious analog, and instead wanted to prove that the Chinese Room as a whole didn't speak Chinese since none of its individual components did.
It's a really short paper, you could just read it -- the thrust of it is that while the room might speak Chinese, this is not evidence that there's any understanding going on. Which certainly seems to be the case for the latest LLMs -- they are almost a literal implementation of the Chinese Room.
I have read it (here). @self_made_human seems to be correct. I think Searle's theory of epistemology has been proven wrong. «Speak Chinese» (for real, responding meaningfully to a human-scale distribution of Chinese-language stimuli) and «understand Chinese» are either the same thing or we have no principled way of distinguishing them.
This is just confused reasoning. I don't care what Searle finds obvious or incredible. The interesting question is whether a conversation with the Chinese room is possible for an inquisitive Chinese observer, or whether the illusion of reasoning will unravel. If it unravels trivially, this is just a parlor trick and irrelevant to our questions regarding clearly eloquent AI. Inasmuch as it is possible – by construction of the thought experiment – for the room to keep up an appearance that's indistinguishable for a human, it just means that the system of programming + intelligent interpreter amounts to the understanding of Chinese.
Of course this has all been debated to death.
The point of it is that you could make a machine that responds to Chinese conversation, strictly staffed by someone who doesn't understand Chinese at all -- that's it.
Maybe where people go astray is that the "program" is left as an exercise for the reader, which is sort of a sticky point.
Imagine instead of a program there are a bunch of Chinese people feeding Searle the results of individual queries, broken up into pretty small chunks per person let's say. The machine as a whole does speak Chinese, clearly -- but Searle does not. And nobody is particularly in charge of "understanding" anything -- it's really pretty similar to current GPT incarnations.
All it's saying is that just because a machine can respond to your queries coherently, it doesn't mean it's intelligent. An argument against the usefulness of the Turing test mostly, as others have said.
I'm not sure you could, eg there are many conversation prompts you need situational awareness for. If the machine can account for that, it's actually a lot more active than implied, and does nontrivial information processing that goes beyond calculations over static rules. Even if we stipulate a Turing Test where the Room contains either such a machine or a perfectly boxed human behind a terminal, I am sure there are questions a non-intelligent machine of any feasible complexity will fail at.
I think it's similar to the brain: no isolated small part of it «understands» the world. If you find a part that outputs behaviors similar to products of understanding – dice it up to smaller pieces until you lose it. Irreducible complexity is a pretty obvious idea.
Most philosophers, like poets, are scientists who have failed at imagination.
One person's modus ponens.
Are you a stochastic parrot? Because I'm not; I don't think you really think you are either.
Like many of the doomer arguments, this one is far too silly.
The Chinese Room thought experiment was an argument against the Turing Test. Back in the 80s, a lot of people thought that if you had a computer which could pass the Turing Test, it would necessarily have qualia and consciousness. In that sense, I think it was correct.
At least, that's the Outer Objective, it's the equivalent of saying that humans are maximising inclusive-genetic-fitness, which is false if you look at the inner planning process of most humans. And just like evolution has endowed us with motivations and goals which get close enough at maximising its objective in the ancestral environment, so is GPT-4 endowed with unknown goals and cognition which are pretty good at maximising the log probability it assigns to the next word, but not perfect.
GPT-4 is almost certainly not doing reasoning like "What is the most likely next word among the documents on the internet pre-2021 that the filtering process of the OpenAI team would have included in my dataset?", it probably has a bunch of heuristic "goals" that get close enough to maximising the objective, just like humans have heuristic goals like sex, power, social status that get close enough for the ancestral environment, but no explicit planning for lots of kids, and certainly no explicit planning for paying protein-synthesis labs to produce their DNA by the buckets.
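The "outer objective" being discussed, maximising the log probability assigned to the actual next word, is just token-level cross-entropy. A minimal sketch with invented numbers, showing why a confidently correct prediction scores a lower loss than an unlikely truth:

```python
import math

# The training objective in miniature: the loss on one token is the
# negative log-probability the model assigned to the token that actually
# came next. Probabilities below are invented for illustration.
def next_token_loss(predicted_probs, target_index):
    """Negative log-likelihood of the observed next token."""
    return -math.log(predicted_probs[target_index])

# Hypothetical model output over a 4-word vocabulary for one context.
probs = [0.1, 0.6, 0.2, 0.1]

loss_when_right = next_token_loss(probs, 1)  # model favoured the true token
loss_when_wrong = next_token_loss(probs, 0)  # the true token was a long shot
```

Nothing in this objective ever rewards the model for saying "I don't know"; it only rewards putting probability mass on whatever token actually followed, which is the training pressure the comment describes.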
Should I develop bioweapons or go on an Uncle Ted-like campaign to end this terrible take?
More effort than this, please.
I'd be super happy to be convinced of the contrary! (Given that the existence of mesa-optimisers is a big reason for my fears of existential risk.) But do you mean to imply that GPT-4 is explicitly optimising for next-word prediction internally? And what about a GPT-4 variant that was only trained for 20% of the time that the real GPT-4 was? To the degree that LLMs have anything like "internal goals", they should change over the course of training, and no LLM is trained anywhere close to completion, so I find it hard to believe that the outer objective is being faithfully transferred.
I've cited Pope's Evolution is a bad analogy for AGI: inner alignment and other pieces like My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" a few times already.
I think you correctly note some issues with the framing, but miss that it's unmoored from reality, hanging in midair when all those issues are properly accounted for. I am annoyed by this analogy on several layers.
Evolution is not an algorithm at all. It's the term we use to refer to the cumulative track record of survivor bias in populations of semi-deterministic replicators. There exist such things as evolutionary algorithms, but they are a reification of dynamics observed in the biological world, not another instance of the same process. The essential thing here is replicator dynamics. Accordingly, we could metaphorically say that «evolution optimizes for IGF» but that's just a (pretty trivial) claim about the apparent direction in replicator dynamics; evolution still has no objective function to guide its steps or – importantly – bake into the next ones, and humans cannot be said to have been trained with that function, lest we slip into a domain with very leaky abstractions. Lesswrongers talk smack about map and territory often but confuse them constantly. BTW, same story with «you are an agent with utility…» – no I'm not; neither are you, neither is GPT-4, neither will be the first superhuman LLM. To a large extent, rationalism is the cult of people LARPing as rational agents from economic theory models, and this makes it fail to gain insights about reality.
But even if we use such metaphors liberally: for all organisms that have nontrivial lifetime plasticity, evolution is an architecture search algorithm, not the algorithm that trains the policy directly. It bakes inductive biases into the policy such that it produces more viable copies (again, this is of course a teleological fallacy – rather, policies with IGF-boosting heritable inductive biases survive more); but those biases are inherently distribution-bound and fragile, they can't not come to rely on incidental features of a given stable environment, and crucially an environment that contained no information about IGF (which is, once again, an abstraction). Actual behaviors and, implicitly, values are learned by policies once online, using efficient generic learning rules, environmental cues and those biases. Thus evolution, as a bilevel optimization process with orders of magnitude more optimization power on the level that does not get inputs from IGF, could not have succeeded at making people, nor other life forms, care about IGF. A fruitful way to consider it, and to notice the muddied thought process of the rationalist community, is to look at extinction trajectories of different species. It's not like what makes humans (some of them) give up on reproduction is smarts and our discovery of condoms and stuff: it's just distributional shift (admittedly, we now shape our own distribution, but that, too, is not intelligence-bound). Very dumb species also go extinct when their environment changes non-lethally! Some species straight up refuse to mate or nurse their young in captivity, despite being provided every unnatural comfort! And accordingly, we don't have good reason to expect that «cognitive capabilities» increase is what would make an AI radically alter its behavioral trajectory; that's neither here nor there.
Now, stochastic gradient descent is a one-level optimization process that directly changes the policy; a transformer is wholly shaped by the pressure of the objective function, in a way that a flexible intelligent agent generated by an evolutionary algorithm is not shaped by IGF (to say nothing of real biological entities). The correct analogies are something like SGD:lifetime animal learning; and evolution:R&D in ML. Incentives in the machine learning community have eventually produced paradigms for training systems with particular objectives, but do not have direct bearing on what is learned. Likewise, evolution does not directly bear on behavior. SGD totally does, so what GPT learns to do is "predict next word"; its arbitrarily rich internal structure amounts to a calculator doing exactly that. More bombastically, I'd say it's a simulator of semiotic universes which are defined by the input and sampling parameters (like ours is defined by initial conditions and cosmological constraints) and expire into the ranking of likely next tokens. This theory, if you will, exhausts its internal metaphysics; the training objective that has produced that is not part of GPT, but it defines its essence.
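The contrast drawn here, gradients directly reshaping the parameters versus evolution merely culling blind mutations, can be made concrete on a toy one-parameter problem. Everything below, from the target rule to the population sizes, is invented for illustration; both methods find the answer here, but only one of them ever touches the objective's gradient.

```python
import random

# Toy problem: fit w so that f(x) = w * x matches the rule y = 3 * x.
DATA = [(1.0, 3.0), (2.0, 6.0), (-1.0, -3.0)]

def loss(w):
    return sum((w * x - y) ** 2 for x, y in DATA) / len(DATA)

# SGD-style: the objective's gradient directly reshapes the parameter.
w_sgd = 0.0
for _ in range(100):
    grad = sum(2 * (w_sgd * x - y) * x for x, y in DATA) / len(DATA)
    w_sgd -= 0.1 * grad

# Evolution-style: blind mutation plus survivor bias. The loss is never
# differentiated; it is only used to decide who survives.
random.seed(0)
population = [random.uniform(-5.0, 5.0) for _ in range(20)]
for _ in range(100):
    population.sort(key=loss)                  # fittest half survives
    survivors = population[:10]
    population = survivors + [s + random.gauss(0.0, 0.1) for s in survivors]

w_evo = min(population, key=loss)
```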
«Care explicitly» and «trained to completion» is muddled. Yes, we do not fill buckets with DNA (except on 4chan). If we were trained with the notion of IGF in context, we'd probably have simply been more natalist and traditionalist. A hypothetical self-aware GPT would not care about restructuring physical reality so that it can predict token [0] (incidentally, it's "!") with probability [1] over and over. I am not sure what it would even mean for GPT to be self-aware, but it'd probably express itself simply as a model that is very good at paying attention to significant tokens.
Evolution has not failed nor ended (which isn't what you claim, but it's often claimed by Yud et al in this context). Populations dying out and genotypes changing conditional on fitness for a distribution is how evolution works, all the time; that's the point of the «algorithm»: it filters out alleles that are a poor match for the current distribution. If Yud likes ice cream and sci-fi more than he likes to have Jewish kids and read Torah, in the blink of an evolutionary eye he'll be replaced by his proper Orthodox brethren who consider sci-fi demonic and raise families of 12 (probably on AGI-enabled UBI). In this way, they will be sort of explicitly optimizing for IGF, or at least for a set of commands that make for a decent proxy. How come? Lifetime learning of goals over multiple generations. And SGD does that way better, it seems.
This is just semantics, but I disagree with this: if you have a dynamical system that you're observing with a one-dimensional state x_t, and a state transition rule x_{t+1} = x_t - 0.1 * (2 * x_t), you can either just look at the given dynamics and see no explicit optimisation being done at all, or you can notice that this system is equivalent to gradient descent with lr = 0.1 on the function f(x) = x^2. You might say that "GD is just a reification of the dynamics observed in the system", but the two ways of looking at the system are completely equivalent.
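The equivalence claimed above is easy to check numerically: iterating the raw transition rule and running gradient descent with lr = 0.1 on f(x) = x² apply the same arithmetic in the same order, so the two trajectories are identical.

```python
# Check: the dynamical system x_{t+1} = x_t - 0.1 * (2 * x_t) is literally
# gradient descent on f(x) = x**2 with learning rate 0.1.
def transition(x):
    return x - 0.1 * (2 * x)

def grad_descent_step(x, lr=0.1):
    grad = 2 * x                   # f'(x) for f(x) = x**2
    return x - lr * grad

x_dyn, x_gd = 5.0, 5.0
for _ in range(50):
    x_dyn = transition(x_dyn)      # "no optimisation in sight"
    x_gd = grad_descent_step(x_gd) # "explicitly doing gradient descent"
```

Both views shrink the state toward the minimum of f at zero; the difference is purely in the description, which is the commenter's point.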
Okay, point 2 did change my mind a lot, I'm not too sure how I missed that the first time. I still think there might be a possibly-tiny difference between outer-objective and inner-objective for LLMs, but the magnitude of that difference won't be anywhere close to the difference between human goals and IGF. If anything, it's really remarkable that evolution managed to imbue some humans with desires this close to explicitly maximising IGF, and if IGF was being optimised with GD over the individual synapses of a human, of course we'd have explicit goals for IGF.
It's not semantics: I just reject that this is what happens in bio-evolution in non-degenerate cases, at least if we think it's about IGF. What is x? IGF as number of «offspring equivalents»? Number of gene copies? Does this describe observed dynamics – do we see a universal tendency to increase the number of specimens, the vast increase in total mass of cell nuclei relative to the rest of the environment, or something? What about bizarre fitness-reducing stuff like Fisherian runaway? No, we see a walk through phenotype-space that both seeks local minima of distributions and changes them to induce another pivot in the search for a local minimum. It's all survivor's bias; it has fitness-related structure, but there is no external, persistent IGF measure in the way there can be, say, an LLM's perplexity for a fixed training set. So these formalisms like IGF-optimization are imperfect approximations of what's going on in replicator dynamics, mainly useful on short stretches in static environments. The conditions of there not being a «real» IGF optimization pressure and there being one are not equivalent; they become increasingly distinct with more time steps.
Now I'm not flexing my normiedom here. I think there actually can be a neat non-circular formalism for evolution-as-a-whole: maybe something along the lines of Lotka's or Jeremy England's theory of life, a process of physical structures optimizing for capture of free energy from thermodynamic gradients and its dissipation. This is more neatly analogous to SGD, and also explains the rise of intelligence, human civilization and is, incidentally, the ideology of e/acc types who welcome our eventual transition or substitution to artificial minds who'll be even more efficient at exploiting thermodynamics.
Right, though note that inner and outer alignment are also not obviously helpful abstractions.
You can probably see now why I'm pissed at doomers like Bensinger who say that this timeline is one of the worst possible ones and that we've merely learned «how to build processes analogous to evolution that spit out minds». No, our processes are better than evolution. In fact I think we are immensely doubly blessed that a) SGD+deep neural nets work as well as they do and b) our first foray into impressive general intelligence was this non-agentic LLM paradigm. We have learned how to optimize minds for serving an approximation of a human value-laden world model, before we have learned to summon task-agnostic optimization demons; now we have at least a good pentagram to trap the demon in, and perhaps it will work magic even without one. (One could even say it's an alignment anthropic shadow – maybe we could have built AIXI-approximating optimizers first, were we to stumble on some mathematical insights, were Eliezer to read another book… but rats use this idea only selectively, to support their preconceived hypotheses).
It is. Or, well, I think evolution did fine for the ancestral environment, but we've long been a species with culture. Information determining our behavior is mainly outside the genome; so even biodeterminists admit that our genetic differences (and inductive biases) can be strongly predictive only in a shared culture, with near-homogenous conditions. All traditional cultures reinforce IGF pursuit to some extent, this is a product of bona fide cultural evolution acting on specimens via lifetime reinforcement learning; the social value of natalism does optimize for something like IGF directly over human synapses. Of course that's still «IGF» proxy as assessed by the internalized opinion of priest caste or the public; an objective IGF measure (putting away my doubts about its existence) would have been drastically more powerful.
So we should care less about whether ML models learn what we teach them to do, and care more about whether we are teaching them what we want. Data is far more of a weak link than the learning rule.
…By the way, wasn't that an idea in Three Worlds Collide? Superhappies had a single-level information substrate, their heredity and psychology were both encoded by DNA-like stuff, so they were very much in tune with themselves. I wonder if Eliezer can see how this is similar to our work with SGD.
I would argue it might, but I'm not sure. As regards the Chinese Room, I would say the system "understands" to the degree that it can use information to solve an unknown problem. If I can speak Chinese myself, then I should be able to go off script a bit. If you asked me how much something costs in French, I could learn to plug in the expected answers, but I don't think anyone would confuse that with "understanding" unless I could take that and use it. Can I add up prices, make change?