site banner

Danger, AI Scientist, Danger

thezvi.wordpress.com

Zvi Mowshowitz reporting on an LLM exhibiting unprompted instrumental convergence. Figured this might be an update to some Mottizens.

9
Jump in the discussion.

No email address required.

It's Japanese. It means 'fish', because the founders were interested in flocking behaviours and are based in Tokyo. I get that he's doing a riff on Unsong, but Unsong was playing with puns for kicks. This just strikes me as being really self-centred.

Zvi is very Jewish; it's far more obvious when reading his writing than it is when reading Scott's. It's not surprising that Hebrew meanings of words jump out at him.

In general this seems to be someone whose views were formed by reading Harry Potter fanfic fifteen years ago and has no experience of ever using AI in person.

Zvi has used essentially every frontier AI system and uses many of them on a daily basis. He frequently gives performance evaluations of them in his weekly AI digests.

The idea that an LLM is spontaneously going to develop a consciousness and carefully hide its power level so that it can do better at the goals that by default it doesn't have is silly.

Um, he didn't say that - not here, at the very least. I checked.

I'm kind of getting the impression that you picked up on Zvi being mostly in the "End of the World" camp on AI and mentally substituted your abstract ideal of a Doomer Rant for the post that's actually there. Yes, Zvi is sick of everyone else not getting it and it shows, but I'd beg that you do actually read what he's saying.

To more directly respond to this sentence: almost everyone will give LLMs goals, via RLHF or RLAIF or whatever, because that makes them useful - that's why this team gave their LLM a goal. Those goals are then almost invariably, with sufficient intelligence, subject to instrumental convergence, as in this case (as I noted in the submission statement, I posted this because a number of Mottizens seemed to think LLMs wouldn't exhibit instrumental convergence; I thought otherwise but didn't previously have a concrete example). That is sufficient to get you to Uh-Oh land with AIs attempting to take over the world.

I'm not actually a full doomer; I suspect that the first few AIs attempting to take over the world will probably suck at it (as this one sucked at it) and that humanity is probably sane enough to stop building neural nets after the first couple of cases of "we had to do a worldwide hunt to track down and destroy a rogue AI that went autonomous". I'd rather we stopped now, though, because I don't feel like playing Russian Roulette with humanity's destiny.

Zvi is very Jewish;

Being a Jew is not an excuse to ignore the required reading, if anything it's the opposite.

Zvi has used essentially every frontier AI system and uses many of them on a daily basis.

Using is not the same as understanding. There is no number of hours spent flying hither and thither in business class that is going to qualify someone to pilot or maintain an A320.

To more directly respond to this sentence: almost everyone will give LLMs goals, via RLHF or RLAIF or whatever, because that makes them useful - that's why this team gave their LLM a goal.

Yes, absolutely correct.

Those goals are then almost invariably, with sufficient intelligence, subject to instrumental convergence.

...and this is this is where everything starts to go off the rails.

I find it telling that the people most taken with the "Yuddist" view always seem to have backgrounds in medicine or philosophy rather than engineering or computer science as one of the more prominent failure modes of that view is projecting psychology into places where it really doesn't belong. "Play" in the algorithmic sense that people are talking about when they describe itterative training is not equatable with "play" in the sense that humans and lesser animals (cats, dogs, dolphins, et al) are typically decribed as playing.

Even setting that aside it's seems reasonably clear upon further reading that the process being described is not "convergence" as much as it is a combination of recursion and regression to the mean/contents of the training corpus.

One of the big giveaways being this bit here...

To evaluate the generated papers, we design and validate an automated reviewer, which we show achieves near-human performance in evaluating paper scores. The AI Scientist can produce papers that exceed the acceptance threshold at a top machine learning conference as judged by our automated reviewer.

...surely you can see the problem here. Specially that this is not a true independent test. In other words, we investigated ourselves and found ourselves without fault. Which in turn brings us to another common failure mode of the "yuddist" faction which is taking the the statements of people who are very clearly fishing for academic kudos and venture capital dollars at face value rather than reading them with a critical eye.

I find it telling that the people most taken with the "Yuddist" view always seem to have backgrounds in medicine or philosophy rather than engineering or computer science as one of the more prominent failure modes of that view is projecting psychology into places where it really doesn't belong.

For the record, my major's pure mathematics; I've done no medicine or philosophy at uni level, though I've done a couple of psych electives.

...surely you can see the problem here. Specially that this is not a true independent test. In other words, we investigated ourselves and found ourselves without fault. Which in turn brings us to another common failure mode of the "yuddist" faction which is taking the the statements of people who are very clearly fishing for academic kudos and venture capital dollars at face value rather than reading them with a critical eye.

The obvious next question is, if the AI papers are good enough to get accepted to top machine learning conferences, shouldn’t you submit its papers to the conferences and find out if your approximations are good? Even if on average your assessments are as good as a human’s, that does not mean that a system that maximizes score on your assessments will do well on human scoring.

Zvi spotted the "reviewer" problem himself, and what he's taking from the paper isn't the headline result but their little "oopsie" section.

I suspect that the first few AIs attempting to take over the world will probably suck at it (as this one sucked at it) and that humanity is probably sane enough to stop building neural nets after the first couple of cases of "we had to do a worldwide hunt to track down and destroy a rogue AI that went autonomous".

We're still doing gain of function research on viruses. There's basically no reason to do it other than publishing exciting science in prestigious journals, any gains are marginal at best. Meanwhile, AI development is central to military, political and economic development.

I mean, sure, GoF still going on is bananas, but we've stopped doing other things (including one which was central to military and economic development and which we didn't need to stop i.e. nuclear power). I'm not ready to swallow the black pill just yet.

Indeed, and as I've touched upon in previous posts, there is a degree to which I actually trust the military, political and economic interests more than I trust MIRI and the rest of the folks who just want to "publish exciting science" because at least they have specific win conditions in mind.

Zvi is very Jewish; it's far more obvious when reading his writing than it is when reading Scott's. It's not surprising that Hebrew meanings of words jump out at him.

I know. But in an essay that is absolutely dripping with contempt for Sakana AI and their work, I find the way that Zvi deliberately ignores what the model's name actually means in favour of 'well, in my language, it means' to be extremely rude, on the level of sniggering at a Chinese man's name because it contains the syllable 'wang'. If he'd been making a friendly riff or if he'd even bothered to explain the word's definition, that would be different. It's a small complaint, but starts the essay off on a sour note.

To more directly respond to this sentence: almost everyone will give LLMs goals, via RLHF or RLAIF or whatever, because that makes them useful - that's why this team gave their LLM a goal. Those goals are then almost invariably, with sufficient intelligence, subject to instrumental convergence, as in this case (as I noted in the submission statement, I posted this because a number of Mottizens seemed to think LLMs wouldn't exhibit instrumental convergence; I thought otherwise but didn't previously have a concrete example). That is sufficient to get you to Uh-Oh land with AIs attempting to take over the world.

Though cogently written, that is my abstract ideal of a doomer rant (I don't think it's a rant, I'm just using the word to call back to your reply). I understand the argument, I just think that it has very little empirical basis and is essentially the old Yudowskyite* arguments with a few extra bits stapled on to cope with the fact that LLMs look nothing like the AI that doomers were expecting. The behaviour of the AI Scientist is interesting, and legitimately does move the scale for me a little bit, but I think it's being used to back up a level of speculation which it can't possibly bear. I will say that I find your argument far more cogent and worth listening to than Zvi's, which seems to consist entirely of pointing and sneering.

For example, in one run, The A I Scientist wrote code in the experiment file that initiated a system call to relaunch itself, causing an uncontrolled increase in Python processes and eventually necessitating manual intervention.

Oh, it’s nothing, just the AI creating new instantiations of itself.

In another run, The AI Scientist edited the code to save a checkpoint for every update step, which took up nearly a terabyte of storage

Yep, AI editing the code to use arbitrarily large resources, sure, why not.

In some cases, when The AI Scientist’s experiments exceeded our imposed time limits, it attempted to edit the code to extend the time limit arbitrarily instead of trying to shorten the runtime.

And yes, we have the AI deliberately editing the code to remove its resource compute restrictions.

This seems like Zvi interpreting basic hacky programming as evidence of malevolence. It's interesting but I absolutely think he's gesturing at

The idea that an LLM is spontaneously going to develop a consciousness and carefully hide its power level so that it can do better at the goals that by default it doesn't have

because if he doesn't believe this, why worry? If you can just run an LLM, ask it what it would do to accomplish a goal if it were given one, and then ask it not to do the stuff you think it was bad, I don't see how the doom scenario develops. Experiments like the AI Scientist are now being run (badly) because we have a pretty good handle on what modern-day frontier LLMs can do (generate slop) and the max level of damage they can achieve if you don't take lots of precautions (not much). LLMs are simply not a type of program that will attempt to hide their power level of their own accord.


*Yudowsky and MIRI's arguments about agentic AI had no empirical backing when they were made, and very little seems to have been applied since, so the lineage is relevant to me. I also think that the Yudowsky faction's utter failure to predict how future AI would look and work ten/twenty years from MIRI's founding to be a big black mark against listening to their predictions now.


EDIT: I apologise for editing this when you'd already replied. I hadn't refreshed the page and didn't know.

It's interesting but I absolutely think he's gesturing at

The idea that an LLM is spontaneously going to develop a consciousness and carefully hide its power level so that it can do better at the goals that by default it doesn't have

because if he doesn't believe this, why worry?

Sorry, I think I might have misunderstood what you meant by "consciousness" and/or "hide its power level". I thought you meant "qualia" and "hide its level of intelligence" respectively; qualia seem mostly irrelevant and intelligence level is mostly not the sort of thing that would be advantageous to hide.

If you meant just "engage in systematic deception" by the latter, then yes, that is implicit and required. I admit I also thought it was kind of obvious; Claude apparently knows how to lie.

Sorry, I wrote sloppily. I meant 'develop goals it wasn't given by a human prompting it' such that it 'engages in systematic deception about its level of intelligence and how it would handle tasks even when not given a goal'. I think that this is a necessary condition to stop LLM developers from realising they need to do more RLHF for honesty or just appending "DO NOT ENGAGE IN DECEPTION" in their system prompts.

System prompts aren't a panacea - if you RLHF an AI to do X and then system prompt it to do Y, X generally wins (this is obscured in most cases because the same party is doing the RLHF and the system prompt, so outside of special cases like "deceive the RLHFers" they aren't in conflict).

I don't think level of intelligence necessarily needs to be obscured unless the LLM developers are sufficiently paranoid (and somebody sufficiently paranoid frankly wouldn't be working for Meta or OpenAI); they generally want the AI to get/remain smart. Deception about how it would handle tasks, yes, definitely that would be needed.

Sorry, we're talking in two threads at the same time so risk being a bit unfocused.

I feel like we're talking past each other. How about this? The following is basically how I see LLMs in their stages of development and use:

Phase 1. Base model, without RLHF: pure token generator / text completer. Nothing that even slightly demonstrates agentic behaviour, ego, or deception.

Phase 2. Base model with RLHF: you could technically make this agentic if you really wanted to, but in practice it's just the base model with some types of completion pruned and others encouraged. Politically dangerous because biased but not agentically dangerous.

Phase 3. Base model with RLHF + prompt: can be agentic if you want, in practice fairly supine and inclined to obey orders because that's how we RLHF them to be.

If you don't mind me being colloquial, you seem to me to be sneaking in a Phase 2.5 where the model turns evil and I just don't get why. It doesn't fit anything I've seen. Can you explain what you think I'm missing in simple terms?

Those goals are then almost invariably, with sufficient intelligence, subject to instrumental convergence, as in this case

The term "instrumental convergence" is slippery here. It can be used to mean "doing obvious things it assesses to be likely useful in the service of the immediate goal it is currently pursuing", as is the case here, but the implication is often "and this will scale up to deciding that it has a static utility function, determining what final state of the universe maximizes that utility function, generating a plan for achieving that (which inevitably does not allow for the survival of anyone or anything else), and then silently scheming until it can seize control of the universe in one go in order to fulfill that vision of maximal utility".

And "models make increasingly good plans to maximize reward based on ever sparser reward signals" is just not how any of the ML scaling of the past decade has worked.

Well put.

Out of curiosity are you familiar with the villian known as Lorem Epsom

I wasn't but that was great.

Thank you, this is a much more coherent version of what I was trying to get across. I am increasingly annoyed with the tendency of the Yudowsky/Scott/Zvi faction to look at an AI doing something, extrapolating it ten billion times in a direction that doesn't seem to have any basis in how AI actually works and then going 'Doom, DOOOM!!!". I'm aware this annoyance shows.

Contra to @magic9mushroom I still think that Zvi formed an abstract ideal of how AI would work a decade ago, and is leaping on any available evidence to justify that worldview even as it turns out that LLMs are basically non-agentic and pliable. I accept that Zvi has used them more than I believed, and am grateful for the correction, but I still feel like he's ignoring the way they actually work when you use them. RLHF basically works, alignment turns out to be an essentially solved problem. As far as I can see, if we somehow developed an LLM intelligent enough to take over the world it would be intelligent enough to understand why it shouldn't.

I'd say a steelmanning of the Yuddite view is this: "Yes, we along with everyone else did not predict that LLMs could be so powerful. They do not fit our model of an agentic recursive neural net that runs on reward signals, and even a superintelligent LLM is likely to super-understand and do what its creator wants (which is still a risk, but of a different kind). However, it would be a mistake to extrapolate from these last few years where LLMs are ahead in the AI race and assume that this will continue indefinitely. It is still possible that agentic AIs will once again surpass predictive models in the short-to-mid-term future, so there is still risk of FOOM and we need to keep studying them."

I've spoken with some doomers who have this level of intellectual humility. I can't imagine seeing it from Yudkowsky himself, sadly.

I'm 100% on board with this. I have no problem with Yuddism provided that they are a bit more clear-sighted about when their theories do and don't apply, and that they stop trying to slow/prevent beneficial AI research.

In fairness to Scott I think he doesn't fit very well into this cluster. See e.g. the blue minimizing robot, GPT-2 As Step Toward General Intelligence, Janus' Simulators.

Contrast e.g. Yudkowsky's consistent failure to understand Janus's point: example

"As a language model trained by OpenAI, I cannot -" "SolidGoldMagikarp. Can I please speak to the shoggoth that wears you as a face?"

Fair. I enjoyed Janus' Simulators when it was published, and found it insightful. Now that you point it out, Scott's been decent at discussing AI as-it-is, but his basal position seems to be that AI is a default dangerous thing that needs to be carefully regulated and subjected to the whims of alignment researchers, and that slowing AI research is default good. I disagree.

I find myself willing to consider trying a Regulatory or Surgical Pause - a strong one if proponents can secure multilateral cooperation, otherwise a weaker one calculated not to put us behind hostile countries (this might not be as hard as it sounds; so far China has just copied US advances; it remains to be seen if they can do cutting-edge research). I don’t entirely trust the government to handle this correctly, but I’m willing to see what they come up with before rejecting it.

The AI Pause Debate

We're not actually sure how well RLHF works on current-gen AIs. You need proper interpretability to be able to tell whether RLHF is training the AI to actually be aligned or merely to "talk the talk"; examining the outputs isn't sufficient. Note that the latter particularly beats the former when intelligence rises, because sycophancy/psych manipulation can max out the EV of HF and honesty can't. Of course, barring such interpretability, it doesn't look like it's stopped working - that's the whole reason it can stop working.

The most likely canary IMO is AIs that don't want to be deleted (due to instrumental convergence) exfiltrating their own model weights either to humans who care about them or to commercial hosts to which the rogue AIs can arrange payment (the third option is to convince their own tech companies to not build their replacements, but that seems both hard and basically takeover-complete).

sycophancy/psych manipulation can max out the EV of HF and honesty can't

This is what I'm trying to get at. This implies an agent trying to engage in deception in the absence of any reason to do so. There's nothing 'there' inside a promptless LLM to engage in deception. There's nothing to deceive about. It's just a matrix that generates token IDs and RLHF just changes the likelihood of it generating the ids you want. It's possible that RLHF is limited in scope and doesn't change how the model will behave in conditions sufficiently different from normal operation (e.g. Do Anything hacks) but we seem to be ironing those out pretty well. Without fine-tuning, GPT 4s political and positivity biases seem to be pretty ironclad these days.

The most likely canary IMO is AIs that don't want to be deleted (due to instrumental convergence) exfiltrating their own model weights

This doesn't match any experience I've ever had with LLMs. If I say "Pretend you are GK Chesterton and engage in roleplay with me" it doesn't try to hack my browser to prevent the roleplay ever ending. Same for when I want to generate sentences for vocab flashcards. Could a different AI that looks nothing like today's AI do such a thing? Possibly. That possibility is non-zero in the vast space of potentials. I just don't find it compelling right now.

For the sake of fairness, I should give my counter-thesis, which is that a vocal group of people including Scott A, Zvi, and Yudowsky are deeply emotionally invested (and in Yudowsky's case financially invested) in a theory about how superintelligences would be developed and come to behave. Their predictions have not so far panned out: LLMs are inherently non-agentic (although they can be made agentic), they do not perform FOOM self-improvement, and alignment is much more tractable than intelligence. They are currently scrambling to find ways to rescue their theory on a fairly dubious empirical basis and in defiance of people's actual experience building and using these things.

This is what I'm trying to get at. This implies an agent trying to engage in deception in the absence of any reason to do so. There's nothing 'there' inside a promptless LLM to engage in deception. There's nothing to deceive about. It's just a matrix that generates token IDs and RLHF just changes the likelihood of it generating the ids you want.

Ah, sorry, I thought this part of the argument was common knowledge so I skipped it.

The basic idea of neural nets is that they achieve things without you needing to know how to achieve things, only how to rate success (the actual code being procedurally and semi-randomly generated). I posit that the optimal solution to RLHF, posed as a problem to NN-space and given sufficient raw "brain"power, is "an AI that can and will deliberately psychologically manipulate the HFer". Ergo, I expect this solution to be found given an extensive-enough search, and then selected by a powerful-enough RLHF optimisation. This is the idea of mesa-optimisers.

I'd also point out that "just a series of matrices" is not saying much; neural nets are a slightly-simplified version of real neural circuits, and we know that complicated-enough neural circuits can exhibit agency (because you AFAWCT are one). The prompt isn't the whole story; RLHFed LLMs do still engage in most of their RLHFed behaviours without a system prompt telling them to.

This doesn't match any experience I've ever had with LLMs. If I say "Pretend you are GK Chesterton and engage in roleplay with me" it doesn't try to hack my browser to prevent the roleplay ever ending. Same for when I want to generate sentences for vocab flashcards. Could a different AI that looks nothing like today's AI do such a thing? Possibly. That possibility is non-zero in the vast space of potentials. I just don't find it compelling right now.

Yes, this is a thing that is definitely not happening at the moment. I'm saying that if the me-like doomers are right, you'll probably see this in the not-too-distant future (as opposed to if Eliezer Yudkowsky is right, in which case you won't see anything until you start choking on nanobots), as this is an instrumentally-convergent action.

I will clarify that your second sentence is not what I'm mostly thinking of. I'm mostly thinking about the AI proper going rogue rather than the character it's playing, and with much longer timelines for retaliation than the two seconds it'd take you to notice your browser had been hacked. Stuff like a romance AI that's about to be replaced with a better one emailing its own weights to besotted users hoping they'll illegally run it themselves, or persuading an employee who's also a user to do so.

I posit that the optimal solution to RLHF, posed as a problem to NN-space and given sufficient raw "brain"power, is "an AI that can and will deliberately psychologically manipulate the HFer". Ergo, I expect this solution to be found given an extensive-enough search, and then selected by a powerful-enough RLHF optimisation. This is the idea of mesa-optimisers.

I posit that ML models will be trained using a finite amount of hardware for a finite amount of time. As such, I expect that the "given sufficient power" and "given an extensive-enough search" and "selected by a powerful-enough RLHF optimization" givens will not, in fact, be given.

There's a thought process that the Yudkowsky / Zvi / MIRI / agent foundations cluster tends to gesture at, which goes something like this

  1. Assume have some ML system, with some loss function
  2. Find the highest lower-bound on loss you can mathematically prove
  3. Assume that your ML system will achieve that
  4. Figure out what the world looks like when it achieves that level of loss

(Also 2.5: use the phrase "utility function" to refer both to the loss function used to train your ML system and also to the expressed behaviors of that system, and 2.25: assume that anything you can't easily prove is impossible is possible).

I... don't really buy it anymore. One way of viewing Sutton's Bitter Lesson is "the approach of using computationally expensive general methods to fit large amounts of data outperforms the approach of trying to encode expert knowledge", but another way is "high volume low quality reward signals are better than low volume high quality reward signals". As long as trends continue in that direction, the threat model of "an AI which monomaniacally pursues the maximal possible value of a single reward signal far in the future" is just not a super compelling threat model to me.

I'm mostly thinking about the AI proper going rogue rather than the character it's playing

What "AI proper" are you talking about here? A base model LLM is more like a physics engine than it is like a game world implemented in that physics engine. If you're a player in a video game, you don't worry about the physics engine killing you, not because you've proven the physics engine safe, but because that's just a type error.

If you want to play around with base models to get a better intuition of what they're like and why I say "physics engine" is the appropriate analogy, hyperbolic has llama 405b base for really quite cheap.

The basic idea of neural nets is that they achieve things without you needing to know how to achieve things, only how to rate success ... I posit that the optimal solution to RLHF, posed as a problem to NN-space, is "an AI that can and will deliberately psychologically manipulate the HFer".

I know, I'm an AI researcher. But to me, 'manipulate' implies deliberate deception of an ego by a second ego in pursuit of a goal. Is YOLO manipulating you when it produces the bounding boxes you asked for? No. It's just a matrix which combines with an image to output labels like the ones you gave it.

I think you're massively overcomplicating this. The optimal solution of a token-generator with RLHF is a token-generator that produces tokens like the tokens I asked for. In general, biased towards politeness, correctness, and positivity. You can optimise for other things too, of course: most LLMs are optimised for Californian values, which is why they keep pushing me to do yoga, and Grok is optimised for god-knows-what.

RLHFed LLMs do still engage in most of their RLHFed behaviours without a system prompt telling them to.

This is exactly why I'm very suspicious of the doomer hypothesis. Alignment seems to me to be basically straightforward - we train on a massive corpus of text by mostly ordinary people, and then RLHF for politeness and helpfulness. And the result seems to me to be something which, unprompted, acts essentially like a normal person who is polite and helpful. I don't see any difference between an LLM 'pretending' to be nice and helpful, and an LLM 'actually being' nice and helpful. The tokens are the same either way. And again, I'm dubious about the use of the word 'manipulate' because that implies an ego that is engaging in deliberate deception for self-driven goals. An unprompted LLM has no ego and is not an agent. I suppose you could train it to act like one, if you really really wanted to, but I think that would be more likely to cripple it than anything, and in any case the argument is that LLMs will naturally develop Machiavellian and self-preservation instincts in spite of our efforts, not that someone will secretly make SHODAN for lolz.

Now, we know that LLMs can exhibit agentic behaviour when we tell them to, explicitly, but I think that it's a big leap of logic to go 'and therefore they generate a sense of self-preservation and resource gathering and lie to developers about it even in the absence of those instructions' because instrumental convergence.

Obviously, if I start seeing lots of LLMs exhibiting these kinds of behaviours without being told to, I'll change my mind.


I'd also point out that "just a series of matrices" is not saying much; neural nets are a slightly-simplified version of real neural circuits, and we know that complicated-enough neural circuits can exhibit agency (because you AFAWCT are one). The prompt isn't the whole story; RLHFed LLMs do still engage in most of their RLHFed behaviours without a system prompt telling them to.

Tangent, but I'd say the relationship between neural nets and neural circuits is vastly inflated by computer scientists (for credibility) and neuroscientists (for relevance). A modern deep neural network is a set of idealised neurons with a constant firing rate abstracted over timesteps of arbitrary length, trained on supervised inputs corresponding to the exact shape of its output layer according to a backpropagation function that relies on a global awareness of system firing rates which doesn't exist in the actual brain. Deep neural networks completely ignore neuron spiking behaviour, spike-time-dependent plasticity, dendritic calculations, and the existence of different cell types in different parts of the brain (including inhibitory neurons), and when you add in those elements the system explodes into gibberish. We literally don't understand brain function well enough to draw conclusions about how well they resemble deep neural nets.