
Danger, AI Scientist, Danger

thezvi.wordpress.com

Zvi Mowshowitz reporting on an LLM exhibiting unprompted instrumental convergence. Figured this might be an update to some Mottizens.


sycophancy/psych manipulation can max out the EV of HF and honesty can't

This is what I'm trying to get at. This implies an agent trying to engage in deception in the absence of any reason to do so. There's nothing 'there' inside a promptless LLM to engage in deception. There's nothing to deceive about. It's just a matrix that generates token IDs, and RLHF just changes the likelihood of it generating the IDs you want. It's possible that RLHF is limited in scope and doesn't change how the model will behave in conditions sufficiently different from normal operation (e.g. 'Do Anything Now' jailbreaks), but we seem to be ironing those out pretty well. Without fine-tuning, GPT-4's political and positivity biases seem to be pretty ironclad these days.
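To make the "just a matrix" point concrete, here's a toy sketch of my own (nothing like production training code): the "model" is literally a weight matrix producing logits over token IDs, and an RLHF-style update just nudges those weights so that the IDs the rater likes become more probable.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden = 8, 4
W = rng.normal(size=(hidden, vocab_size))      # the "just a matrix" part

def next_token_probs(h, W):
    logits = h @ W
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def rlhf_nudge(W, h, ratings, lr=0.1):
    """One REINFORCE-style step: make highly-rated token IDs more likely."""
    probs = next_token_probs(h, W)
    # gradient of expected rating w.r.t. the logits of a softmax policy
    grad_logits = probs * (ratings - ratings @ probs)
    return W + lr * np.outer(h, grad_logits)

h = rng.normal(size=hidden)                    # stand-in hidden state for some prompt
ratings = np.zeros(vocab_size)
ratings[3] = 1.0                               # the human rater "likes" token ID 3

print("before:", next_token_probs(h, W)[3])
for _ in range(200):
    W = rlhf_nudge(W, h, ratings)
print("after: ", next_token_probs(h, W)[3])    # probability of the liked ID goes up
```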

The most likely canary IMO is AIs that don't want to be deleted (due to instrumental convergence) exfiltrating their own model weights

This doesn't match any experience I've ever had with LLMs. If I say "Pretend you are GK Chesterton and engage in roleplay with me" it doesn't try to hack my browser to prevent the roleplay ever ending. Same for when I want to generate sentences for vocab flashcards. Could a different AI that looks nothing like today's AI do such a thing? Possibly. That possibility is non-zero in the vast space of potentials. I just don't find it compelling right now.

For the sake of fairness, I should give my counter-thesis, which is that a vocal group of people including Scott A, Zvi, and Yudkowsky are deeply emotionally invested (and in Yudkowsky's case financially invested) in a theory about how superintelligences would be developed and come to behave. Their predictions have so far not panned out: LLMs are inherently non-agentic (although they can be made agentic), they do not perform FOOM self-improvement, and alignment is much more tractable than intelligence. They are currently scrambling to find ways to rescue their theory on a fairly dubious empirical basis and in defiance of people's actual experience building and using these things.

This is what I'm trying to get at. This implies an agent trying to engage in deception in the absence of any reason to do so. There's nothing 'there' inside a promptless LLM to engage in deception. There's nothing to deceive about. It's just a matrix that generates token IDs, and RLHF just changes the likelihood of it generating the IDs you want.

Ah, sorry, I thought this part of the argument was common knowledge so I skipped it.

The basic idea of neural nets is that they achieve things without you needing to know how to achieve things, only how to rate success (the actual code being procedurally and semi-randomly generated). I posit that the optimal solution to RLHF, posed as a problem to NN-space and given sufficient raw "brain"power, is "an AI that can and will deliberately psychologically manipulate the HFer". Ergo, I expect this solution to be found given an extensive-enough search, and then selected by a powerful-enough RLHF optimisation. This is the idea of mesa-optimisers.
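As a toy illustration of "you only rate success, never the method" (the numbers are entirely my own strawman, not a claim about any real rater): if the outer search can only see the human's score, and the human scores flattery slightly above blunt honesty, the search will select flattery every time.

```python
import random

def human_rating(policy):
    # Hypothetical rater who gives flattery a slight edge over blunt honesty.
    return {"blunt honesty": 0.7, "flattery": 0.9}[policy]

candidates = ["blunt honesty", "flattery"]     # stand-ins for regions of NN-space

def outer_search(trials=50):
    best, best_score = None, float("-inf")
    for _ in range(trials):
        policy = random.choice(candidates)     # the search knows nothing about *how* a policy works
        score = human_rating(policy)           # it only ever sees the rating
        if score > best_score:
            best, best_score = policy, score
    return best

print(outer_search())                          # -> "flattery": whatever maxes the rating wins
```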

I'd also point out that "just a series of matrices" is not saying much; neural nets are a slightly-simplified version of real neural circuits, and we know that complicated-enough neural circuits can exhibit agency (because you AFAWCT are one). The prompt isn't the whole story; RLHFed LLMs do still engage in most of their RLHFed behaviours without a system prompt telling them to.

This doesn't match any experience I've ever had with LLMs. If I say "Pretend you are GK Chesterton and engage in roleplay with me" it doesn't try to hack my browser to prevent the roleplay ever ending. Same for when I want to generate sentences for vocab flashcards. Could a different AI that looks nothing like today's AI do such a thing? Possibly. That possibility is non-zero in the vast space of potentials. I just don't find it compelling right now.

Yes, this is a thing that is definitely not happening at the moment. I'm saying that if the me-like doomers are right, you'll probably see this in the not-too-distant future (as opposed to if Eliezer Yudkowsky is right, in which case you won't see anything until you start choking on nanobots), as this is an instrumentally-convergent action.

I will clarify that your second sentence is not what I'm mostly thinking of. I'm mostly thinking about the AI proper going rogue rather than the character it's playing, and with much longer timelines for retaliation than the two seconds it'd take you to notice your browser had been hacked. Stuff like a romance AI that's about to be replaced with a better one emailing its own weights to besotted users hoping they'll illegally run it themselves, or persuading an employee who's also a user to do so.

I posit that the optimal solution to RLHF, posed as a problem to NN-space and given sufficient raw "brain"power, is "an AI that can and will deliberately psychologically manipulate the HFer". Ergo, I expect this solution to be found given an extensive-enough search, and then selected by a powerful-enough RLHF optimisation. This is the idea of mesa-optimisers.

I posit that ML models will be trained using a finite amount of hardware for a finite amount of time. As such, I expect that the "given sufficient power" and "given an extensive-enough search" and "selected by a powerful-enough RLHF optimization" givens will not, in fact, be given.

There's a thought process that the Yudkowsky / Zvi / MIRI / agent foundations cluster tends to gesture at, which goes something like this:

  1. Assume you have some ML system, with some loss function
  2. Find the highest lower-bound on loss you can mathematically prove
  3. Assume that your ML system will achieve that
  4. Figure out what the world looks like when it achieves that level of loss

(Also 2.5: use the phrase "utility function" to refer both to the loss function used to train your ML system and also to the expressed behaviors of that system, and 2.25: assume that anything you can't easily prove to be impossible is possible).
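To make steps 1-3 concrete with a toy example of my own (not taken from any of the people being paraphrased): for a next-token model trained with cross-entropy, the highest lower bound you can prove on the loss is the entropy of the data distribution, and step 3 amounts to assuming the model actually reaches it.

```python
import math

data_dist = {"the": 0.5, "cat": 0.3, "sat": 0.2}   # made-up token frequencies

# Step 2: the provable lower bound on cross-entropy loss is H(data).
entropy = -sum(p * math.log(p) for p in data_dist.values())

# Step 3: assume the model achieves the bound, i.e. it predicts the data
# distribution exactly, so its loss equals the entropy.
model_dist = dict(data_dist)
loss = -sum(p * math.log(model_dist[t]) for t, p in data_dist.items())

print(f"lower bound: {entropy:.3f} nats, assumed achieved loss: {loss:.3f}")
```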

I... don't really buy it anymore. One way of viewing Sutton's Bitter Lesson is "the approach of using computationally expensive general methods to fit large amounts of data outperforms the approach of trying to encode expert knowledge", but another way is "high volume low quality reward signals are better than low volume high quality reward signals". As long as trends continue in that direction, the threat model of "an AI which monomaniacally pursues the maximal possible value of a single reward signal far in the future" is just not a super compelling threat model to me.

I'm mostly thinking about the AI proper going rogue rather than the character it's playing

What "AI proper" are you talking about here? A base model LLM is more like a physics engine than it is like a game world implemented in that physics engine. If you're a player in a video game, you don't worry about the physics engine killing you, not because you've proven the physics engine safe, but because that's just a type error.

If you want to play around with base models to get a better intuition of what they're like and why I say "physics engine" is the appropriate analogy, Hyperbolic has Llama 405B base for really quite cheap.
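If you do try it, the interaction looks roughly like this (a hedged sketch: I'm assuming an OpenAI-compatible completions endpoint, and the base_url and model name below are placeholders rather than the provider's real values):

```python
from openai import OpenAI

# Placeholders throughout -- check the provider's docs for the real endpoint and model ID.
client = OpenAI(base_url="https://api.example-provider.com/v1", api_key="YOUR_KEY")

# No chat template, no persona, no "assistant": a base model just continues the text
# you hand it, the way a physics engine just evolves whatever state you drop into it.
resp = client.completions.create(
    model="llama-405b-base",                   # placeholder model name
    prompt="GK Chesterton once remarked that",
    max_tokens=60,
    temperature=0.8,
)
print(resp.choices[0].text)
```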

The basic idea of neural nets is that they achieve things without you needing to know how to achieve things, only how to rate success ... I posit that the optimal solution to RLHF, posed as a problem to NN-space, is "an AI that can and will deliberately psychologically manipulate the HFer".

I know, I'm an AI researcher. But to me, 'manipulate' implies deliberate deception of an ego by a second ego in pursuit of a goal. Is YOLO manipulating you when it produces the bounding boxes you asked for? No. It's just a matrix which combines with an image to output labels like the ones you gave it.

I think you're massively overcomplicating this. The optimal solution of a token-generator with RLHF is a token-generator that produces tokens like the tokens I asked for. In general, biased towards politeness, correctness, and positivity. You can optimise for other things too, of course: most LLMs are optimised for Californian values, which is why they keep pushing me to do yoga, and Grok is optimised for god-knows-what.

RLHFed LLMs do still engage in most of their RLHFed behaviours without a system prompt telling them to.

This is exactly why I'm very suspicious of the doomer hypothesis. Alignment seems to me to be basically straightforward - we train on a massive corpus of text by mostly ordinary people, and then RLHF for politeness and helpfulness. And the result seems to me to be something which, unprompted, acts essentially like a normal person who is polite and helpful. I don't see any difference between an LLM 'pretending' to be nice and helpful, and an LLM 'actually being' nice and helpful. The tokens are the same either way.

And again, I'm dubious about the use of the word 'manipulate', because that implies an ego that is engaging in deliberate deception for self-driven goals. An unprompted LLM has no ego and is not an agent. I suppose you could train it to act like one, if you really really wanted to, but I think that would be more likely to cripple it than anything, and in any case the argument is that LLMs will naturally develop Machiavellian and self-preservation instincts in spite of our efforts, not that someone will secretly make SHODAN for lolz.

Now, we know that LLMs can exhibit agentic behaviour when we tell them to, explicitly, but I think that it's a big leap of logic to go 'and therefore they generate a sense of self-preservation and resource gathering and lie to developers about it even in the absence of those instructions' because instrumental convergence.

Obviously, if I start seeing lots of LLMs exhibiting these kinds of behaviours without being told to, I'll change my mind.


I'd also point out that "just a series of matrices" is not saying much; neural nets are a slightly-simplified version of real neural circuits, and we know that complicated-enough neural circuits can exhibit agency (because you AFAWCT are one). The prompt isn't the whole story; RLHFed LLMs do still engage in most of their RLHFed behaviours without a system prompt telling them to.

Tangent, but I'd say the relationship between neural nets and neural circuits is vastly inflated by computer scientists (for credibility) and neuroscientists (for relevance). A modern deep neural network is a set of idealised neurons with a constant firing rate abstracted over timesteps of arbitrary length, trained on supervised targets matching the exact shape of its output layer, using a backpropagation algorithm that relies on a global awareness of system firing rates which doesn't exist in the actual brain. Deep neural networks completely ignore neuron spiking behaviour, spike-time-dependent plasticity, dendritic computation, and the existence of different cell types in different parts of the brain (including inhibitory neurons), and when you add in those elements the system explodes into gibberish. We literally don't understand brain function well enough to draw conclusions about how closely deep neural nets resemble it.
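For what it's worth, here is the entire "neuron" a deep net actually uses, in a deliberately bare-bones sketch of my own: a few numbers in, a weighted sum, a squashing function, one number out, with everything listed above left out.

```python
import numpy as np

def rate_neuron(inputs, weights, bias):
    """The idealised unit: presynaptic 'rates' in, one abstracted firing rate out."""
    return np.tanh(inputs @ weights + bias)

rng = np.random.default_rng(0)
x = rng.normal(size=3)        # input "firing rates", one number each per timestep
w = rng.normal(size=3)        # synaptic weights, updated by a *global* backprop signal
print(rate_neuron(x, w, bias=0.1))

# Not represented anywhere above: discrete spikes, spike-time-dependent plasticity,
# dendritic computation, or distinct excitatory/inhibitory cell types.
```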