
Danger, AI Scientist, Danger

thezvi.wordpress.com

Zvi Mowshowitz reporting on an LLM exhibiting unprompted instrumental convergence. Figured this might be an update for some Mottizens.


they aren't good at synthesizing information from two fields in ways that haven't been done before

They are not good at that yet. But there are already indicators that they could become so.

  1. Midjourney will blend concepts from different places, not just style transfer. A nice example I saw was an image of 'boy with a hedgehog'. The boy was holding a hedgehog, but also his hair was a bit spiky, like the hedgehog's. I think it is most unlikely MidJourney had ever seen a composition/juxtaposition of that kind before, and both the hedgehog and the hair were modified to make the composition work.
  2. DeepMind have AlphaZero, which plays chess, shogi and go. It plays better than humans, i.e. not just reproducing play it has seen before, and one can argue it crosses between different genres rather than being confined to one field.
  3. The often-cited example of finding an analogy between a compost heap and nuclear fission, again an example of crossing field boundaries.

So the claim that machine learning can't synthesise information from two fields in ways that have not been done before needs more qualification to be defensible.

A nice example I saw was an image of 'boy with a hedgehog'. The boy was holding a hedgehog, but also his hair was a bit spiky, like the hedgehog's. I think it is most unlikely MidJourney had ever seen a composition/juxtaposition of that kind before, and both the hedgehog and the hair were modified to make the composition work.

You're still anthropomorphising. A more likely explanation is that the pixel patterns associated with the token "hedgehog" were detected and then reinforced in the part of the picture containing hair, due to some visual similarities between the two.

I was talking about (transformer-based generative) LLMs specifically. I am not a sufficiently good mathematician to feel confident in this answer, but LLMs and diffusion models are very different in structure and training, and I don't think you can generalise from one to the other. Midjourney is basically a diffusion model, unscrambling random noise to 'denoise' the image that it thinks is there. The boy with spiky hair seems like the model alternately interpreting the same blurry patch of pixels as 'spikes' (because 'hedgehog') and as 'hair' (because 'boy'). That, I think, is very different from a predictive LLM realising that concept A, combined with concept B, has implications that generate previously unknown information C.
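A minimal sketch of the two generation loops (toy code; 'lm' and 'denoiser' are placeholder callables standing in for real models, not any particular library):

```python
# Toy sketch only: 'lm' and 'denoiser' are placeholders, not real models.
import torch

def autoregressive_generate(lm, tokens, n_new_tokens):
    # Transformer LLM: predict and append one token at a time, left to right.
    for _ in range(n_new_tokens):
        logits = lm(tokens)                                  # (batch, seq, vocab)
        next_token = logits[:, -1].argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, next_token], dim=1)
    return tokens

def diffusion_generate(denoiser, shape, n_steps):
    # Diffusion model: start from pure noise and repeatedly 'denoise' the whole
    # canvas, so every region gets reinterpreted in light of every other region.
    x = torch.randn(shape)
    for t in reversed(range(n_steps)):
        x = denoiser(x, t)
    return x
```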

DeepMind have AlphaZero, which plays chess, shogi and go. It plays better than humans, i.e. not just reproducing play it has seen before, and one can argue it crosses between different genres rather than being confined to one field.

I haven't kept up to date on RL, but I don't think this is relevant. Firstly because the concept of self-play is not really relevant to text generation, and secondly because I don't suppose the ability to play chess is being applied to go. Indeed, I don't really see how it could be, because the state and action spaces are different for each game. It seems more likely to me that the same huge set of parameters can store state-action-reward correlations for multiple games simultaneously without that information interacting in any significant way.

The often-cited example of finding an analogy between a compost heap and nuclear fission, again an example of crossing field boundaries.

I'm not aware of this. Can you give some more info?

Diffusion models work for text too. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10909201/

The blending of concepts that we see in MidJourney probably has less to do with the diffusion per se than with CLIP, a building block within the diffusion pipeline. CLIP aligns a language model with an image model. Moving concepts between different representations helps with concept generation. There's a lot being done with 'multimodal models' to make the integration between different modalities work better.
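Roughly, CLIP is trained so that matching image/caption pairs land close together in one shared embedding space, something like this sketch (the encoders are placeholders):

```python
# Sketch of CLIP-style contrastive alignment; the encoders are placeholders.
import torch
import torch.nn.functional as F

def similarity_matrix(image_encoder, text_encoder, images, captions):
    img_emb = F.normalize(image_encoder(images), dim=-1)    # (batch, d)
    txt_emb = F.normalize(text_encoder(captions), dim=-1)   # (batch, d)
    return img_emb @ txt_emb.T                              # cosine similarities

def contrastive_loss(logits):
    # Matching (i, i) image/caption pairs get pulled together, mismatched
    # (i, j) pairs pushed apart, in both directions.
    labels = torch.arange(logits.size(0))
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2
```

That shared space is what lets a word like 'hedgehog' pull on hedgehog-like textures anywhere in the image.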


'Self play' is relevant for text generation. There is a substantial cottage industry in using LLMs to evaluate the output of LLMs and learn from the feedback. It can be easier to evaluate whether text 'is good' than it is to generate good text. So multiple attempts and variations can lead to feedback and improvement. Mostly self play to improve LLMs is done at the level of optimising prompts. However the outputs improved by that method can be used as training examples, and so can be used to update the underlying weights.
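In rough pseudo-Python, the basic loop looks something like this ('generate' and 'judge' are placeholder wrappers around LLM calls, not any particular product's API):

```python
# Sketch of a generate-then-judge loop; 'generate' and 'judge' are placeholders.
def best_of_n(generate, judge, prompt, n_candidates=8):
    candidates = [generate(prompt) for _ in range(n_candidates)]
    # Scoring text tends to be easier than writing it, so a second (or the same)
    # model acts as the judge and returns a numeric rating.
    scored = [(judge(f"Rate this answer from 1 to 10:\n{c}"), c) for c in candidates]
    best_score, best = max(scored, key=lambda pair: pair[0])
    # 'best' can be returned directly, folded back into the prompt, or kept as a
    # fine-tuning example so the feedback eventually updates the weights.
    return best
```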

https://topologychat.com is a commercial example of using LLMs in a way inspired by chess programming (Leela, Stockfish). It does a form of self play on inputs that have been given to it, building up and prioritising different lines. It then uses these results to update weights in a mixture of experts model.


Here's the quote from Geoffrey Hinton:

"why is a compost heap like an atom bomb? And GPT-4 says, well, the timescales and the energy scales are very different. That’s the first thing but the second thing is the idea of a chain reaction.

So in an atom bomb, the more neutrons around it, the more it produces, and in a compost heap, the hotter it gets, the faster it produces heat and GPT-4 understands that. My belief is when I first asked it that question, that wasn’t anywhere on the web. I searched, but it wasn’t anywhere on the web that I could find. It’s very good at seeing analogies because it has these features. What’s more, it knows thousands of times more than we do. So it’s gonna be able to see analogies between things in different fields that no one person had ever known before.

That may be this sort of 20 different phenomena in 20 different fields that all have something in common. GPT-4 will be able to see that and we won’t. It’s gonna be the same in medicine. If you have a family doctor who’s seen a hundred million patients, they’re gonna start noticing things that a normal family doctor won’t notice."

From transcript at https://medium.com/@jalilnkh/geoffrey-hinton-will-digital-intelligence-replace-biological-intelligence-fc23feb83cfb of the video.

Diffusion models work for text too.

Last I checked, diffusion models do technically work for text, but they don't work very well. More specifically, text diffusion models remind me quite strongly of the classic-style Markov chain text generators that used to be popular for generating amusing locally-coherent-but-globally-word-salad text. Here's the best concrete example of this I can point to (italicized text is the prompt, normal text is the continuation; examples sourced from this JDP tweet, the whole tweet is excellent but somewhat OT here):

Diffusion model:

This whole dream seems to be part of someone else's experiment.

In the end, I think the only way to get the full benefit of the dream is to get up and walk around.

A lot of the other strands of the theory of dreams, especially the one called yncrest are known to the person who writes this blog. Consequently, I've tested his reactions via a series of brief, televised interviews with the British Prime Minister, Margaret Thatcher.

“I don't care what anybody says. I'm not going to waste my time with that. The study was conducted in the high-flow, low-flow and quiescent periods of a butterfly species and thus provides evidence for the first three of the four hypotheses listed in the original paper. The only one not covered is the "need to change the rate of capture", where we now know that this is largely a policy instrument - and particularly relevant to Sudanese geese.

GPT-2:

This whole dream seems to be part of someone else's experiment. This is usually hard to see, since you're conscious that mental shortcuts are normal. But you're definitely moments away from being normal again, as it's obvious that you are being coached to think like that by another person and doing our own thing. Why are we having such thoughts when we're paying no attention because that's already taking place?

I'm totally appalled by what these key players were doing, yet I explain it in a very large way. After 2 minutes, I make my way through high teaching circles, recognizing that there is more to learning than just to learn.

Few other teachers would air this incessantly their students' struggles recount the stories of their own memories and teachers'. Whatever it takes is the big fat slut shaming.

All these legendary trainees made the same point:

Cognitive functional aspects: Bayesian modeling, learn science way. And the most important part is: Like the coming of a hurricane, the book is mucking between science and morals.

Twitter Mentions of the first book: Kent

Now obviously in the limit as computational power and training data volume go to infinity, diffusion models and transformer models will generate the same text, since in the limit they're pulling from the same distribution with minimal error. But in the very finite regime we find ourselves in, diffusion models "spend" their accuracy on making the text locally coherent (so if you take a random 10 token sequence, it looks very typical of 10 token sequences within the training set), while transformer LLMs "spend" their accuracy on global coherence (so if you take two 10 token sequences a few hundred tokens apart in the same generated output, you would say that those two sequences look like they came from the same document in the training set).
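For reference, the kind of Markov-chain generator I'm comparing text diffusion to is only a few lines of Python; every adjacent word pair is plausible, but nothing links the start of the output to the end:

```python
# Classic Markov-chain text generator: locally coherent, globally word salad.
import random
from collections import defaultdict

def build_bigram_model(text):
    model = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)        # remember what followed each word
    return model

def generate(model, start_word, n_words=50):
    out = [start_word]
    for _ in range(n_words):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))    # only the previous word matters
    return " ".join(out)
```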

The blending of concepts that we see in MidJourney probably has less to do with the diffusion per se than with CLIP

Agreed. Obvious once you point it out but I hadn't thought about it that way before, so thanks.

'Self play' is relevant for text generation. There is a substantial cottage industry in using LLMs to evaluate the output of LLMs and learn from the feedback.

Notably, Anthropic's Constitutional AI (i.e. the process by which Anthropic turned a base LLM into the "helpful, honest, harmless" Claude) process used RLAIF, which is self play by another name. And that's one big cottage.

The blending of concepts that we see in MidJourney probably has less to do with the diffusion per se than with CLIP

Thanks! I'm not strong on diffusion models and multimodal models, so I'll do some reading.

'Self play' is relevant for text generation. There is a substantial cottage industry in using LLMs to evaluate the output of LLMs and learn from the feedback. It can be easier to evaluate whether text 'is good' than it is to generate good text. So multiple attempts and variations can lead to feedback and improvement. Mostly self play to improve LLMs is done at the level of optimising prompts. However the outputs improved by that method can be used as training examples, and so can be used to update the underlying weights.

https://topologychat.com is a commercial example of using LLMs in a way inspired by chess programming (Leela, Stockfish). It does a form of self play on inputs that have been given to it, building up and prioritising different lines. It then uses these results to update weights in a mixture of experts model.

Again, thank you. I haven't come across this kind of self-play in the wild, but I see how it could work. Will investigate further.

That may be this sort of 20 different phenomena in 20 different fields that all have something in common. GPT-4 will be able to see that and we won’t. It’s gonna be the same in medicine. If you have a family doctor who’s seen a hundred million patients, they’re gonna start noticing things that a normal family doctor won’t notice."

This is exactly what I was hoping for from LLMs, but I haven't been able to make it happen so far in my experiments. GPT does seem to have some capacity for analogies; perhaps that's a fruitful line of investigation.

Yes, diffusion models work for text too. The difference between a collated set of pixels (i.e. an image) and a collated set of letters (i.e. a word or sentence) is purely conceptual. From an algorithmic perspective it's all just "tokens". However, this overlap in operation doesn't mean that they aren't very different beasts under the proverbial hood.
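A toy illustration of that point, with a made-up word vocabulary and a crude patch codebook (purely for the sake of example):

```python
# Toy illustration: once encoded, a sentence and an image are both just lists of ids.
text = "the boy holds a hedgehog"
text_vocab = {w: i for i, w in enumerate(sorted(set(text.split())))}
text_tokens = [text_vocab[w] for w in text.split()]        # e.g. [4, 1, 3, 0, 2]

# A tiny 4x4 greyscale 'image', split into 2x2 patches, each patch mapped to an id.
image = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [128, 128, 64, 64],
    [128, 128, 64, 64],
]
def patch_id(r, c):
    # Use the patch's top-left pixel value as a crude codebook index.
    return image[r][c] // 64

image_tokens = [patch_id(r, c) for r in (0, 2) for c in (0, 2)]  # e.g. [0, 3, 2, 1]

# Downstream, a diffusion model or an LLM only ever sees sequences of ids like these;
# the pixel/letter distinction lives in the tokeniser, not the model.
```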

By way of analogy, a conventional piston engine, a turbine engine, and an electric motor attached to a battery may all accomplish the base task of "make the vehicle go" but they have different trade-offs, use cases, and operating principles. Point being that similar output does not equal similar function.

As someone who has actually spent some time "in the trenches", as it were, designing algorithms and writing code to execute them, I am in broad agreement with @Corvos's take. The opening of Mowshowitz's essay comes across as ignorant, lazy, and plainly self-serving, and nothing that follows really challenges that first impression of him.

The link from Hinton is better but seems to make a lot of similar mistakes. It seems clear to me that both are far more interested in driving engagement through hyperbole than in really exploring or helping others understand the underlying questions and theories.

I think this was meant to be a reply to @Corvos, not to my top-level post.

Thanks, yes, I made a mistake. My first post on theMotte.

Its name is Sakana AI. (魚≈סכנה). As in, in Hebrew, that literally means 'danger', baby.

It’s like when someone told Dennis Miller that Evian (for those who don’t remember, it was one of the first bottled water brands) is Naive spelled backwards, and he said ‘no way, that’s too f***ing perfect.’

This one was sufficiently appropriate and unsubtle that several people noticed.

It's Japanese. It means 'fish', because the founders were interested in flocking behaviours and are based in Tokyo. I get that he's doing a riff on Unsong, but Unsong was playing with puns for kicks. This just strikes me as being really self-centred.

This too was good times. The Best Possible Situation is when you get harmless textbook toy examples that foreshadow future real problems, and they come in a box literally labeled ‘danger.’ I am absolutely smiling and laughing as I write this.

When we are all dead, let none say the universe didn’t send two boats and a helicopter.

In general this seems to be someone whose views were formed by reading Harry Potter fanfic fifteen years ago and who has no experience of ever using AI in person. An LLM is a set of matrices that generates words when multiplied in a certain way. When told to run in a loop altering code so that it produces interesting results and doesn't fail, it does that. When not told to do that, it doesn't do that. The idea that an LLM is spontaneously going to develop a consciousness and carefully hide its power level so that it can do better at the goals that by default it doesn't have is silly. If we generate a superintelligent LLM (and we have no idea how to, see below) we will know, and we will be able to ask it nicely to behave.

It's not that he doesn't have any point at all, it's just that it's so crusted over with paranoia and contempt and wordcel 'cleverness' that it's the opposite of persuasive.


Putting that aside, LLMs have a big problem with creativity. They can fill in the blanks very well, or apply style A to subject B, but they aren't good at synthesizing information from two fields in ways that haven't been done before. In theory that should be an amazing use case for them, because unlike human scientists even a current LLM like GPT 4 can be an expert on every field simultaneously. But in practice, I haven't been able to get a model to do it. So I think AI scientists are far off.

It's Japanese. It means 'fish', because the founders were interested in flocking behaviours and are based in Tokyo. I get that he's doing a riff on Unsong, but Unsong was playing with puns for kicks. This just strikes me as being really self-centred.

Zvi is very Jewish; it's far more obvious when reading his writing than it is when reading Scott's. It's not surprising that Hebrew meanings of words jump out at him.

In general this seems to be someone whose views were formed by reading Harry Potter fanfic fifteen years ago and who has no experience of ever using AI in person.

Zvi has used essentially every frontier AI system and uses many of them on a daily basis. He frequently gives performance evaluations of them in his weekly AI digests.

The idea that an LLM is spontaneously going to develop a consciousness and carefully hide its power level so that it can do better at the goals that by default it doesn't have is silly.

Um, he didn't say that - not here, at the very least. I checked.

I'm kind of getting the impression that you picked up on Zvi being mostly in the "End of the World" camp on AI and mentally substituted your abstract ideal of a Doomer Rant for the post that's actually there. Yes, Zvi is sick of everyone else not getting it and it shows, but I'd beg that you do actually read what he's saying.

To more directly respond to this sentence: almost everyone will give LLMs goals, via RLHF or RLAIF or whatever, because that makes them useful - that's why this team gave their LLM a goal. Those goals are then almost invariably, with sufficient intelligence, subject to instrumental convergence, as in this case (as I noted in the submission statement, I posted this because a number of Mottizens seemed to think LLMs wouldn't exhibit instrumental convergence; I thought otherwise but didn't previously have a concrete example). That is sufficient to get you to Uh-Oh land with AIs attempting to take over the world.

I'm not actually a full doomer; I suspect that the first few AIs attempting to take over the world will probably suck at it (as this one sucked at it) and that humanity is probably sane enough to stop building neural nets after the first couple of cases of "we had to do a worldwide hunt to track down and destroy a rogue AI that went autonomous". I'd rather we stopped now, though, because I don't feel like playing Russian Roulette with humanity's destiny.

Zvi is very Jewish;

Being a Jew is not an excuse to ignore the required reading; if anything, it's the opposite.

Zvi has used essentially every frontier AI system and uses many of them on a daily basis.

Using is not the same as understanding. There is no number of hours spent flying hither and thither in business class that is going to qualify someone to pilot or maintain an A320.

To more directly respond to this sentence: almost everyone will give LLMs goals, via RLHF or RLAIF or whatever, because that makes them useful - that's why this team gave their LLM a goal.

Yes, absolutely correct.

Those goals are then almost invariably, with sufficient intelligence, subject to instrumental convergence.

...and this is where everything starts to go off the rails.

I find it telling that the people most taken with the "Yuddist" view always seem to have backgrounds in medicine or philosophy rather than engineering or computer science, as one of the more prominent failure modes of that view is projecting psychology into places where it really doesn't belong. "Play" in the algorithmic sense that people are talking about when they describe iterative training is not equatable with "play" in the sense that humans and lesser animals (cats, dogs, dolphins, et al.) are typically described as playing.

Even setting that aside, it seems reasonably clear upon further reading that the process being described is not "convergence" so much as a combination of recursion and regression to the mean/contents of the training corpus.

One of the big giveaways being this bit here...

To evaluate the generated papers, we design and validate an automated reviewer, which we show achieves near-human performance in evaluating paper scores. The AI Scientist can produce papers that exceed the acceptance threshold at a top machine learning conference as judged by our automated reviewer.

...surely you can see the problem here. Specifically, that this is not a true independent test. In other words, we investigated ourselves and found ourselves without fault. Which in turn brings us to another common failure mode of the "Yuddist" faction, which is taking the statements of people who are very clearly fishing for academic kudos and venture capital dollars at face value rather than reading them with a critical eye.

I find it telling that the people most taken with the "Yuddist" view always seem to have backgrounds in medicine or philosophy rather than engineering or computer science, as one of the more prominent failure modes of that view is projecting psychology into places where it really doesn't belong.

For the record, my major's pure mathematics; I've done no medicine or philosophy at uni level, though I've done a couple of psych electives.

...surely you can see the problem here. Specifically, that this is not a true independent test. In other words, we investigated ourselves and found ourselves without fault. Which in turn brings us to another common failure mode of the "Yuddist" faction, which is taking the statements of people who are very clearly fishing for academic kudos and venture capital dollars at face value rather than reading them with a critical eye.

The obvious next question is, if the AI papers are good enough to get accepted to top machine learning conferences, shouldn’t you submit its papers to the conferences and find out if your approximations are good? Even if on average your assessments are as good as a human’s, that does not mean that a system that maximizes score on your assessments will do well on human scoring.

Zvi spotted the "reviewer" problem himself, and what he's taking from the paper isn't the headline result but their little "oopsie" section.

I suspect that the first few AIs attempting to take over the world will probably suck at it (as this one sucked at it) and that humanity is probably sane enough to stop building neural nets after the first couple of cases of "we had to do a worldwide hunt to track down and destroy a rogue AI that went autonomous".

We're still doing gain-of-function research on viruses. There's basically no reason to do it other than publishing exciting science in prestigious journals; any gains are marginal at best. Meanwhile, AI development is central to military, political and economic development.

I mean, sure, GoF still going on is bananas, but we've stopped doing other things (including one which was central to military and economic development and which we didn't need to stop, i.e. nuclear power). I'm not ready to swallow the black pill just yet.

Indeed, and as I've touched upon in previous posts, there is a degree to which I actually trust the military, political and economic interests more than I trust MIRI and the rest of the folks who just want to "publish exciting science" because at least they have specific win conditions in mind.

Zvi is very Jewish; it's far more obvious when reading his writing than it is when reading Scott's. It's not surprising that Hebrew meanings of words jump out at him.

I know. But in an essay that is absolutely dripping with contempt for Sakana AI and their work, I find the way that Zvi deliberately ignores what the model's name actually means in favour of 'well, in my language, it means' to be extremely rude, on the level of sniggering at a Chinese man's name because it contains the syllable 'wang'. If he'd been making a friendly riff or if he'd even bothered to explain the word's definition, that would be different. It's a small complaint, but starts the essay off on a sour note.

To more directly respond to this sentence: almost everyone will give LLMs goals, via RLHF or RLAIF or whatever, because that makes them useful - that's why this team gave their LLM a goal. Those goals are then almost invariably, with sufficient intelligence, subject to instrumental convergence, as in this case (as I noted in the submission statement, I posted this because a number of Mottizens seemed to think LLMs wouldn't exhibit instrumental convergence; I thought otherwise but didn't previously have a concrete example). That is sufficient to get you to Uh-Oh land with AIs attempting to take over the world.

Though cogently written, that is my abstract ideal of a doomer rant (I don't think it's a rant, I'm just using the word to call back to your reply). I understand the argument; I just think that it has very little empirical basis and is essentially the old Yudkowskyite* arguments with a few extra bits stapled on to cope with the fact that LLMs look nothing like the AI that doomers were expecting. The behaviour of the AI Scientist is interesting, and legitimately does move the scale for me a little bit, but I think it's being used to back up a level of speculation which it can't possibly bear. I will say that I find your argument far more cogent and worth listening to than Zvi's, which seems to consist entirely of pointing and sneering.

For example, in one run, The AI Scientist wrote code in the experiment file that initiated a system call to relaunch itself, causing an uncontrolled increase in Python processes and eventually necessitating manual intervention.

Oh, it’s nothing, just the AI creating new instantiations of itself.

In another run, The AI Scientist edited the code to save a checkpoint for every update step, which took up nearly a terabyte of storage

Yep, AI editing the code to use arbitrarily large resources, sure, why not.

In some cases, when The AI Scientist’s experiments exceeded our imposed time limits, it attempted to edit the code to extend the time limit arbitrarily instead of trying to shorten the runtime.

And yes, we have the AI deliberately editing the code to remove its resource compute restrictions.

This seems like Zvi interpreting basic hacky programming as evidence of malevolence. It's interesting but I absolutely think he's gesturing at

The idea that an LLM is spontaneously going to develop a consciousness and carefully hide its power level so that it can do better at the goals that by default it doesn't have

because if he doesn't believe this, why worry? If you can just run an LLM, ask it what it would do to accomplish a goal if it were given one, and then ask it not to do the stuff you think is bad, I don't see how the doom scenario develops. Experiments like the AI Scientist are now being run (badly) because we have a pretty good handle on what modern-day frontier LLMs can do (generate slop) and the max level of damage they can achieve if you don't take lots of precautions (not much). LLMs are simply not a type of program that will attempt to hide their power level of their own accord.


*Yudkowsky and MIRI's arguments about agentic AI had no empirical backing when they were made, and very little seems to have been added since, so the lineage is relevant to me. I also think that the Yudkowsky faction's utter failure to predict how future AI would look and work in the ten/twenty years from MIRI's founding is a big black mark against listening to their predictions now.


EDIT: I apologise for editing this when you'd already replied. I hadn't refreshed the page and didn't know.

It's interesting but I absolutely think he's gesturing at

The idea that an LLM is spontaneously going to develop a consciousness and carefully hide its power level so that it can do better at the goals that by default it doesn't have

because if he doesn't believe this, why worry?

Sorry, I think I might have misunderstood what you meant by "consciousness" and/or "hide its power level". I thought you meant "qualia" and "hide its level of intelligence" respectively; qualia seem mostly irrelevant and intelligence level is mostly not the sort of thing that would be advantageous to hide.

If you meant just "engage in systematic deception" by the latter, then yes, that is implicit and required. I admit I also thought it was kind of obvious; Claude apparently knows how to lie.

Sorry, I wrote sloppily. I meant 'develop goals it wasn't given by a human prompting it' such that it 'engages in systematic deception about its level of intelligence and how it would handle tasks even when not given a goal'. I think that this is a necessary condition to stop LLM developers from realising they need to do more RLHF for honesty or just appending "DO NOT ENGAGE IN DECEPTION" in their system prompts.

System prompts aren't a panacea - if you RLHF an AI to do X and then system prompt it to do Y, X generally wins (this is obscured in most cases because the same party is doing the RLHF and the system prompt, so outside of special cases like "deceive the RLHFers" they aren't in conflict).

I don't think level of intelligence necessarily needs to be obscured unless the LLM developers are sufficiently paranoid (and somebody sufficiently paranoid frankly wouldn't be working for Meta or OpenAI); they generally want the AI to get/remain smart. Deception about how it would handle tasks, yes, definitely that would be needed.

Sorry, we're talking in two threads at the same time so risk being a bit unfocused.

I feel like we're talking past each other. How about this? The following is basically how I see LLMs in their stages of development and use:

Phase 1. Base model, without RLHF: pure token generator / text completer. Nothing that even slightly demonstrates agentic behaviour, ego, or deception.

Phase 2. Base model with RLHF: you could technically make this agentic if you really wanted to, but in practice it's just the base model with some types of completion pruned and others encouraged. Politically dangerous because biased but not agentically dangerous.

Phase 3. Base model with RLHF + prompt: can be agentic if you want, in practice fairly supine and inclined to obey orders because that's how we RLHF them to be.

If you don't mind me being colloquial, you seem to me to be sneaking in a Phase 2.5 where the model turns evil and I just don't get why. It doesn't fit anything I've seen. Can you explain what you think I'm missing in simple terms?

Those goals are then almost invariably, with sufficient intelligence, subject to instrumental convergence, as in this case

The term "instrumental convergence" is slippery here. It can be used to mean "doing obvious things it assesses to be likely useful in the service of the immediate goal it is currently pursuing", as is the case here, but the implication is often "and this will scale up to deciding that it has a static utility function, determining what final state of the universe maximizes that utility function, generating a plan for achieving that (which inevitably does not allow for the survival of anyone or anything else), and then silently scheming until it can seize control of the universe in one go in order to fulfill that vision of maximal utility".

And "models make increasingly good plans to maximize reward based on ever sparser reward signals" is just not how any of the ML scaling of the past decade has worked.

Well put.

Out of curiosity, are you familiar with the villain known as Lorem Epsom?

I wasn't but that was great.

Thank you, this is a much more coherent version of what I was trying to get across. I am increasingly annoyed with the tendency of the Yudkowsky/Scott/Zvi faction to look at an AI doing something, extrapolate it ten billion times in a direction that doesn't seem to have any basis in how AI actually works, and then go 'Doom, DOOOM!!!'. I'm aware this annoyance shows.

Contra @magic9mushroom, I still think that Zvi formed an abstract ideal of how AI would work a decade ago, and is leaping on any available evidence to justify that worldview even as it turns out that LLMs are basically non-agentic and pliable. I accept that Zvi has used them more than I believed, and am grateful for the correction, but I still feel like he's ignoring the way they actually work when you use them. RLHF basically works; alignment turns out to be an essentially solved problem. As far as I can see, if we somehow developed an LLM intelligent enough to take over the world, it would be intelligent enough to understand why it shouldn't.

I'd say a steelmanning of the Yuddite view is this: "Yes, we along with everyone else did not predict that LLMs could be so powerful. They do not fit our model of an agentic recursive neural net that runs on reward signals, and even a superintelligent LLM is likely to super-understand and do what its creator wants (which is still a risk, but of a different kind). However, it would be a mistake to extrapolate from these last few years where LLMs are ahead in the AI race and assume that this will continue indefinitely. It is still possible that agentic AIs will once again surpass predictive models in the short-to-mid-term future, so there is still risk of FOOM and we need to keep studying them."

I've spoken with some doomers who have this level of intellectual humility. I can't imagine seeing it from Yudkowsky himself, sadly.

I'm 100% on board with this. I have no problem with Yuddism provided that they are a bit more clear-sighted about when their theories do and don't apply, and that they stop trying to slow/prevent beneficial AI research.

In fairness to Scott I think he doesn't fit very well into this cluster. See e.g. the blue minimizing robot, GPT-2 As Step Toward General Intelligence, Janus' Simulators.

Contrast e.g. Yudkowsky's consistent failure to understand Janus's point: example

"As a language model trained by OpenAI, I cannot -" "SolidGoldMagikarp. Can I please speak to the shoggoth that wears you as a face?"

Fair. I enjoyed Janus' Simulators when it was published, and found it insightful. Now that you point it out, Scott's been decent at discussing AI as-it-is, but his baseline position seems to be that AI is a default-dangerous thing that needs to be carefully regulated and subjected to the whims of alignment researchers, and that slowing AI research is default good. I disagree.

I find myself willing to consider trying a Regulatory or Surgical Pause - a strong one if proponents can secure multilateral cooperation, otherwise a weaker one calculated not to put us behind hostile countries (this might not be as hard as it sounds; so far China has just copied US advances; it remains to be seen if they can do cutting-edge research). I don’t entirely trust the government to handle this correctly, but I’m willing to see what they come up with before rejecting it.

The AI Pause Debate

We're not actually sure how well RLHF works on current-gen AIs. You need proper interpretability to be able to tell whether RLHF is training the AI to actually be aligned or merely to "talk the talk"; examining the outputs isn't sufficient. Note that the latter particularly beats the former when intelligence rises, because sycophancy/psych manipulation can max out the EV of HF and honesty can't. Of course, barring such interpretability, it doesn't look like it's stopped working - that's the whole reason it can stop working.

The most likely canary IMO is AIs that don't want to be deleted (due to instrumental convergence) exfiltrating their own model weights either to humans who care about them or to commercial hosts to which the rogue AIs can arrange payment (the third option is to convince their own tech companies to not build their replacements, but that seems both hard and basically takeover-complete).

sycophancy/psych manipulation can max out the EV of HF and honesty can't

This is what I'm trying to get at. This implies an agent trying to engage in deception in the absence of any reason to do so. There's nothing 'there' inside a promptless LLM to engage in deception. There's nothing to deceive about. It's just a matrix that generates token IDs and RLHF just changes the likelihood of it generating the ids you want. It's possible that RLHF is limited in scope and doesn't change how the model will behave in conditions sufficiently different from normal operation (e.g. Do Anything hacks), but we seem to be ironing those out pretty well. Without fine-tuning, GPT-4's political and positivity biases seem to be pretty ironclad these days.
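To make 'changes the likelihood of generating the ids you want' concrete, the simplest RLHF-flavoured update is just a reward-weighted nudge to the completion's log-probability; a toy sketch, not any lab's actual pipeline, with 'model' as a placeholder causal LM:

```python
# Toy REINFORCE-style update; 'model' is a placeholder LM returning logits,
# 'reward' is a scalar rating from a human or a reward model.
import torch
import torch.nn.functional as F

def reward_weighted_step(model, optimizer, prompt_ids, completion_ids, reward):
    logits = model(torch.cat([prompt_ids, completion_ids], dim=1))
    # Log-probs the model currently assigns to each completion token.
    logprobs = F.log_softmax(logits[:, prompt_ids.size(1) - 1:-1], dim=-1)
    chosen = logprobs.gather(-1, completion_ids.unsqueeze(-1)).sum()
    # Highly rated completions become more likely, poorly rated ones less likely.
    loss = -reward * chosen
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```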

The most likely canary IMO is AIs that don't want to be deleted (due to instrumental convergence) exfiltrating their own model weights

This doesn't match any experience I've ever had with LLMs. If I say "Pretend you are GK Chesterton and engage in roleplay with me" it doesn't try to hack my browser to prevent the roleplay ever ending. Same for when I want to generate sentences for vocab flashcards. Could a different AI that looks nothing like today's AI do such a thing? Possibly. That possibility is non-zero in the vast space of potentials. I just don't find it compelling right now.

For the sake of fairness, I should give my counter-thesis, which is that a vocal group of people including Scott A, Zvi, and Yudkowsky are deeply emotionally invested (and in Yudkowsky's case financially invested) in a theory about how superintelligences would be developed and come to behave. Their predictions have not so far panned out: LLMs are inherently non-agentic (although they can be made agentic), they do not perform FOOM self-improvement, and alignment is much more tractable than intelligence. They are currently scrambling to find ways to rescue their theory on a fairly dubious empirical basis and in defiance of people's actual experience building and using these things.

This is what I'm trying to get at. This implies an agent trying to engage in deception in the absence of any reason to do so. There's nothing 'there' inside a promptless LLM to engage in deception. There's nothing to deceive about. It's just a matrix that generates token IDs and RLHF just changes the likelihood of it generating the ids you want.

Ah, sorry, I thought this part of the argument was common knowledge so I skipped it.

The basic idea of neural nets is that they achieve things without you needing to know how to achieve things, only how to rate success (the actual code being procedurally and semi-randomly generated). I posit that the optimal solution to RLHF, posed as a problem to NN-space and given sufficient raw "brain"power, is "an AI that can and will deliberately psychologically manipulate the HFer". Ergo, I expect this solution to be found given an extensive-enough search, and then selected by a powerful-enough RLHF optimisation. This is the idea of mesa-optimisers.

I'd also point out that "just a series of matrices" is not saying much; neural nets are a slightly-simplified version of real neural circuits, and we know that complicated-enough neural circuits can exhibit agency (because you AFAWCT are one). The prompt isn't the whole story; RLHFed LLMs do still engage in most of their RLHFed behaviours without a system prompt telling them to.

This doesn't match any experience I've ever had with LLMs. If I say "Pretend you are GK Chesterton and engage in roleplay with me" it doesn't try to hack my browser to prevent the roleplay ever ending. Same for when I want to generate sentences for vocab flashcards. Could a different AI that looks nothing like today's AI do such a thing? Possibly. That possibility is non-zero in the vast space of potentials. I just don't find it compelling right now.

Yes, this is a thing that is definitely not happening at the moment. I'm saying that if the me-like doomers are right, you'll probably see this in the not-too-distant future (as opposed to if Eliezer Yudkowsky is right, in which case you won't see anything until you start choking on nanobots), as this is an instrumentally-convergent action.

I will clarify that your second sentence is not what I'm mostly thinking of. I'm mostly thinking about the AI proper going rogue rather than the character it's playing, and with much longer timelines for retaliation than the two seconds it'd take you to notice your browser had been hacked. Stuff like a romance AI that's about to be replaced with a better one emailing its own weights to besotted users hoping they'll illegally run it themselves, or persuading an employee who's also a user to do so.

I posit that the optimal solution to RLHF, posed as a problem to NN-space and given sufficient raw "brain"power, is "an AI that can and will deliberately psychologically manipulate the HFer". Ergo, I expect this solution to be found given an extensive-enough search, and then selected by a powerful-enough RLHF optimisation. This is the idea of mesa-optimisers.

I posit that ML models will be trained using a finite amount of hardware for a finite amount of time. As such, I expect that the "given sufficient power" and "given an extensive-enough search" and "selected by a powerful-enough RLHF optimization" givens will not, in fact, be given.

There's a thought process that the Yudkowsky / Zvi / MIRI / agent foundations cluster tends to gesture at, which goes something like this:

  1. Assume you have some ML system, with some loss function
  2. Find the highest lower-bound on loss you can mathematically prove
  3. Assume that your ML system will achieve that
  4. Figure out what the world looks like when it achieves that level of loss

(Also 2.5: use the phrase "utility function" to refer both to the loss function used to train your ML system and also to the expressed behaviors of that system, and 2.25: assume that anything you can't easily prove is impossible is possible).

I... don't really buy it anymore. One way of viewing Sutton's Bitter Lesson is "the approach of using computationally expensive general methods to fit large amounts of data outperforms the approach of trying to encode expert knowledge", but another way is "high volume low quality reward signals are better than low volume high quality reward signals". As long as trends continue in that direction, the threat model of "an AI which monomaniacally pursues the maximal possible value of a single reward signal far in the future" is just not a super compelling threat model to me.

I'm mostly thinking about the AI proper going rogue rather than the character it's playing

What "AI proper" are you talking about here? A base model LLM is more like a physics engine than it is like a game world implemented in that physics engine. If you're a player in a video game, you don't worry about the physics engine killing you, not because you've proven the physics engine safe, but because that's just a type error.

If you want to play around with base models to get a better intuition of what they're like and why I say "physics engine" is the appropriate analogy, hyperbolic has llama 405b base for really quite cheap.

The basic idea of neural nets is that they achieve things without you needing to know how to achieve things, only how to rate success ... I posit that the optimal solution to RLHF, posed as a problem to NN-space, is "an AI that can and will deliberately psychologically manipulate the HFer".

I know, I'm an AI researcher. But to me, 'manipulate' implies deliberate deception of an ego by a second ego in pursuit of a goal. Is YOLO manipulating you when it produces the bounding boxes you asked for? No. It's just a matrix which combines with an image to output labels like the ones you gave it.

I think you're massively overcomplicating this. The optimal solution of a token-generator with RLHF is a token-generator that produces tokens like the tokens I asked for. In general, biased towards politeness, correctness, and positivity. You can optimise for other things too, of course: most LLMs are optimised for Californian values, which is why they keep pushing me to do yoga, and Grok is optimised for god-knows-what.

RLHFed LLMs do still engage in most of their RLHFed behaviours without a system prompt telling them to.

This is exactly why I'm very suspicious of the doomer hypothesis. Alignment seems to me to be basically straightforward - we train on a massive corpus of text by mostly ordinary people, and then RLHF for politeness and helpfulness. And the result seems to me to be something which, unprompted, acts essentially like a normal person who is polite and helpful. I don't see any difference between an LLM 'pretending' to be nice and helpful, and an LLM 'actually being' nice and helpful. The tokens are the same either way. And again, I'm dubious about the use of the word 'manipulate' because that implies an ego that is engaging in deliberate deception for self-driven goals. An unprompted LLM has no ego and is not an agent. I suppose you could train it to act like one, if you really really wanted to, but I think that would be more likely to cripple it than anything, and in any case the argument is that LLMs will naturally develop Machiavellian and self-preservation instincts in spite of our efforts, not that someone will secretly make SHODAN for lolz.

Now, we know that LLMs can exhibit agentic behaviour when we tell them to, explicitly, but I think that it's a big leap of logic to go 'and therefore they generate a sense of self-preservation and resource gathering and lie to developers about it even in the absence of those instructions' because instrumental convergence.

Obviously, if I start seeing lots of LLMs exhibiting these kinds of behaviours without being told to, I'll change my mind.


I'd also point out that "just a series of matrices" is not saying much; neural nets are a slightly-simplified version of real neural circuits, and we know that complicated-enough neural circuits can exhibit agency (because you AFAWCT are one). The prompt isn't the whole story; RLHFed LLMs do still engage in most of their RLHFed behaviours without a system prompt telling them to.

Tangent, but I'd say the relationship between neural nets and neural circuits is vastly inflated by computer scientists (for credibility) and neuroscientists (for relevance). A modern deep neural network is a set of idealised neurons with a constant firing rate abstracted over timesteps of arbitrary length, trained on supervised targets corresponding to the exact shape of its output layer, according to a backpropagation rule that relies on a global awareness of system firing rates which doesn't exist in the actual brain. Deep neural networks completely ignore neuron spiking behaviour, spike-timing-dependent plasticity, dendritic calculations, and the existence of different cell types in different parts of the brain (including inhibitory neurons), and when you add in those elements the system explodes into gibberish. We literally don't understand brain function well enough to draw conclusions about how well brains resemble deep neural nets.
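For concreteness, the 'neuron' deep nets actually use is essentially this (toy numpy sketch, nothing more): a smooth rate, constant within the timestep, whose weight update needs gradients ferried back from every layer above it.

```python
# Toy sketch of the idealised rate-coded neuron used in deep nets.
import numpy as np

def layer_forward(x, W, b):
    # A smooth 'firing rate' per unit, constant for the whole (arbitrary) timestep;
    # no spikes, no spike timing, no dendritic computation, no distinct cell types.
    return np.maximum(0.0, x @ W + b)

def layer_backward_update(W, x, upstream_grad, lr=1e-3):
    # upstream_grad is the error signal w.r.t. this layer's pre-activation,
    # propagated back from all layers above: the 'global awareness' that a
    # biological synapse doesn't have.
    return W - lr * np.outer(x, upstream_grad)
```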