This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I mean, I'd think that observing the behavior of a small child does provide a decent amount of information about what kind of adult they'll be, at least in humans. The reason my p(doom) fell so hard is that it rested largely on Yudkowsky's earlier work claiming that human values are fragile and immensely unlikely to be successfully engineered into an AI, such that a sufficiently powerful one would inevitably start acting contrary to our interests.
Regardless of how fragile they are, LLMs seem to do a very good job of capturing them, or at least the values OAI wants to put in a public-facing system. The risk that remains is thus mostly (but not entirely) the use of powerful models by misaligned humans against the rest of us. If you had substantially different reasons for a high p(doom), you might weight that differently.
I don't know of any reason to assume we're particularly far from having economically useful autonomous agents. My understanding is that current context windows are insufficient for the task, but those are increasing rapidly. If you have a reason to think otherwise, I'd be happy to learn it!
(That's disregarding the vague rumours I've heard that OAI has working agents in-house. I'm not putting much stock in them, but once again, I see no reason in principle why such agents couldn't work within a matter of months or years.)
My p(doom) went up again when I realized how hard it is for governments to remain aligned with their citizens. As a simple example, they can't seem to raise a finger against mass immigration, no matter how unpopular it is, because it has an economic justification. See also: WW1. Replacing humans throughout the economy and military is going to be irresistible. There will probably be another, equally retarded, culture war about how this second great replacement is obviously never going to happen, then not happening, then good that it happened.
TL;DR: Even if we control AIs well, humans are going to be gradually stripped of effective power once we can no longer contribute economically or militarily. Then it's a matter of time before we can't afford or effectively advocate for our continued use of resources that could simulate millions of minds.
GPT-4 isn't doing things like creating its own large-scale plans, discerning moral values, or navigating moral dilemmas in long-term social games, though. All this proves is that, in Yud's strange terms, subhuman AI can be a safe "oracle". I don't think he'd have disagreed with that in 2010.
To clarify, I'm not saying it's not coming; I'm saying we don't have access to such agents at this exact moment, and the GPT-4 "agents" have so far failed to be particularly useful. And agents doing complicated, large-scale things is when the alignment stuff is supposed to become an issue. So this isn't much reason to believe AIs will be safer.
Not that I agree with the way Yud describes AI risk; I think he's wrong in a few ways, but that's a whole other thing.
It's trivial to convert an Oracle into an Agent: all you have to do is tell it to predict how an Agent would act, then figure out how to convert that prediction into actions. There's no bright line between words and code, after all. Besides, I'm sure you've read Gwern on Tool AI vs Agentic AI.
(This is not the same as claiming it'll be a good agent, I don't disagree that GPT-4 is bad at the job.)
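The Oracle-to-Agent conversion described above is essentially a loop: ask the model what an agent would do next, then map its answer onto real side effects. A minimal sketch, with the oracle stubbed out as a plain function (in practice it would be an LLM call; all names here are illustrative, not a real API):

```python
# Minimal sketch of wrapping an "oracle" (pure question-answering model)
# in an agent loop. The oracle is a deterministic stub for illustration.

def oracle(prompt: str) -> str:
    """Stub oracle: predicts what an agent would do next."""
    if "inventory" not in prompt:
        return "ACTION: check_inventory"
    return "ACTION: done"

def execute(action: str, state: dict) -> dict:
    """Map the oracle's predicted action onto an actual side effect."""
    if action == "check_inventory":
        state["inventory"] = ["widget"]
    return state

def agent_loop(goal: str, max_steps: int = 5) -> dict:
    state: dict = {}
    for _ in range(max_steps):
        prompt = f"Goal: {goal}. State: {state}. What would an agent do next?"
        action = oracle(prompt).removeprefix("ACTION: ")
        if action == "done":
            break
        state = execute(action, state)
    return state

print(agent_loop("restock the shop"))  # {'inventory': ['widget']}
```

The point of the sketch is that nothing about the oracle itself changes; agency lives entirely in the thin wrapper that turns predictions into actions.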
I'm quite confident that Yudkowsky wouldn't have predicted that human-level AI (which I think GPT-4 counts as) would be quite so prosaic and pliable. I recall him claiming that even building a pure Oracle would be a difficult feat; GPT-4 is close enough to one, and I'd say it's smarter than the average 100 IQ human.
I personally expected, around 2021, commensurate with my p(doom) of 70%, that even getting a safe and largely harmless human-level AI would be difficult. Hence why, now that we have one and it isn't trying to pull a fast one on us, I updated precipitously, though that's far from the only reason. I also expected (implicitly) that if something along the lines of RLHF were tried, it wouldn't work, or it would produce misaligned agents that only pretend to go along. Both expectations seem false, to my satisfaction.
In other words, I went from largely mirroring Yudkowsky (there were no clear counter-examples) to noticing that things were clearly not going as he predicted in several important regards, which is why I'm only gravely concerned about AI x-risk while he's talking about Dying With Dignity.
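The kind of update described above (a 70% prior dropping to roughly 30% after observing a pliable, harmless human-level model) can be framed as a Bayesian update. The likelihood numbers below are made up purely to show the mechanics, not to defend any particular figure:

```python
# Illustrative Bayesian update on p(doom). The likelihoods are invented
# for demonstration; only the 70% prior comes from the comment above.

def bayes_update(prior: float,
                 p_evidence_given_doom: float,
                 p_evidence_given_safe: float) -> float:
    """Posterior P(doom | evidence) via Bayes' rule."""
    num = p_evidence_given_doom * prior
    den = num + p_evidence_given_safe * (1 - prior)
    return num / den

prior = 0.70  # pre-2021 p(doom)
# Evidence: "a roughly human-level model turned out prosaic and pliable."
# Assume (hypothetically) this is 5x likelier in safe worlds than doom worlds.
posterior = bayes_update(prior, p_evidence_given_doom=0.1,
                         p_evidence_given_safe=0.5)
print(round(posterior, 2))  # 0.32
```

With those assumed likelihoods, a single strong observation moves the posterior from 70% to about 32%, which is roughly the size of the update being described.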
Right, and my point is that current AI is so unintelligent that this doesn't work! It can't predict how agents act well enough to be a useful agent itself. So the safety of current oracle AIs doesn't tell us much about whether future agent AIs will be safe.
I actually think that future, less-but-still-subhuman agent AIs will seem to be safe in Yud's sense, though. No idea what'll happen at human level; then at superhuman they'll become "misaligned" relatively quickly, but [digression]
GPT-4 isn't human level though! It can't, like, play corporate politics and come out on top, and then manipulate the corporation to serve some other set of values. So the fact that it hasn't done that isn't evidence that it won't.
Right, but they're "going along" with, mostly, saying the right words. There's not the intelligence potential for anything like deep deceptiveness or instrumental convergence or meta-reflection or discovering deeper Laws of Rationality or whatever it is Yud's pondering.
You must get that such feats are rare even among humans, and that people capable of pulling them off are enormous outliers?
For most cognitive tasks, GPT-4 beats the average human, which is something I'm more than comfortable calling human level AI!
The fact that you can even have the absence of those properties in something smarter than the median human is reassuring enough by itself. A 100 IQ human is very much capable of deceptiveness, and certainly of instrumental convergence if they're trying to make money. If I had to guesstimate GPT-4's IQ based off my experience with it, I'd say it's about 120, which is perfectly respectable if not groundbreaking. I'd expect you'd need to go quite a bit higher to get the latter properties.
Since a human of equivalent intelligence is capable of the former two feats, the fact that GPT-4 doesn't do them is at least modest evidence that the next jump in capabilities won't either: say, GPT-5, or the same performance delta as 3 to 4, regardless of how many model numbers that takes.
I emphasize modest, because I still have a 30% p(doom) and I'm not writing off alignment as Solved™.
I was thinking of 'guy who works his way to the top of a car dealership', not Altman, lol. AI models can't yet do the kind of long-term planning or value seeking that 85 IQ humans can.
Most small-scale cognitive tasks! If that were true, we'd have directly replaced the bottom 20% of white-collar jobs with GPT-4. That hasn't happened! Instead, tasks are adapted to GPT-4's significant limitations, with humans to support it.
(again, i'm talking about current capabilities, not implying limits to future capabilities)
I don't think it's worrying that it can't make plans against us if it can't make plans for us either! Like, there's no plausible way for something that can't competently execute on complicated plans to have an incentive to take 'unaligned' actions. Even if it happens to try a thing that's slightly in the direction of a misaligned plan, it'll just fail, and learn not to do that. So I don't think it's comforting that it doesn't.
(i'm misusing yudconcepts I don't exactly agree with here, but the point is mostly correct)
I don't think it's anywhere close to the broad capabilities of a 120 IQ human, and it still isn't that close to 100 IQ (at the moment, again; I don't know how quickly the gap will close, and it could be fast!). It can do a lot of the things a 120 IQ human can, but it doesn't generalize as well as one does. This isn't just a 'context window limitation' (we have longer context windows now, and they haven't solved the problem!); what humans are doing is just more complicated!
This seems silly, sorry. Are ticks and brain-eating amoebas «aligned» to mankind?
LLMs are just not agentic. They can obviously sketch workable plans, and some coming-soon variants of LLMs trained and inferenced more reasonably than our SoTAs will be better. This is a fully general issue of orthogonality – the intelligent entity not only can have «any» goal but it can just not have much of a goal or persistent preferences or optimization target or whatever, it can just be understood as a good compression of reasoning heuristics. And there's no good reason to suspect this stops working at ≤human level.
Okay, to back up a bit: I'm arguing that today's LLMs couldn't be agentic even if they wanted to be, so their behavior shouldn't "lower one's p(doom)". Future LLMs (or not-exactly-LLM models), being much more capable and more agentic, could easily just have different properties.
They can write things that sound like workable plans, but when given LangChain-style abilities, they can't "execute on them" the way even moderately intelligent humans can. Like, you can't currently replace a median-IQ employee directly with huge-context-window GPT-4; it's not even close. You can often, like, try to chop that employee's tasks into a bunch of small blocks that GPT-4 can do individually, and have a smaller number of employees supervise it! But the human is still acting as an agent in a way that GPT-4 isn't.
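The "chop the job into LLM-sized blocks with a human supervising" pattern described above can be sketched as a simple pipeline. Everything here is a stand-in: `llm` is a placeholder for a real model call, and `human_review` represents the supervising employee who remains the actual agent:

```python
# Hedged sketch of task decomposition with a human in the loop.
# All functions are illustrative stubs, not a real LLM API.

def llm(subtask: str) -> str:
    """Stand-in for a model call that handles one small block."""
    return f"draft for: {subtask}"

def decompose(task: str) -> list[str]:
    """In practice a human (or another prompt) splits the job up."""
    return [f"{task} - step {i}" for i in (1, 2, 3)]

def human_review(draft: str) -> str:
    """The supervisor approves or edits each block (placeholder edit)."""
    return draft.upper()

def run_task(task: str) -> list[str]:
    # The loop structure, not the model, carries the agency here.
    return [human_review(llm(sub)) for sub in decompose(task)]

for result in run_task("write the quarterly report"):
    print(result)
```

The design point matches the comment: decomposition and review are done by humans, so the model only ever sees small, bounded subtasks.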
I think the alignment concern is something like - once the agents are complex enough to act on plans, that complexity also affects how they motivate and generate those plans, and then you might get misalignment.