This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Human intelligence, and super-human intelligence of the sort generated by coordination between many humans, sure. Those are the intelligences we have available to observe.
Sure, as a hypothetical, it's certainly a scary one. As I said, if all three are exponential, then AI very well may be an X-risk. I don't actually think all three are exponential, though, and there's no evidence to really decide the question either way.
And you don't have a basis to say that it can. We haven't actually demonstrated that it's even possible to build a general intelligence with our current or foreseeable tech base. Maybe we're close to accomplishing that, and maybe we're not, though I'll readily admit that we seem to be making reasonably good progress toward that goal of late.
I think it's entirely possible that such a human would never find even a slight improvement, because the possibility space is simply too vast. Compare your model to one of a human having unlimited chances to guess a 64-character alphanumeric string. The standard assumption is that improving the human brain is simpler than guessing a 64-character alphanumeric string, but given that we've never actually done it, I'm completely baffled at where this assumption comes from in others. I certainly know where it came from when my younger self held it, though: I read a lot of sci-fi that made the idea seem super-cool, so I wanted to believe it was true, so I did.
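For scale, here is a rough back-of-the-envelope sketch of that guessing analogy, assuming 62 possible characters (a-z, A-Z, 0-9) and a deliberately generous guess rate; the numbers are illustrative, not a claim about how hard brain improvement actually is:

```python
# Back-of-the-envelope numbers for guessing a 64-character alphanumeric string.
# Assumes 62 possible characters per position and a very generous guess rate.

search_space = 62 ** 64                 # distinct 64-character strings
guesses_per_second = 10 ** 12           # one trillion guesses per second
seconds_per_year = 60 * 60 * 24 * 365

expected_years = (search_space / 2) / guesses_per_second / seconds_per_year
print(f"search space: {search_space:.2e} strings")
print(f"expected time at 1e12 guesses/sec: {expected_years:.2e} years")
```

Even at that rate, the expected time comes out on the order of 10^94 years, which is the sense in which the possibility space is "simply too vast" for blind guessing.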
As for a machine, I entirely understand the concept of a general intelligence undergoing recursive self-improvement. There's an actual concrete question there of how much room to grow there is between the minimum-viable and maximally-efficient code, and we don't know the answer to that question. Then there's the question of how much intelligence the maximally-efficient version provides, which we also don't know. Hard takeoff assumes that each improvement enables more improvements in an exponential fashion, but that's not actually how the world I observe works. All complex systems I observe involve low-hanging fruit and diminishing returns once they are exhausted.
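To make that contrast concrete, here is a toy numerical sketch with made-up parameters, comparing a compounding-gains assumption against a diminishing-returns one. It is not a model of any real system, just an illustration of how differently the two assumptions play out:

```python
# Toy comparison of two self-improvement assumptions (parameters are arbitrary).

steps = 30

# Assumption A: each improvement makes the next one proportionally easier,
# so capability compounds at a fixed rate per step (the hard-takeoff intuition).
compounding = 1.0
for _ in range(steps):
    compounding *= 1.5

# Assumption B: low-hanging fruit gets exhausted, so each step adds a shrinking
# increment and capability levels off near some ceiling.
diminishing = 1.0
ceiling = 10.0
for _ in range(steps):
    diminishing += 0.3 * (ceiling - diminishing)   # smaller gains near the ceiling

print(f"after {steps} steps, compounding assumption:   {compounding:,.0f}x")
print(f"after {steps} steps, diminishing-returns path: {diminishing:.2f}x")
```

Under assumption A the curve runs away (roughly 190,000x after thirty steps here); under assumption B it flattens out near the ceiling no matter how many more steps you allow.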
I disagree. It seems obvious to me that the rate of technological progress has slowed significantly over my lifetime, and I think it reasonable to suppose that this trend is likely to continue into the future. I think it's at least possible that we are already pushing up against basic physical constraints. A lifetime of observing technological mirages like the battery and fusion power breakthroughs that have been ten years away for seventy years and counting indicates to me that some of these problems are legitimately hard, and that the future ahead of us isn't going to look like the steam > electricity > electronics > code ages we've enjoyed over the last few centuries. The developments are observably slowing down.
Maybe AGI will change that. Alternatively, maybe it won't. We don't actually know. It's easy to see ways that it could, given certain assumptions, but that is not proof that it will.
But let's say I concede all of the above: AGI is probably coming, and it's at least a plausible X-risk that we should be concerned about. What then?
I'd rather not move on to the second question until you've actually conceded the first question, instead of just "let's say".
But... the AI systems we have today are capable of finding large improvements through the same principle of trial and error. Your "absence of empirical evidence" has already failed. For that matter, evolution already found out how to improve the human brain with trial and error.
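Just to pin down what "trial and error" means at its most basic, here is a minimal random hill-climbing sketch on a toy objective; the function and step size are arbitrary, and nothing here is specific to any real AI system:

```python
import random

# Minimal trial-and-error search (random hill climbing) on a toy objective.
# The objective function and step size are arbitrary illustrations.

def objective(x):
    return -(x - 3.0) ** 2          # best possible value is 0, at x = 3.0

x = 0.0
best = objective(x)
for _ in range(10_000):
    candidate = x + random.gauss(0, 0.1)   # propose a small random change
    score = objective(candidate)
    if score > best:                       # keep the change only if it helps
        x, best = candidate, score

print(f"found x = {x:.3f}, objective = {best:.4f}")
```

Whether that kind of blind search scales from toy objectives up to improving a human-equivalent mind is exactly the point in dispute here.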
The claim that the third exponential is necessary rests on the idea that humanity could only be beaten by something much smarter than us if it had much more advanced technology AND that much more advanced technology will never come.
The first half of that is something that I could imagine an average joe assuming if he didn't think about it too much or if his denial-systems were active, but the second half is extremely fringe.
Large improvements in a human mind or in a human-equivalent AI mind? I'm pretty sure they haven't.
Sure. But your assumption is that there's lots of headroom for further improvements, and in point of fact evolution hasn't found those.
I highlight the third exponential because it underlies so many descriptions of the AI endgame. IIRC, Yudkowsky has publicly assigned a non-zero probability to the idea that an AI might be able to hack the universe itself exclusively through executing code within its own operating environment. I'm not arguing that a superintelligent AI can't beat humanity without an overwhelming tech advantage; maybe it can, maybe it can't, though I think our odds aren't uniformly terrible. I'm arguing that most AI doomer persuasion hinges on science-fiction scenarios that may not be physically possible, and some that almost certainly aren't physically possible.
I do not know whether much more advanced technology will come, and neither do you. I think that the more our reasoning is based on the imagination rather than the empirical, the less reliable it becomes. I observe that predictions about future technology are extremely unreliable, and do not see a reason why these particular predictions should be an exception. More generally, serious tech improvements appear to me to be dependent on our current vastly interconnected and highly complex global society maintaining its present state of relative peace and prosperity, and that seems unlikely to me.
This is the sort of argument someone from 1524 would use to explain why they doubted they could be beaten by an army from 2024. It does not matter what specific hypothetical future technologies you think are implausible. The prediction of doom does not rely on that.
To use another example, it is like someone asking an expert how a chess engine will beat them at chess, as in what exact sequence of moves Stockfish will use to win. The expert could give an example of a way the chess engine might beat them, but the fact that the chess engine will win isn't reliant on it pursuing any of the strategies hypothesized by the expert, even if the expert names dozens of them. And even if you can point to one of the strategies and say "that definitely won't work", Stockfish doesn't need that one particular strategy to work, nor even any of the strategies the expert comes up with.
Saying "most of these possible technologies probably won't be possible even by something that is farther above humanity than humanity is to squirrels" is missing the point. Not even one of the possible technologies mentioned needs to be actually possible, it's all downstream of the important parts of the argument.
You are basically saying that humanity could not ever lose. That sits oddly with your prediction that society at large will break down through human folly alone, despite little desire on humanity's part for that to happen.
No, I'm not. At no point in this exchange have I said or even implied that humanity can never lose. AGI doesn't need to be superintelligent to have a non-zero chance of wiping out humanity, and possibly even a very high chance. Humans could probably wipe out humanity if they were bent on it; why couldn't a human-equivalent or even subhuman AGI do the same?
It is not evident that "something that is farther above humanity than humanity is to squirrels" is a thing that can actually exist. It's entirely possible that such a thing can't exist; intelligence might be bound by diminishing returns. It's entirely possible that such a thing could exist, but it will be beyond our reach to create for the foreseeable future.
The "prediction of doom" is about building an idea in the reader's mind of a problem that only has one solution. To that end, the threat is specifically and arbitrarily described to exclude all other possible solutions. It is worth noting that this is what these arguments are actually about, to take ten big steps backward and ten more to the side, and look around the edges of the picture that is being painted across the entirety of one's field of view.
The assumed disparity of the AGI versus humans is exactly what I'm pointing out has no factual basis. It is entirely theoretical, based on a chain of suppositions that may or may not actually be valid, which I have tried to summarize in the three points above. You are simply recapitulating the premise, but the premise is what I am questioning. What if the AGI isn't as smart compared to us as we are to squirrels, because intelligence scaling doesn't work that way? What if it can't bootstrap itself into super-advanced technology, because we're already pushing up against local maxima?
The point of these predictions and speculations is to convince the listener that AGI is a horrifying threat, and alignment is the only solution. The meta-argument is simple: "Imagine something vast and malevolent that will do horrible things to you and everyone else, which you are absolutely powerless to stop." Very well, I've imagined it. Imagining it makes it neither real nor inevitable. It being plausible is not the same as it being certain, or even likely. There is, as I've noted in a recent conversation, a crucial difference between "we can prove this is true" and "we can't prove this is false". Omnipotent AGI is firmly within the latter category, but it seems to me that most AI doomers speak and act as though it is in the former. I decline to do the same.
And suppose that I'm wrong, and Malevolent AGI arrives, and wipes us all out. We will then have suffered... roughly the same fate every human before us has suffered since the advent of the species. We will have each, as individuals, lived a life and then died a death. There are some who would consider the abrupt and final termination of our species a mercy; I would strongly disagree, but the point is not a trivial one.
I disagree. It's like asking a chess expert whether a computer could beat a human at chess, before the invention of computers. The correct answer to that question is "I don't know."