This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
This is an excellent answer. One small quibble:
For the record I think Yudkowsky and friends are wrong about this one. Control of the only superintelligent AGI, if that AGI is a single coherent entity, might be the keys to the lightcone, but so far it looks to me like AGI scales horizontally much better than it scales vertically.
This, if anything, makes things more dangerous rather than less, because it means there is no permanent win condition, only the deferral of the failure condition for a bit longer.
Thanks!
This particular concern hinges on recursive self-improvement, and I agree that we haven't seen much evidence of that, yet, but it's still the early days.
I think the intelligence of LLMs needs to at least reach that of the average ML researcher capable of producing novel research and breakthroughs before we can call it one way or another, and we're not there yet, at least in released models; I don't expect Gemini or GPT-5 to be that smart either. The closest things I can think of are training LLMs on synthetic data curated by other models, or Nvidia using ML models to optimize their hardware, but both are still weaksauce.
If it turns out to be feasible, it still remains to be seen whether we get a hard takeoff with a Singleton, or a slow takeoff (yet fast on human timescales, just months or years) which might allow for multipolarity. I remain agnostic yet gravely concerned myself.
And most of the talk on that issue assumes that the point where said self-improvement hits steep diminishing returns must necessarily be somewhere far above human intelligence — again, apparently based on nothing beyond it being more conducive to one's preferred outcomes than the alternative.
Diminishing returns != no or negative returns. Intelligence is the closest thing we have to an unalloyed good, and the difference in capabilities between people separated by just 20 or 30 IQ points is staggering enough.
Nothing at all suggests that the range of IQ/intelligence seen in unmodified humans constrained by a 1.4 kg brain in a small cranium applies at all to an entity that spans data-centers, especially one that can self-modify and fork itself on demand. You don't need a bazillion IQ points to be immensely dangerous; human scientists with IQs of maybe 160 or 170 invented nukes.
We have AI that already matches human intelligence on many or even most cognitive tasks, the scaling laws still hold, and companies and nations can easily afford to throw several OOMs more money at the problem.
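To make the "scaling laws still hold" claim concrete, here is a toy sketch of a Chinchilla-style power law relating loss to training compute. The constants and exponent are made up for illustration, not fitted values; the point is only the shape of the curve: each extra order of magnitude of compute buys a smaller absolute improvement, but the curve keeps bending down toward a floor rather than flattening abruptly.

```python
def loss(compute, a=10.0, b=0.05, irreducible=1.7):
    """Toy scaling law: loss falls as a power of compute, toward a floor.

    All constants here are illustrative assumptions, not measured values.
    """
    return irreducible + a * compute ** -b

# Each order of magnitude (OOM) of extra compute yields a smaller gain,
# which is diminishing returns, yet never quite zero returns.
for oom in range(20, 27):
    c = 10.0 ** oom
    print(f"1e{oom} FLOPs -> loss {loss(c):.3f}")
```

Whether the real curves keep holding at several more OOMs of spend is exactly the open question in the thread; this just shows why "diminishing" and "exhausted" are different claims.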
Humanity itself has seen exponential or even super-exponential advancement, yet we've barely gained a handful of IQ points from the Flynn effect; most of the gains came from technological compounding.
Since the theoretical or practical upper limits on the size and speed of an AGI are massive, I wish to see what reason anyone has to claim they'll bottom out within spitting distance of the smartest humans. That is ludicrous prima facie, even if we don't know how fast further progression will be.
Yes, but you're assuming there's a lot more even more dangerous things "out there" for a smarter entity to discover.
What is intelligence for? That is, what is its use? Primarily, modeling: modeling the physical world, and modeling other minds.
Our first day of Physics lab classes at Caltech, the instructor told us that it doesn't matter how many digits of pi we'd all memorized (quite a bunch), just use 3.14, or a scientific calculator's pi key, whichever was faster, because any rounding error would be swamped out by the measurement error in our instruments.
When it comes to modeling the physical world, sure, going from knowing, say, Planck's constant to two decimal places to knowing it to three decimal places will probably net you a bunch of improvements. But then going from, say, ten decimal places to eleven, or even ten decimal places to twenty, almost certainly won't net the same level of improvement.
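The lab instructor's point can be checked with a few lines of arithmetic. The numbers below (a 10 cm radius, a 5% ruler error) are assumptions for illustration: rounding pi to 3.14 perturbs a computed circle area by about 0.05%, while the mismeasured radius perturbs it by about 10%, swamping the rounding error by two orders of magnitude.

```python
import math

radius = 10.0             # cm, the "true" value (assumed for the sketch)
measurement_error = 0.05  # a 5% error from a cheap ruler (also assumed)

true_area = math.pi * radius ** 2
rounded_pi_area = 3.14 * radius ** 2
mismeasured_area = math.pi * (radius * (1 + measurement_error)) ** 2

rounding_error = abs(true_area - rounded_pi_area) / true_area    # ~0.05%
instrument_error = abs(true_area - mismeasured_area) / true_area # ~10%

print(f"rounding pi to 3.14: {rounding_error:.4%} relative error")
print(f"5% ruler error:      {instrument_error:.4%} relative error")
```

Hence the instructor's advice: past a couple of digits, extra precision in the constant buys nothing that the instruments can detect.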
When modeling other minds, particularly modeling other minds modeling you modeling… — the whole "I know that you know that I know…" thing — well, that sort of recursion provides great returns on added depth… in certain games, like chess. But AIUI, in most other situations, that kind of thing quickly converges to one or another game-theoretic equilibrium, and thus the further recursion allowed by greater intelligence provides little additional benefit.
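The claim that mutual modeling "quickly converges to one or another game-theoretic equilibrium" can be sketched with a standard textbook example, a linear Cournot duopoly (parameters assumed for illustration). Level-k best responses, each one a deeper round of "I know that you know", home in on the Nash equilibrium fast, so additional depth of recursion changes the answer less and less.

```python
def best_response(rival_quantity, demand_intercept=12.0, marginal_cost=0.0):
    """Optimal output against a rival's output in a linear Cournot duopoly."""
    return max(0.0, (demand_intercept - marginal_cost - rival_quantity) / 2)

q = 0.0  # level-0: naively assume the rival produces nothing
for level in range(1, 11):
    q = best_response(q)
    print(f"level {level}: q = {q:.4f}")

# The sequence (6, 3, 4.5, 3.75, ...) oscillates toward the equilibrium
# output of 4; by level 10 a further round of recursion barely moves it.
```

In games like this, being able to recurse a million levels deep instead of ten buys essentially nothing, which is the intuition behind the diminishing-returns claim above.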
I'm not saying we can't produce an intelligence "that spans data-centers" much smarter than us, and I'm not saying it's impossible that there are dangerous and powerful things such an intelligence might figure out. I'm saying that this can't just be assumed, or treated as highly likely by default, and that it's unsupported extrapolation to reason 'smart = nukes, therefore super-smart = super-nukes and mega-smart = mega-nukes.' I'm not saying that machine intelligence will "bottom out within spitting distance of the smartest humans"; I'm saying it's possible that the practical benefits of such intelligence, no matter how much vaster than our own, may "bottom out" well below the dreams of techno-optimists like yourself, and you can't rule that out a priori on an unsubstantiated faith that there are vast undiscovered realities beyond our limited comprehension just waiting for a smarter being to uncover.
I want you to at least consider, just for a moment, the idea that maybe we humans, with our "1.4 kg brain[s] in a small cranium," may have a good enough understanding of reality, and of each other, that a being with "900 more IQ points" won't find much room to improve on it.
I'm not saying "a machine can never be smarter than a man!" I'm saying "what if a machine a thousand times smarter than us says, 'yeah, you already had it mostly figured out, the rest is piddly details, no big revelations here'?"
I repeat that, while I think this is true, it's still not necessary for a genius AI to be an existential risk. I've already explained why multiple times.
Nukes? They exist.
Pandemics? They exist. Can they be made more dangerous? Yes. Are humans already making them more dangerous for no good reason? Yes.
Automation? Well underway.
I do not think that the benefits of additional intelligence, as seen even in human physicists, are well addressed by this analogy. The relevant comparison would be Newtonian physics to GR, and then to QM. In the domains where such nuance becomes relevant, the benefits are grossly superior.
For starters, while the Standard Model is great, it still can't conclusively explain most of the mass or energy in the universe. And even if we have equations for the fundamental processes, there are bazillions of higher-order concerns that are intractable to simulate from first principles.
AlphaFold didn't massively outpace the SOTA on protein folding by applying QM molecule by molecule; it found smarter heuristics, which is also something intelligence is indispensable for. I see no reason why a human couldn't in principle be perfectly modeled using QM; it is simply a computationally intractable problem, even for a single cell within us.
In other words, knowing the underlying rules of a complex system != knowing all the potential implications or applications. You can't just memorize the rules of chess and then declare it a solved problem.
I'm sure there are people who might make such a claim. I'm not one of them, and like I said, it's not load-bearing. Nukes alone are sufficient, really, certainly in combination with automation, so the absence of those pesky humans running the machines isn't a problem.
I have considered it, at least to my satisfaction, and I consider it to be exceedingly unlikely. Increases in intelligence, even within the minuscule absolute variation seen within humans, are enormously powerful. There seems to be little to nothing in the way of further scaling for inhuman entities that are not constrained by the same biological limitations of size, volume, speed, or energy. They already match or exceed the average human at most cognitive tasks, and even if returns from further increases in intelligence diminish grossly or become asymptotic, I am far from convinced that that stage will be reached within spitting distance of the best of humanity, or that such an entity won't be enormously powerful and capable of exterminating us if it wishes to do so.
But I don't see why super-intelligent AI will somehow make these vastly more dangerous, simply by being vastly smarter.
Based on what evidence?
Again, I agree, but again, further scaling in intelligence≠further scaling in power.
Again, so what? What part of "greater ability in cognitive tasks"≠"greater power over the material world" are you not getting — beyond, apparently, your need for it to be so based on tying so much of your ego to your own higher-than-average intelligence?
Based on what evidence?
My degree is in physics. Yes, there are problems with the Standard Model. But there's no guarantee that whatever replaces it will be anything like as revolutionary as those previous changes, particularly when it comes to practical effects.
Maybe I've just listened to Eric Weinstein go on about needing to put vast amounts of funding into physics too many times, because he never stops to consider that the "revolutionary new physics" we "need" to become "interplanetary" just aren't there. And then what?
Do me a favor: while I'm perfectly happy to address your points, the effort it would take exceeds what I'm willing to spend for the sake of one person, or the handful still reading week-old threads this deep. I suggest you make a new top-level post in the new thread, where I will happily continue the debate.
I'll check back shortly after I'm done studying. You're welcome to link or post excerpts from my comments, or summarize them as you see fit; I see no particular reason to think you'd twist them in bad faith.
If you want, I can do the same myself, but like I said, I really should be studying, if only until the Ritalin wears off, haha.