This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I disagree; it's largely Yudkowsky who vocally claims that a SAGI will rely on things like "diamondoid bacteria" and other nanotech to get an advantage.
For me, and many others, subverting existing human infrastructure through social engineering (to do things like launch nukes), engineering hyper-lethal and virulent pathogens, and the like are all feasible for something modestly above human level, without relying on anything that doesn't exist. The AI will need robust automation to replace humans, but we're already building that ourselves, so...
We could already have had energy too cheap to meter if we had gone full send on nuclear, for one. It would certainly be dirt cheap compared to today's rates.
I think this is overrated, too — though that might be due to reading too many "unboxing" arguments predicated on the assumption that absolutely anyone can be convinced to do absolutely anything, if only you're smart enough to figure out the particular individually-tailored set of Magic Words.
I have never claimed it can convince anyone of literally anything. We've already had plenty of nuclear close-calls simply because of the fog of war or human/technical error.
Similarly, there are already >0 misanthropically omnicidal people alive and kicking, and an AI could empower them to pursue their goals, or they might adopt the AI for that purpose.
Mere humans, or human-run orgs like the CIA, have long engineered regime change. It seems to me incredibly unlikely, to the point that it can be outright dismissed from consideration, that an AGI only modestly higher in intelligence couldn't do the same, and even independently play multiple sides against each other until they all make terrible decisions.
Besides, it's clear that nobody even tries the Yudkowskian boxing approach these days. ARC evals, red-teaming and the like are nowhere close to the maximally paranoid approach, not even for SOTA models.
A group of, say, 160-IQ humans with laser focus and an elimination of many or most of the coordination and trust bottlenecks we face could well become an existential threat. Even a modestly superintelligent or merely genius-level AGI can do that and more.
Empower them how, exactly? What is it that they aren't able to do now only because they're not smart enough, that more intelligence alone can solve? Intelligence isn't magic.
Perhaps, but what's your proof that it could do this so much better than the CIA or anyone else, just because it's smarter? Intelligence isn't magic.
Actually, as a 151 IQ human, I mostly disagree with this, so that's part of it right there.
What's your proof of the part I just emphasized? You appear to simply assume it.
I think you might be a uniquely ineffective 151-IQ human if it doesn't seem plausible to you that a group of very smart humans could do extreme and perhaps existential harm. To me, the main thing preventing that seems to be not the inherent hardness of engineering (or weakness of) something like a COVID-Omicron-Ebola hybrid, but the resistance of an overwhelming majority of other humans (including both very smart ones and mediocre but well-organized ones).
As for what a superintelligent AI changes? Well, for one thing, it eliminates the need to find a bunch of peers. And, with robots, the need for lab assistants.
And I have like 3% P(AI Doom).