This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Eliezer Yudkowsky has explicitly noted* the alternative solution to this problem:
If you think China is going to destroy the world, the correct solution is not to destroy the world yourself as if RL is a game of DOTA; it's to stop China from destroying the world. Tell them that doing this will end the world. If they keep doing it, tell them that if they don't stop, you'll nuke them, and that their retaliation against this is irrelevant because it can't kill more Americans than the "all of them" that will be killed if they continue. If they don't stop after that, nuke them, and pray that there's some more sanity the next time around.
*To be clear, I was nearly done writing a similar essay myself, because I didn't think he had the guts to spit it out (certainly most top Rats don't). Apparently he did.
The race to develop bioweapons for absolutely no plausible reason continues, even after megadeaths. The AI race is locked in; it's staggeringly naïve to think this genie can be put back in the bottle now. AI is profits and power; it's wildly popular and well-used. It's Eliezer and co up against Microsoft, Google, Facebook, DARPA, DoD, ERPers on /g/, kids who don't want to do their homework, a horde of ambitious entrepreneurs around the world, Tencent, Huawei, Nvidia and the Chinese state. Instant loss.
The US can't tell China 'stop or we nuke'; the US is itself doing it right now, bigger and better than anyone else! It's not merely framed as a race; it is a race and was always going to be a race. Politicians are uber-boomers and don't have the balls to go all in on anything like this with their lives, reputations and eternal legacies. They can't be totally sure that ASI means death. You only find out that ASI means death when it kills you.
I'd prefer this technology not be developed at all. It's a terrible decision at the species level. We spent hundreds of thousands of years wiping out our siblings in the Homo genus; we earned our Sapiens title and sole dominance of the world. Now we want to introduce a new contestant? Are we insane? But the dynamics demand it. When considering the balance of powers involved, the most Yud and co can hope for is to smooth out the edges a little bit, not go all in on a strategy with 0 chance of success.
From where I sit, hoping for neural net alignment is itself a strategy with ~0 chance of success. Reality is under no obligation to give you a "reasonable" solution.
Just to be clear, since this is a very common misconception, Eliezer advocated conventional airstrikes on GPU clusters, not a nuclear first strike. He brought up nuclear war because you have to be willing to do it even if the rogue datacenter is located inside a nuclear power like Russia or China and military action therefore carries some inherent risk of going nuclear. But most people read that paragraph and rounded it to "Eliezer advocates nuking rogue GPU clusters", because of course they did.
He elaborates on this in the two addenda to that Times piece that he posted on Twitter, as seen in the LessWrong edition of the article.
I know what he said, but I was deviating slightly. Given the Chinese IADS, and given a CPC committed to AGI (note that this latter is a condition I do not think is necessarily true IRL), there is probably no way to actually destroy the Chinese capacity to pursue AGI without nuclear attacks - against the IADS, but also against the datacentres themselves; it's not like the CPC lacks the resources to put them inside conventional-proof bunkers if it fears an attack, and actually invading China to put an end to things that way is roughly in the realm of "either you drop a couple of hundred nukes on them first to soften them up, or this is as much of a non-starter as fucking Sealion". And even if there were a way to do it conventionally, it almost certainly exceeds the threshold of damage that would get the Chinese deterrent launched. Thus, it is a lot more pragmatic to simply open up with a nuclear alpha strike; you know this ends in a nuclear exchange anyway, so it's best to have it on your terms. I would agree that it's best to keep to conventional weapons if e.g. Panama were to try to build Skynet.
I'm not advocating Nuclear War Now IRL, because the situation posited is not the real situation; the USA has not made the offer of a mutual halt to AI, and I find it fairly likely that such an offer would actually be accepted (it's not like the CPC wants to end the world, after all; they're way up the other end of "keep things under control and stable, no matter the cost"). To the extent I'm less opposed to nuclear war than I'd otherwise be, it's because I suspect that the gameboard might be in an unwinnable state - and mostly on the US side, because too much of US discourse is held on platforms controlled by AI companies (YouTube, Facebook and Twitter are all owned by companies or people that also do AI, and devices themselves mostly run Microsoft/Apple/Google OSes, which also do AI; the latter is relevant because e.g. the Apple Vision Pro is designed to function as a brainwashing helmet), and because Andreessen Horowitz has potentially captured/bribed the Trump admin on AI policy - making a mulligan seem like it would probably lower P(Doom). I'm not going to go out and start one for that reason, though, even if I knew how; Pride is my sin, and it's not even close, but I still don't have that much of it.