
Culture War Roundup for the week of April 10, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Are mottizens just more rational than everyone else, or is it because of chronic contrarianism?

As a pro-nuclear «chronic contrarian»: we can't be relied upon to distinguish the latter from the former. But I'd say it's the diminished vulnerability to threat models that appear poorly substantiated. We don't put much stock in «something may happen» stories.

For the same reason, many here tend to pooh-pooh «the coof», Trump's «attempt at fascist insurrection», the danger of Russia or China, AGI risk, climate change, whatever – even school shootings and violence. On the other hand, we are highly suspicious of risk narratives that seem to justify reductions of freedom in all senses – from directly political ones to the basic freedoms of exploring space and enjoying material abundance; degrowth ideology doesn't appeal to us at all. Inasmuch as there are conservatives and reactionaries here who profess to respect Chesterton's fences and the precautionary principle, that respect is not consistent but restricted to domains where change and action are heavily enemy-coded and in some ways still Puritan, statist and restrictive (e.g. CRT programming in schools).

Put another way, we aren't very contrarian. We're just non-neurotic males with a typical masculine attitude toward minor risks and risky-seeming things. The broader society and its consensus is… less like this.

Case in point:

It’s also enraged a bloc of stoutly anti-nuclear countries that includes Germany and Austria. Seven of them wrote a joint letter earlier this month warning that including nuclear-generated hydrogen could “jeopardize the achievement of … climate targets” and reduce ambitions on renewables.

“The attempt to declare nuclear energy as sustainable and renewable must be resolutely opposed,” Austrian Energy Minister Leonore Gewessler said after the deal.

Nuclear is quite bad if 1) you focus on the tail risk of disasters (Chernobyl, Three Mile Island, Fukushima…) or on mistaken estimates of baseline harmfulness (such as the consequences of waste leaks), and/or 2) evaluate nuclear by its cost per unit of output in the context of prohibitively expensive safety measures predicated upon its danger (assessments, plant designs and, again, secure waste storage over millennia). Put in the proper quantitative context, it's less dangerous per unit of power than most other energy sources. But there's no way to make coal or solar seem so spooky to a layman. I mean – wind, sun, it's all so nice, living in harmony with nature, what could go wrong! So what if we'll need to restrain our capitalist greed and consume a little less, give some rest to our mother Earth! Indeed, it'd be a positive if we got rid of capitalism even without any ecological benefit; some would say that's the whole point. The precariousness of nature also means one can feel morally superior on account of normie, unambitious urbanite life choices.
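The "proper quantitative context" can be made concrete. The figures below are approximate deaths per terawatt-hour from one widely cited dataset (Our World in Data's compilation of safety estimates); treat them as illustrative orders of magnitude, not authoritative values:

```python
# Approximate deaths per TWh of electricity produced, per one widely
# cited compilation (Our World in Data). Illustrative only.
deaths_per_twh = {
    "coal": 24.6,
    "oil": 18.4,
    "natural gas": 2.8,
    "hydropower": 1.3,
    "wind": 0.04,
    "nuclear": 0.03,
    "solar": 0.02,
}

# Rank sources from safest to deadliest per unit of output.
for source, rate in sorted(deaths_per_twh.items(), key=lambda kv: kv[1]):
    print(f"{source:12s} {rate:6.2f}")

# On these numbers, coal is roughly 800x deadlier than nuclear per TWh,
# disasters and waste included:
print(deaths_per_twh["coal"] / deaths_per_twh["nuclear"])
```

On these figures nuclear sits next to wind and solar, orders of magnitude below the fossil fuels it would displace.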

The optics accessible to midwits are just bad, built into every facet of culture from fiction tropes about evil power sources to signs on trash containers; whatever your nerdy arguments, generations of shallow artists competing for NGO grants (with the intent to suffocate, debase and diminish humanity under the guise of rational planning) have conscientiously labored to make it this way.

Not much to do about it now but remind them of the human cost of their actions, meticulously calculated.

The broader society and its consensus is… less like this.

Well, yeah; they don't currently perceive the barbarians are at the gates.

And unfortunately for those [men] for whom the existence of barbarians is a time-tested way to extract payment and investment from broader society in exchange for security guarantees (as it has been since the dawn of humankind), they're correct; this is why the entire society must rationalize its newly-enabled refusal to pay them.

Hence, degrowth as religion; men staying in their parents' household until they die would, in a normally-functioning society, be hideously perverse, but it's certainly a clear reminder of the human cost of their social cohort's actions (and probably the rational thing to do in a society like this).

Yes, investing in growth is objectively the right thing to do, and will make the society even stronger in the long run, but why do that when you can just hoard your gains until death takes them from you?

For the same reason many here tend to pooh-pooh «the coof», Trump's «attempt at fascist insurrection», the danger of Russia or China, AGI risk

Do people on the Motte not take AGI risk seriously? I thought I was the only one here who thought it was overblown.

Most people here seem to take it very seriously, although metacontrarians exist.

For me, AI risk is completely different from nearly all other x-risks, including asteroids, nuclear war, climate change, etc., because the risk from AI cannot be quantified. I ask myself, what would a superintelligence do? I have no fucking clue. And neither does anyone else. People saying, "I'm not worried about X, I'm worried about Y" are missing the point. While it's fun to speculate about X or Y, it is impossible to predict what a superintelligence will do. It's a true unknown unknown. AI risk is nearly unique in that way.

No, the whole point of what you believe is «metacontrarianism» is that it's entirely possible to predict what a superintelligence will do, when we know what it has been trained for and how exactly it's been trained. Terry Tao is a mathematical superintelligence compared to an average human. What will he do? Write stuff, mainly about mathematics. GPT-4 is a superintelligence in the realm of predicting the next token. What will it do? Predict next token superhumanly well. AlphaZero is a tabletop game superintelligence. What will it do? Win at tabletop games. And so it goes.

Intelligence, even general intelligence, even general superintelligence, is not that unlike physical strength as the capacity to exert force: on its own, as a quantity, it's a directionless, harmless capability to process information. Instrumental convergence for intelligence, as commonly understood by LWers, is illiterate bullshit.

What I admit we should fear is superagency, however it is implemented; and indeed it can be powered by an ASI. But that's, well, a bit of an orthogonal concern and should be discussed explicitly.

I'm sure you know about mesaoptimizers. Care to explain why that doesn't apply to your thesis?

That said, I'm not particularly married to any one particular flavor of AI risk. I'm taking the Uncle Vito approach. The AI naysayers have been consistently wrong for the last 5 years, whereas the doomers keep being proven correct.

I know what people have written about mesa-optimizers. They've also written about the Waluigi effect. I am not sure I «know» what mesa-optimizers, with respect to ML, actually are. The onus is on those theorists to mechanistically define them and rigorously show that they exist. For now, all the evidence I've seen has been either Goodhart/overfitting effects well known in ML, or seeing-Jesus-in-toast-tier things like Waluigi.

To be less glib, and granting the premise that mesa-optimizers exist, please see the Plakhov section here. In short: we do not need to know the internal computations and cogitations of a model to know that regularization will still mangle and shred any complex subroutine that does not dedicate itself to furthering the objective.

And it's not like the horny-humans-versus-evolution example, because «evolution» is really just a label for a historical pattern that individual humans can frivolously refuse to humor with their life choices; in model training, the pressure to comply with the objective bears on any mesa-optimizer within its own alleged «lifetime», directly (and not via social shaming or other not-necessarily-compelling proxy mechanisms). Imagine if you received a positive or negative kick to the reward system conditional on your actions having increased or decreased your ultimate procreation success: that isn't anywhere near as easy to cheat as what we do with our sex drive or other motivations. Evolution allows for mesa-optimizers, but gradient descent is far more ruthless.

…Even that would be something of a category error. Models or sub-models don't really receive rewards or punishments; that is another misleading metaphor, itself predicated upon our clunky mesa-optimizing biological mechanisms. They're altered based on the error signal; the results of their behavior and their «evolution» happen on the same ontological plane, unlike our dopaminergic spaghetti, which one can hijack with drugs or self-deception. «Reinforcement learning should be viewed through the lens of selection, not the lens of incentivisation».
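The "regularization shreds idle subroutines" point above can be shown in a minimal numpy sketch. Everything here is made up for illustration: `w_useful` is the only parameter group that affects the training objective, while `w_idle` stands in for a complex subroutine that contributes nothing to it. The idle weights feel only the weight-decay term, so selection grinds them toward zero within the model's own training run:

```python
import numpy as np

# Hypothetical toy setup: a linear "model" trained on a noiseless target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w

w_useful = rng.normal(size=3)  # receives a task gradient
w_idle = rng.normal(size=5)    # receives ONLY the decay term
lr, weight_decay = 0.05, 0.01

for _ in range(10_000):
    # MSE gradient plus L2 weight decay for the useful weights...
    grad_useful = 2 * X.T @ (X @ w_useful - y) / len(y) + 2 * weight_decay * w_useful
    # ...but the idle weights get no task signal at all, pure decay.
    grad_idle = 2 * weight_decay * w_idle
    w_useful -= lr * grad_useful
    w_idle -= lr * grad_idle

print(np.abs(w_idle).max())  # driven to ~0: the idle subroutine is shredded
print(w_useful)              # close to true_w (slightly shrunk by decay)
```

The idle parameters are not "punished" for anything; they are simply selected against by the same update rule that shapes everything else, which is the distinction between selection and incentivisation.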

Humans have a pervasive agency-detection bias. When so much depends on whether an agent really is there, it must be suppressed harshly.


The AI naysayers have been consistently wrong for the last 5 years, whereas the doomers keep being proven correct.

I beg to differ.

The doomers have been wrong for decades, and keep getting more wrong; the AI naysayers are merely wrong in another way. Yudkowsky's whole paradigm has failed, in large part because he has been an AI naysayer in every sense in which current AI has succeeded. Who is being proven correct? The people Yud, in his obstinate ignorance, mocked and still mocks: AI optimists and builders, the pioneers of DL.

You are simply viewing this through the warped lens of Lesswrongian propaganda, with the false dichotomy of AI skepticism and AI doom. The central position both those groups seek to push out of the mainstream is AI optimism, and the case for it is obvious: less labor, more abundance, and everything good we've come to expect from scientific progress since the Enlightenment, delivered as if from a firehose. We are literally deploying those naive Golden Age Sci-Fi retrofuturist dreams that tech-literate nerds loved to poke holes in, like a kitchen robot that is dim-witted yet can converse in a human tongue and seems to have personality. It's supposed to be cool.

Even these doomers are, of course, ex-optimists: they intended to build their own AGI by the 2010s, and now that they've made no progress while others have struck gold, they're going on podcasts, pivoting to policy advice, attempting to character-assassinate those more talented others, and calling them derogatory names like «stupid murder monkeys fighting to eat the poison banana».

Business as usual. We're discussing a similar thing with respect to nuclear power in this very thread. Some folks lose it when a technical solution makes their supposedly necessary illiberal political demands obsolete, and begin producing FUD.

Good point about mesaoptimizers and the difference between evolution and gradient descent.

The onus is on those theorists to mechanistically define them and rigorously show that they exist.

Here's where I disagree. As someone once said, "he who rules is he who sets the null hypothesis". I claim that the onus is on AI researchers to show that their technology is safe. I don't have much faith in glib pronouncements that AI is totally understood and safe.

Nuclear power, on the other hand, is well understood, has bounded downside, and is a mature technology. It's not going to destroy the human race. We can disprove the FUD against it. But in 1945, I might have felt differently.

It's not impossible but very hard in practice to prove a negative. You know that anti-nuclear people also demand extremely strong, cost-prohibitive proofs of safety, which is why we're in this mess. Of course, they have other nefarious motives to suppress human flourishing, but so do AI alarmists.

More to the point: decades ago, Nick Bostrom proposed a taxonomy of x-risks. Those risks should be rigorously compared, for we must hedge against all of them somehow. Some of those risks seem highly likely to me, follow from our prior social failures and even from particularities of the current trend, and are comparable to «total human death» in moral (if not «utilitarian») badness, so the argument that «risk from AI cannot be quantified» doesn't hold. Bostrom:

While some of the events described in the previous section would be certain to actually wipe out Homo sapiens (e.g. a breakdown of a meta-stable vacuum state) others could potentially be survived (such as an all-out nuclear war). If modern civilization were to collapse, however, it is not completely certain that it would arise again even if the human species survived. We may have used up too many of the easily available resources a primitive society would need to use to work itself up to our level of technology. A primitive human society may or may not be more likely to face extinction than any other animal species. But let’s not try that experiment.

If the primitive society lives on but fails to ever get back to current technological levels, let alone go beyond it, then we have an example of a crunch. Here are some potential causes of a crunch:

5.1 Resource depletion or ecological destruction

The natural resources needed to sustain a high-tech civilization are being used up. If some other cataclysm destroys the technology we have, it may not be possible to climb back up to present levels if natural conditions are less favorable than they were for our ancestors, for example if the most easily exploitable coal, oil, and mineral resources have been depleted. (On the other hand, if plenty of information about our technological feats is preserved, that could make a rebirth of civilization easier.)

5.2 Misguided world government or another static social equilibrium stops technological progress

One could imagine a fundamentalist religious or ecological movement one day coming to dominate the world. If by that time there are means of making such a world government stable against insurrections (by advanced surveillance or mind-control technologies), this might permanently put a lid on humanity’s potential to develop to a posthuman level. Aldous Huxley’s Brave New World is a well-known scenario of this type [50].

A world government may not be the only form of stable social equilibrium that could permanently thwart progress. Many regions of the world today have great difficulty building institutions that can support high growth. And historically, there are many places where progress stood still or retreated for significant periods of time. Economic and technological progress may not be as inevitable as it appears to us.

6.3 Repressive totalitarian global regime

Similarly, one can imagine that an intolerant world government, based perhaps on mistaken religious or ethical convictions, is formed, is stable, and decides to realize only a very small part of all the good things a posthuman world could contain.

Such a world government could conceivably be formed by a small group of people if they were in control of the first superintelligence and could select its goals. If the superintelligence arises suddenly and becomes powerful enough to take over the world, the posthuman world may reflect only the idiosyncratic values of the owners or designers of this superintelligence. Depending on what those values are, this scenario would count as a shriek.

It is counterproductive to focus only on the well-propagandized model of AI takeover through FOOM, in an age when AI built on principles radically different from those preferred by the FOOM argument's inventors is undergoing its Cambrian explosion, and in doing so to exacerbate those Crunch-type risks. It is unprincipled. Moreover, it's wishful thinking: if only we could guard our asses from this one threat model! Perhaps one type of risk is truly greater than another, in raw probability or expected negative value or both. But just rehashing thought experiments about Seed AI from the '90s won't suffice to prove that the orthodox AI risk is the greater evil.

Now Bostrom himself proposes building a 6.3 regime, and Eliezer helpfully paves the way to it through his alarmism about training of capable models. I say we should at least demand they spell out why the possibility of eternity under their benevolent yoke, or fizzling out due to squandering our chances to expand, is preferable to getting paperclipped.

Because for me it is not so clear-cut. And be aware that we can fizzle out. I've argued about this here. We evidently have more than one chance to build an «aligned» (or as I'd rather have it, no-alignment-needed) AGI. We don't have infinite time for globohomo committees to surmount their perverse incentives, discover the true name of God through the game of musical chairs at Davos and immanentize Dath Ilan before proceeding to build said AGI – nor, I'd say, very good odds of aligning those committees to play the game in our interest.

Do people on the Motte not take AGI risk seriously?

I don't; I'm more afraid of the economic enclosure potential that will likely result, to say nothing of the power these tools will bestow upon the State. The last 60 years have been bad for civil rights and that was just the result of normal economic centralization; this, by contrast, is advanced centralization.

I know that I take it seriously, but I don't take it seriously because I think I'm going to be turned into a heap of paperclips or atomized by a T-1000. I take it seriously because I see something else coming: a paradigm shift in propaganda and narrative control powered by LLMs, image/video generators and AI-assisted search engines (I'll confess that I may be a little too unironically Kaczynski-pilled). I don't see how the future I envision is any less apocalyptic than the one our loveable quokkas fear, however.

Did you not see the AI threads over the last week? There are plenty of us anti-doomers here.