This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
A heads-up: Yudkowsky will be talking AI with YouTube long-timer Ross Scott on the 3rd. This comes after Ross's last videochat with fans [warning: long, use the timestamps in one of the comments to skip around], where AI and Big Yud came up.
I expect that Ross will be able to wring some sort of explanation about AI risk out of Yudkowsky that will be palatable to the everyman. Ross has talked about things like Peak Oil before (here's an old, old video on the subject), so I think it will be interesting to see. I'll have to see if I can find out Ross's position on AI risk so far.
Yud should stop trying to convince the everyman. I’m not saying that no one should do that, just that Yud specifically should not bother with it. It’s not good for his mental health or morale to be dealing with people this dumb. It clearly disturbs him on an emotional level.
Maybe it's just because I'm not in on the game enough, or that I'm getting bored, or that I'm a little too honest about being stupid, but I'm starting to get the same kind of vibes from these interviews as I get from listening to one too many interviews with 'science popularizers' and physicists talking about black holes and solar systems or whatever. At some point the endless stream of analogies, abstractions and hypothetical arguments just starts sounding like a 2 hour poem about math that I don't understand.
You can assure me it makes sense. You can explain to me how this new and exciting theory of the universe, that hinges entirely on mathematical assumptions, is like dumping a gallon of milk into a box of cereal before pouring it into the bowl, and I can maybe relate to that analogy because I know milk and cereal. But, again, at the end of the day I will never be able to relate that analogy to what is actually being talked about because all that's really there is theoretical math I don't understand.
These conversations seem to follow a similar but slightly different path: there's no actual math, just assumptions being made about the future. The AI man says we are doomed if we continue. Here's a powerful analogy. Here's technobabble about code... Like, dude, you got me, OK? This appeals to my vanity for coffee-table philosophical arguments and you are a credentialed person who sounds confident in your convictions. I guess we are doomed. Now, who is the next guest on Joe Rogan? Oh, science man is going to tell me about a supermassive black hole that can eat the sun. Bro, did you know that a volcanic eruption in Yellowstone park could decimate the entire planet? This doctor was talking about antibiotics and...
I don't want to come across as too belligerent, but all this stuff just seems to occupy the same slot of 'it feels important/novel to care'. I'm not going to pretend to understand or care any more than I would care about Yellowstone. I'll accept all the passionate believers telling me that they told me so when the inevitable mega-earthquakes happen.
But until then I'll just continue enjoying the memes that predate our inevitable apocalypse, with the same urgency that the people worrying over AI show when enjoying yet another 4-hour interview, followed by days of more rigorous debate, over the ever-encroaching extinction-level threat that is AI.
1/2
Your broad impression is correct, with one massive caveat: there's no there, there. It is about milk and cereal, and the pretense that the analogy simplifies some profound idea is a pose; it serves to belittle and bully you into meek acceptance of a conclusion that is not founded on any solid model applying to the bowl and the universe alike. Yud's opinions do not follow from math: he arrived at them before stumbling on convenient math, most other doomers don't even understand the math involved, and none of this math says much of anything about the AI we are likely to build.
It's important to realize, I think, that Yud's education is 75% Science Fiction from his dad's library and 25% Jewish lore in the cheder he flunked out of. That's all he learned systematically in his life, I'm afraid; other than that he just skimmed Kahneman, Cialdini and so on, assorted pop-sci, and some math and physics and comp sci because he is, after all, pretty smart and inclined to play around with cute abstractions. But that's it. He never had to meet deadlines, he never worked empirically, he never applied any of the math he learned in a way that was regularized against some real-world benchmark, KPI or a mean professor. Bluntly, he's a fraud, a simulacrum, an impostor.
More charitably, he's a 43-year-old professional wunderkind whose self-perception hinges on continuing to play the part. He's similar to Yevgeny «Genius» «Maestro» Ponasenkov, a weird fat guy who LARPs as a pre-Revolutionary noble and a maverick historian (based). Colloquially these people are known as freaks and crackpots, and their best defense for the last two millennia is that Socrates was probably the same but he became Great; except he did not LARP as anyone else.
I know this dirty observation is not polite to make among Rationalists. I've talked to really smart and accomplished people who roll their eyes when I say this about Yud, who object «come on now, you're clowning yourself, the guy's some savant – hell, I've got a Ph.D in particle physics and won at the All-Russian Math Olympiad, and he's nobody but talks jargon like he understands it better than my peers» and I want to scream «you dumb defenseless quokka, do you realize that while you were grinding for that Olympiad he was grinding to give off signals of an epic awesome Sci-Fi character?! That for every bit of knowledge, he gets a hundredfold more credit than you, because he arranges it into a mask while you add to the pearl of your inner understanding? That the way Yud comes across is not a glimpse of his formidability but the whole of it? Can you not learn that we wordcels are born with dark magic at the tips of our tongues, magic you do not possess, magic that cannot remake nature but enslaves minds?»
Ahem.
Let's talk about one such analogy, actually the core analogy he uses: it's about human evolution and inclusive genetic fitness. AGI Ruin: A List of Lethalities, 5th Jun '22:
Point 16, Misalignment In The Only Precedent We Know About, is a big deal. There are 46 points in total, but it's a bit of a sham: many are about AGI being smart, the politics of «preventing other people from building an unaligned AGI», handwringing in 39-43, «multiple unaligned AGIs still bad», and other padding. Pretty much every moving part depends on the core argument for AI being very likely to «learn wrong», i.e. acquire traits that unfold as hazardous out of (training) distribution, and the 16th corroborates all such distributional reasoning in B.1 (10-15). 17-19, arguably more, expound on 16.
Accordingly, Yudkowsky cites it a lot and in slightly varied forms, e.g. on Bankless, 20th Feb '23:
On Fridman, 20th March '23:
(Distinguishing SGD from an evolutionary algorithm with the mention of «calculus» is a bit odd).
And on Twitter, April 24th, 2023:
It's not just Yudkowsky these days but e.g. Evan Hubinger, AI safety research scientist at Anthropic, the premier alignment-concerned lab, in 2020.
And Yud's Youtube evangelist Rob Miles, Apr 21, 2023:
2/2
Note that this evo-talk is nothing new. In 2007, Eliezer wrote Adaptation-Executers, not Fitness-Maximizers:
The framing (and snack choice) has subtly changed: back then it was trivial that the «blind idiot god» (New Atheism was still fresh, too) does not optimize for anything and successfully aligns nothing. Back then, Eliezer pooh-poohed gradient descent as well. Now that it's at the heart of AI-as-practiced, evolution is a fellow hill-climbing algorithm that tries very hard to optimize on a loss function yet fails to induce generalized alignment.
I could go on but hopefully we can see that this is a major intuition pump.
It's a bad pump and Evolution is a bad analogy for AGI: inner alignment. Enter Quintin Pope, 13th Aug 2022.
Or putting this in the «sharp left turn» frame:
Put another way: it is crucial that SGD optimizes policies themselves, and with smooth, high-density feedback from their performance on the objective function, while evolution random-walks over architectures and inductive biases of policies. An individual model is vastly more analogous to an individual human than to an evolving species, no matter on how many podcasts Yud says «hill climbing». Evolution in principle cannot be trusted to create policies that work robustly out of distribution: it can only search for local basins of optimality that are conditional on the distribution, outside of which adaptive behavior predicated on stupid evolved inductive biases does not get learned. This consideration makes the analogy based on both algorithms being «hill-climbing» deceptive, and regularized SGD inherently a stronger paradigm for OOD alignment.
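The distinction drawn here between SGD's dense, per-step gradient feedback on a policy's parameters and evolution's blind selection over outcomes can be sketched in toy form. Everything below is illustrative and made up for the sketch: a single "policy parameter", a quadratic loss, and arbitrary hyperparameters.

```python
import random

def loss(w):
    # Toy objective: squared distance of a single "policy parameter" from the optimum at 3.0.
    return (w - 3.0) ** 2

def grad(w):
    # Analytic gradient of the toy loss. SGD receives this smooth,
    # high-density signal about the parameter itself at every step.
    return 2.0 * (w - 3.0)

def sgd(w=0.0, lr=0.1, steps=100):
    # Gradient descent: each update moves the parameter directly along the loss gradient.
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def evolve(pop_size=20, generations=100, sigma=0.5, seed=0):
    # Evolution-style search: random mutation plus selection on fitness alone.
    # It never sees a gradient, only which candidates happened to score better,
    # so it random-walks toward local basins of optimality.
    rng = random.Random(seed)
    pop = [rng.uniform(-5.0, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loss)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = [w + rng.gauss(0.0, sigma) for w in survivors]
        pop = survivors + children
    return min(pop, key=loss)

print(f"SGD result: {sgd():.4f}")
print(f"Evolution result: {evolve():.4f}")
```

Both procedures climb the same hill, which is all the «hill-climbing algorithms» framing captures; the sketch's point is that they climb it with categorically different information about the thing being optimized.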
But Yud keeps making it. When Quintin wrote a damning list of objections to Yud's position (using Bankless episode as a starting point), a month ago, he brought it up in more detail:
Compare, Yud'07: «Cognitive causes are ontologically distinct from evolutionary causes. They are made out of a different kind of stuff. Cognitive causes are made of neurons. Evolutionary causes are made of ancestors.» And «DNA constructs protein brains with reward signals that have a long-distance correlation to reproductive fitness, but a short-distance correlation to organism behavior… We, the handiwork of evolution, are as alien to evolution as our Maker is alien to us.»
So how did Yud'23 respond?
Then he was pitched the evolution problem, and curtly answered the most trivial issue he could instead. «And that's it, I guess».
So the distinction between (what we in this DL era can understand as) learning policies and evolving inductive biases was recognized by Yud as early as 2007; the concrete published-on-LessWrong explanation of why evolution is a bad analogy for AI training dates to 2021 at the latest; Quintin's analysis is 8+ months old; none of this has had much effect on Yud's rhetoric about evolution being an important precedent supporting his pessimism, nor on the conviction of believers that his reasoning is sound.
It seems he's just anchored to the point, and strongly feels these issues are all nitpicks, and the argument should still work, one way or another, at least it proves that something-kinda-like-that is likely and therefore doom is still inevitable – even if evolution «does not use calculus», even if the category of «hill-climbing algorithms» is not informative. He barely glanced at what gradient descent does, and concluded that it's an optimization process, thus he's totally right.
His arguments, on the level of pointing at something particular, are purely verbal, not even verbal math. When he uses specific technical terms, they don't necessarily correspond to the discussed issue, and often sound like buzzwords he vaguely associated with it. Sometimes he's demonstrably ignorant about their meaning. The Big Picture conclusion never changes.
Maybe it can't.
This is a sample from a dunk on Yud that I drafted over 24 hours of pathological irritation recently. Overall it's pretty mean and unhinged, and I'm planning to write something better soon.
Hope this helps.
Bad take, except that MAML also found no purchase, similar to Levine's other ideas.
He directly and accurately describes evolution and its difference from current approaches, but he's aware of a wide range of implementations of meta-learning. In the objections list he literally links to MAML:
And the next paragraph on sharp left turn:
Yuddites, on the other hand, mostly aren't aware of any of that. I am not sure they even read press releases.
I admit this is surprising. I would've predicted that the Butlerian Jihad movement would deprioritize Yud as a crank who might blurt out some risky political take, but he is establishing himself more and more as the Rightful Caliph. Have Yuddites discovered a stash of SBF's lunch money to buy a bunch of podcasters, including some crypto has-beens looking for a new grift? Or is this simply a snowball effect, where Yud becomes more credible and attractive the more podcasts he goes on?
On the other hand, this is all show for the plebs anyway; policy people never lack for experts to cite. And «rationalists» can straight up lie to their audiences even about the words of those experts.
I should accelerate my work on a dunk on Yudkowsky's whole paradigm, even though it honestly feels hopeless and pointless. If anyone has better ideas, I'm all ears.
Yud is a useful idiot. Not in the sense that he's stupid or even that his arguments are wrong: their truth or well-reasoned-ness is entirely beside the point. AI is clearly important, and people (including people with political power) are worried about it and the threat it poses to them. Amplifying a weird, neurotic extremist yelling on the sidelines about paperclips provides useful cover for more "restrained" control over AI: on one side is Yud, on the other are careless AI libertarians hellbent on either destroying civilization or making revenge porn of their exes, and in the middle are helpful folks like the FTC, EEOC, and CFPB who offer careful, educated policies to expand their power to protect the people from unlawful bias and other harmful outcomes.
What makes useful idiots useful is that they are not dumb, just clueless or naive. So it's probably a good description of the non-grifter portion of AI-concerned people.
I don't think Yud is either; he is just optimised for maintaining his grift. I'm not even sure he is aware of what is happening; people usually aren't. In fact, the more intelligent people are, the more susceptible they seem to be to their own arguments.
People subconsciously drink their own Kool-Aid, move goalposts around, and use arguments as soldiers so they don't have to change. It's hilarious how someone so into "rationalism" could be such an antithesis of his stated goal.
Yud was right, Kulak was wrong. The way to get things done is from above, by influencing elites and aspiring elites, not by impressing normies with handsome looks and smart fashion, and not through doomed direct action.
This is only the beginning. The Katechon pact is coming, hide your laptop.
Yud's message is aligned with the powers that be, so his voice will be magically amplified by the algorithm. The state is scrambling to ramp up its AI capabilities. It needs a boot on any ambitious small companies, in the form of a "six-month pause". Yud thinks he's advocating for a less dangerous arms race; in reality he's just helping the most dangerous people catch up.
This makes sense if you consider that Yud takes Roko's Basilisk seriously. He's clearly realized this is his best contribution to its existence.
Well, how did Big Yud react back then, when Roko posted his idea on LessWrong?
Did he call it wrong?
No, Yud went into full loud screaming mode.
https://basilisk.neocities.org/
and then put a total ban on any further basilisk discussion on LW.
Not the reaction of someone who is not even slightly worried.
If Big Y had dismissed this thing or just stayed silent, the whole idea would have been forgotten in a few days like other LW thought experiments. The Streisand effect bites hard even if you are a super genius.
Sure it is. Yudkowsky is exactly the sort of person who would be outraged at the idea of someone sharing what that person claims is a basilisk, regardless of whether he thinks the specific argument makes any sense. He is also exactly the sort of person who would approach internet moderation with hyper-abstract ideas like "anything which claims to be a basilisk should be censored like one" rather than in terms of PR.
Speaking or writing in a way where it's difficult to use your statements to smear you even after combing through decades of remarks is hard. It's why politicians use every question as a jumping-off point to launch into prepared talking-points. Part of Yudkowsky's appeal is that he's a very talented writer who doesn't tend to do that; instead you get the weirdness of his actual thought-processes. When presented with Roko's dumb argument, his thoughts were about "the correct procedure to handle things claiming to be basilisks", rather than "since the argument claims it should be censored, censoring it could be used to argue I believe it, so I should focus on presenting minimum attack-surface against someone trying to smear me that way".
https://archive.is/nM0yJ
I recommend Ross's old Deus Ex review/retrospective, where he discussed various conspiracy theories in detail.
Ross is well suited to these kinds of discussions. He's the best.
To be fair it is sort of cheating when you have a time machine.
Now this is a weird crossover I didn't expect. What's next, Angry Joe interviewing Nick Land?
...Holy shit, I kinda want that. I mean, I never cared for Angry Joe all that much, but still.