
Culture War Roundup for the week of April 7, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


The most positive realistic scenario I can think of involves steady, gradual progression to superintelligence, widely distributed. Google, OpenAI, Grok and DeepSeek might be ahead, but not that far ahead of Qwen, Anthropic and Mistral (Meta looks NGMI at this point). A superintelligence achieved today could eat the world, but by 2027 it would only be first among equals.

If it turns out that our current approach to AI fizzles out at von Neumann IQ levels, then all is good, as historically that has not been sufficient intelligence to take over the world. In that case, it does not matter much who reaches the plateau first: sure, it will be a large boon to their economy, but eventually AI will just become a commodity.

On the other hand, if AI is able to move much beyond human levels of intelligence (which is what the term "superintelligence" implies), then we are in trouble. The nightmare version is that there are unrealized algorithmic gains which let you squeeze much more performance out of existing hardware. Someone tells an AI cluster to self-improve one evening, and by morning, that AI is to us as we are to ants.

In such a scenario, it is winner takes all. (Depending on how alignment turns out, the winner may or may not be the company running the AI.) The logical next step is to pull up the ladder which you have just climbed. Even if alignment turns out to be trivial, nobody wants to give North Korea a chance to build their own superintelligence. At the very least, you tell your ASI to backdoor all the other big AI clusters. It does not matter if they would have achieved the same result the next night, or if they were lagging a year behind.

(Of course, if ASI is powerful enough, it might not matter who wins the race. The vision the CCP has for our light cone might not be all that different from the vision Musk has. Does it matter if we spread to the galaxy in the name of Sam Altman or Kim Jong Un? More troublesome is the case where ASI makes existing militaries functionally obsolete, but does not solve scarcity.)

How valuable is intelligence?

One data point that I've been mulling over: humans. We currently have the capability to continue to scale up our brains and intelligence (we could likely double our brain size before running into biological and physical constraints). And the very reason we evolved intelligence in the first place was that it gave an adaptive advantage to the people who had more of it.

And yet larger brain size doesn't seem to be selected for in modern society. Our brains are smaller than our recent human ancestors' (~10% smaller). Intelligence and its correlates don't appear to positively affect fertility. There's now a reverse Flynn effect in some studies.

Of course, there are lots of potential reasons for this. Maybe the metabolic cost is too great; maybe our intelligence is "misaligned" with our reproductive goals; maybe we've self-domesticated, and overly intelligent people are more like cancer cells that need to be eliminated for the functioning of our emergent social organism.

But the point remains that winning a game of intelligence is not in itself something that leads to winning a war for resources. Other factors can and do take precedence.

This assumes that something like human-level intelligence, give or take, is the best the universe can do. If superintelligence far exceeding human intelligence is realizable on hefty GPUs, I don't think we can draw any conclusions from the effects of marginal increases in human intelligence.

I agree with you; I think there were diminishing returns on intelligence in the ancestral environment. If your task is to hunt mammoths, then a brain capable of coming up with quantum field theory is likely not going to help much.

we could likely double our brain size before running into biological and physical constraints

Today, sure. (Not that we have identified the genes we would have to change for that. Also, brain size is not everything; it is not the case that a genius simply has a much larger brain than an average person.)

In the ancestral environment, I don't think so. Giving birth is already much more difficult for humans than for most other mammals, and the cause is the large cross-section of the head.

I think you need to have a clear idea of what "intelligence" even means before you can start to assess how valuable it is.

As one thinker just posted on Truth Social an hour ago:

THE BEST DEFINITION OF INTELLIGENCE IS THE ABILITY TO PREDICT THE FUTURE!!!


I've been pulling heads out of very stretched vaginas for the past week, and suspect there are biological reasons other than metabolism why larger head size is selected against.
This might go away if we got rid of the sexually antagonistic selection that limits larger hip sizes in women.

Human heads used to be bigger, though. And childbirth is much less likely to result in death now than before, thanks to human intelligence and the heroic efforts of professionals like yourself. And if increases in intelligence did offer a significant reproductive benefit, larger hips that enabled that intelligence would be selected for.

Bigger faces as adults, due to e.g. much larger jaws, IIRC. I don't think head size at birth was much different, was it?

The nightmare version is that there are unrealized algorithmic gains which let you squeeze much more performance out of existing hardware. Someone tells an AI cluster to self-improve one evening, and by morning, that AI is to us as we are to ants.

This implies that it is possible to self-improve (e.g. to become more intelligent) with only limited interaction with the real world.

That is a contentious claim, to say the least.

This is one of several areas where the consensus of those who are actively engaged in the design and development of the algorithms and interfaces breaks sharply with the consensus of the less technical, more philosophically oriented "AI Safetyism" crowd.

I think that, coming from "a world of ideas" rather than "results", guys like Scott, Altman, Yudkowsky, et al. assume that the "idea" must be where all the difficulties reside, and that the underlying mechanisms, frameworks, hardware, etc. that make an idea possible are mere details to be worked out later rather than something like 99.99% of the actual work.

See the old Carl Sagan quote about how, in order to make an apple pie "from scratch", you would first have to create a universe with apples in it.

Indeed.

And while I don't claim particular expertise such that my opinion ought to be given too much weight, I'm with Feynman when he said it doesn't matter how nice your idea is; you have to go test it and find out.

I think the problem is that we still lack a fundamental theory about what intelligence is, and quantifiable ways to measure it and apply theoretical bounds. Personally, I have a few suspicions:

  • "Human intelligence" will end up being poorly quantified by a single "IQ" value, even if such a model probably works as a simplest-possible linear fit. Modern "AI" does well on a couple new axes, but still is missing some parts of the puzzle. And I'm not quite sure what those are, either.
  • Existing training techniques are tremendously inefficient: while they're fundamentally different, humans can be trained with less than 20 person-years of effort and less than "the entire available corpus of English literature." I mean, read the classics, man, but I doubt reading all of Gibbon is truly necessary for the average doctor or physicist, or that most of them have today.
  • There are theoretical bounds to "intelligence": if the right model is, loosely, "next token predictor" (and of that I'm not very certain), I expect that naively increasing the window size helps substantially up to a point, after which your inputs become "the state of butterfly wings in China" and substantially less useful. How well, in general, can "the next token" be predicted from a given quantity (or quality?) of data? Clearly five words won't beget the entirety of human knowledge, but neither am I convinced that even the best models are very bright as a function of how well read they are, even if they have read all of Gibbon. (A toy version of the "next token predictor" framing is sketched below this list.)
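
To make the "next token predictor" framing concrete, here is a toy sketch (my own illustration with a made-up corpus, nothing like how production models actually work): a bigram model that predicts the next word purely from counts of what followed the previous word.

```python
from collections import Counter, defaultdict

# Toy "next token predictor": a bigram model with a one-word context window.
# Purely illustrative -- real models condition on thousands of tokens, not one.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    if word not in follow_counts:
        return None
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("sat"))   # -> 'on'
print(predict_next("the"))   # -> 'cat' (an arbitrary pick among equally common followers)
```

With a one-word window the predictions are nearly useless; widening the window helps, and the open question above is how far that actually goes.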

If it turns out that our current approach to AI fizzles out at von Neumann IQ levels, then all is good, as historically that has not been sufficient intelligence to take over the world.

Well, we don't know. We ran this experiment with one von Neumann, or maybe a handful, but not with a datacenter full of von Neumanns running at 100x human speed. While we don't know whether the quality of a single reasoner can be scaled far beyond what is humanly possible, given our understanding of the technology it is almost certain that the quantity can (as in, we can produce more copies more cheaply and reliably than we can produce copies of human geniuses), and, within certain limits, so can the speed (insofar as we are still quite far from the theoretical limit on how fast current AI models could be executed, even using only existing technology).

What makes you think there are huge unrealized wins in unknown algorithmic improvements? In other domains, e.g. compression, we've gotten close to the information-theoretic limits we know about (e.g. Shannon limits for signal processing), so I'd guess that the sustained high effort applied to AI has gotten us close to limits we haven't quite modeled yet, leaving not much room for even a superintelligence to foom. IOW, we humans aren't half bad at algorithmic cleverness, and maybe AIs don't end up beating us by enough to matter even if they're arbitrarily smart.
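
For concreteness, the "Shannon limits" being invoked, stated loosely as reference points (standard results, not anything specific to AI): lossless compression of an i.i.d. source cannot beat its entropy, and a noisy channel cannot reliably carry more than its capacity.

```latex
% Source coding: expected code length per symbol is bounded below by the entropy
H(X) = -\sum_{x} p(x)\,\log_2 p(x) \quad \text{[bits per symbol]}

% Shannon-Hartley: capacity of a channel with bandwidth B and signal-to-noise ratio S/N
C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{[bits per second]}
```

Modern entropy coders and error-correcting codes operate close to these bounds, which is the sense in which those fields have "gotten close."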

What makes you think there are huge unrealized wins in unknown algorithmic improvements?

I don't think that it is the case, just that it is possible. I called it the nightmare version because it would enable a very steep take-off, while designing new hardware would likely introduce some delay: just as even the world's most brilliant engineer in 2025 cannot quickly build a car if he has to work with Stone Age tech, an ASI might require some human-scale time (e.g. weeks) to develop new computational hardware.

You mention compression, which is kind of a funny case. The fundamental compressibility of a finite sequence is its Kolmogorov complexity. Basically, it is impossible to tell whether a sequence was generated by a pseudo-random number generator (and thus could be encoded by just specifying that generator and its seed) or whether it is truly random (and thus your compression is whatever Shannon gives you). At least for compression, we have a good understanding of what is and is not possible.
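
A small illustration of that point, using Python's zlib as a stand-in for a general-purpose compressor (a sketch; the exact ratios will vary): the output of a seeded PRNG has tiny Kolmogorov complexity (the generator plus its seed), but a generic compressor cannot see that structure and treats it as incompressible.

```python
import random
import zlib

# 1 MB of output from a seeded PRNG: its Kolmogorov complexity is tiny
# (a few lines of code plus the seed), but zlib cannot exploit that.
random.seed(42)
prng_bytes = random.randbytes(1_000_000)
print(len(zlib.compress(prng_bytes, 9)) / len(prng_bytes))   # ~1.0, i.e. essentially no compression

# Data with visible statistical structure compresses fine.
repetitive = b"all work and no play " * 50_000
print(len(zlib.compress(repetitive, 9)) / len(repetitive))   # far below 1.0
```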

Also, intuition only gets us so far with algorithmic complexity. Take matrix multiplication. Naively done, it is O(n^3), and few people would suspect that one can do better than that. However, the best algorithm known today is O(n^2.37), and practical algorithms can easily achieve a scaling of O(n^2.81). "I cannot find an algorithm faster than O(f(n)), hence O(f(n)) is the complexity class of the problem" is not sound reasoning. In fact, the best known lower bound for matrix multiplication is Omega(n^2).
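
To make the matrix-multiplication example concrete, here is a bare-bones sketch of Strassen's algorithm (the O(n^2.81) one), assuming square matrices whose size is a power of two; the cutoff value is arbitrary.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices (n a power of two) with Strassen's
    7-multiplication recursion: O(n^2.81) instead of the naive O(n^3)."""
    n = A.shape[0]
    if n <= cutoff:          # small blocks: ordinary cubic-time product
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # Seven recursive products instead of the obvious eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)

    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

A, B = np.random.rand(256, 256), np.random.rand(256, 256)
assert np.allclose(strassen(A, B), A @ B)
```

The seven-multiplication trick is not something you stumble on by intuition, which is exactly the gap between "I cannot find a faster algorithm" and an actual lower bound.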

For AI, things are much worse. Sure, parts of it are giant inscrutable matrices, where we have some lower bounds for linear algebra algorithms, but what we would want is a theorem which gives an upper bound on intelligence at a given network size. While I only read Zvi occasionally, my understanding is that we do not have a formal definition of intelligence, never mind one which is practically computable. What we have are crude benchmarks like IQ tests or their AI variants (which are obviously ill-suited for appearing in formal theorems), and they at most give us lower bounds on what is possible.

Kolmogorov complexity is, IMO, a "cute" definition, but it's not constructive like the Shannon limit, and it's a bit fuzzy on the subject of existing domain knowledge. For lossy compression, performance is a function of how much loss is acceptable, and it's possible to expect numerically great performance compressing, say, a Hallmark movie, because all Hallmark movies are pretty similar, and with enough domain knowledge you can cobble together a "passable" reconstruction from a two-sentence plot summary. You can highly compress a given Shakespeare play if your decompression algorithm has the entire text of the Bard to pull from: "Hamlet" is enough!
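
That "the decompressor already knows the Bard" effect can be demonstrated even with a dumb general-purpose tool (a sketch using zlib's preset-dictionary feature, which only exploits the last 32 KB of shared text, so it is a pale imitation of real domain knowledge; the corpus and message here are made up for illustration):

```python
import zlib

# Pretend `shared_corpus` is text both sides already have on hand.
shared_corpus = b"To be, or not to be, that is the question: " * 700   # ~30 KB, under the 32 KB window
message = b"To be, or not to be, that is the question: whether 'tis nobler..."

plain = zlib.compress(message, 9)

co = zlib.compressobj(level=9, zdict=shared_corpus)    # compressor knows the corpus
with_dict = co.compress(message) + co.flush()

do = zlib.decompressobj(zdict=shared_corpus)           # so does the decompressor
assert do.decompress(with_dict) == message

print(len(plain), len(with_dict))   # the shared dictionary typically shrinks the output noticeably
```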

I'm pretty sure von Neumann could have quite easily taken over the world if he could have copied himself infinite times and perfectly coordinated all his clones through a hive mind.

Completely ignoring scaling of agents is weird.

I think that there is some truth to what you and @4bpp are pointing out: the expensive part with an LLM is the training. With the hardware you require to train your network (in any practical time), you can then run quite a few instances. Not nearly an infinite number, though.
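
Rough back-of-envelope on "quite a few instances, not infinite", using the common approximation that a forward pass costs about 2·N FLOPs per generated token; every number below is an assumption I'm making up for illustration, not a claim about any real system.

```python
# How many "human-speed" instances could a hypothetical training cluster serve?
gpus            = 10_000    # GPUs in the assumed cluster
flops_per_gpu   = 1e15      # sustained FLOP/s per GPU (very rough)
utilization     = 0.4       # fraction actually achieved while serving
params          = 1e12      # model parameter count N (assumed)
tokens_per_inst = 10        # tokens/s for one "human-speed" instance (assumed)

cluster_flops   = gpus * flops_per_gpu * utilization   # ~4e18 FLOP/s
flops_per_token = 2 * params                           # ~2e12 FLOPs per token
total_tokens_s  = cluster_flops / flops_per_token      # ~2e6 tokens/s in aggregate
instances       = total_tokens_s / tokens_per_inst     # ~200,000 instances

print(f"{instances:,.0f} concurrent human-speed instances")
```

A lot of copies, but a finite, hardware-bound number, which is the "not nearly infinite" part.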

Still, I would argue that we know from history that taking over the world through intelligence is a hard problem. In the Cold War, both sides tried stuff that was a lot more outlandish than paying the smartest people in their country to think of a way to defeat their opponent. If that problem were solvable with one von Neumann-year, history would be different.

Also, in my model, other companies would perhaps be lagging ten IQ points behind, so all the low-hanging fruit like "write a software stack which is formally proven correct" would already have been picked.

I will concede, though, that it is hardly clear that the von Neumanns would not be able to take over the world, and just claim that it would not be a foregone conclusion, as it would be with an IQ 1000 superintelligence.

Does a pretrained, static LLM really measure up to your "actually von Neumann" model? Real humans are capable of on-line learning, and I haven't seen that done practically for LLM-type systems. Without that, you're stuck with whatever novel information you keep in your context window, which is finite. It seems like something a real human could take advantage of against today's models.

Setting aside the big questions of what machine intelligence even looks like, and whether generative models can be meaningfully described as "agents" in the first place.

The scale of even relatively "stupid" algorithms like GPT would seem to make the "hard takeoff" scenario unlikely.

Hilarious comment to read considering von Neumann gave his name to von Neumann probes.

Yeah, but he couldn't, and didn't. There's no reason to believe that a von Neumann level supercomputer can marshal the resources necessary to create a clone, let alone an infinite number of clones.

Yeah, but he couldn't, and didn't.

Yes, he was a flesh-and-blood human who died before the invention of reliable human cloning. (And cloning doesn't produce an identical copy of the genius adult; it produces a delayed-birth identical twin that needs to be raised in order to be smart.)

There's no reason to believe that a von Neumann level supercomputer can marshal the resources necessary to create a clone, let alone an infinite number of clones.

Apart from the fact that "cloning" an instance of software is as simple as just starting the same program again (on a different machine, in this case)? If your stupidest co-worker can do it, it seems like a fair bet that von Neumann could, too.

It can easily clone the software, but not a machine that can run it.

Von Neumann was not a supercomputer; he was a meat human with a normal-ish ≈20 W brain, i.e. about 1/40th the power draw of a modern GPU. This is proof that if you can emulate an idiot, there exists an algorithm of very similar computational intensity that gets you a von Neumann.

That's a pretty non-von Neumann thought to have, my fellow clone of von Neumann.

Call me the emperor of drift lol