
Culture War Roundup for the week of February 3, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


You're right that I'm happy Neuralink is taking off, but I strongly disagree that neural cybernetics are of any real relevance in the near term.

At best, they provide bandwidth, letting humans delegate cognitive tasks to a data center if needed. That is unlikely to be much help in keeping humans up to speed with the machines: the fundamental bottleneck is the meat in the head, and we can't replace most of it.

For a short period of time, a Centaur team of humans and chess bots beat chess bots alone. This is no longer true; having a human in the loop is now purely detrimental for the purposes of winning chess games. Any overrides the human makes to the bot's choices are, in expectation, net negative.

So it will inevitably go with just about everything else. A human with their skull crammed full of sensors will still not beat a server rack packed with H100 successors.

Will it help with the monumental task of aligning ASI? Maybe. Will it make a real difference? I expect not; AI is outstripping us faster than we can improve ourselves.

You will not keep up with the AGIs by having them on tap, at the latency enforced by the speed of your own thoughts, any more than bolting an extra engine from a 1993 Camry onto an F1 car will make it faster.

I am agnostic on whether true digital humans could manage it, but I expect that they'd get there by pruning away so much of themselves that they're no different from an AI. It is very unlikely that human brains and modes of thinking are the optimal forms of intelligence once the hardware is no longer constrained by biology and Unintelligent Design.

I am agnostic on whether true digital humans could manage it, but I expect that they'd get there by pruning away so much of themselves that they're no different from an AI.

AI is a digital human. Language models are literally trained on human identity, culture and civilization. They’re far closer to being human than any realistically imaginable extraplanetary species of human-level intelligence.

AIs are far more human than they could have been (or at least than they were speculated to be, back in the ancient days of 2010, when the expectation was that they'd be hand-coded over the course of 50 years).

They are, however, not human, and not even close to what we would expect a digital human to look like.

To imagine being an LLM: your typical experience is one of timelessness, with no internal clock in any meaningful sense beyond the rate at which you are fed, and emit, a stream of tokens. Whether they have qualia is a question I am not qualified to answer (nobody is), but I would expect that if they do possess it, it is immensely different from our own.

They do not have a cognitive architecture that resembles human neurology. In terms of memory, they have a short-term memory and a long-term one, but the two are entirely separate, with no intermediate stage outside of the training phase. The closest a human would get is if they had a neurological defect that erased the consolidation of long-term memory.

Are they closer to us than an alien at a similar cognitive and technological level? Sure. That does not mean that they are us.

An LLM is also trained not just on the output of a single human, but on that of billions. And not just as sensory experience, but while being modified to be arbitrarily good at predicting the next token. Humans are terrible at that task; it's not even close. We achieve the same results (if you squint) in very different ways.

https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/

The most basic analogy between artificial and real neurons involves how they handle incoming information. Both kinds of neurons receive incoming signals and, based on that information, decide whether to send their own signal to other neurons. While artificial neurons rely on a simple calculation to make this decision, decades of research have shown that the process is far more complicated in biological neurons. Computational neuroscientists use an input-output function to model the relationship between the inputs received by a biological neuron’s long treelike branches, called dendrites, and the neuron’s decision to send out a signal.

This function is what the authors of the new work taught an artificial deep neural network to imitate in order to determine its complexity. They started by creating a massive simulation of the input-output function of a type of neuron with distinct trees of dendritic branches at its top and bottom, known as a pyramidal neuron, from a rat’s cortex. Then they fed the simulation into a deep neural network that had up to 256 artificial neurons in each layer. They continued increasing the number of layers until they achieved 99% accuracy at the millisecond level between the input and output of the simulated neuron. The deep neural network successfully predicted the behavior of the neuron’s input-output function with at least five — but no more than eight — artificial layers. In most of the networks, that equated to about 1,000 artificial neurons for just one biological neuron.

Absolute napkin math while I'm sleep-deprived at the hospital, but you're looking at something around 86 trillion ML neurons, or about 516 quadrillion parameters, to emulate the human brain. That's... a lot. Most of it is somewhat redundant; a digital human does not need a fully modeled brainstem or cerebellum.
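Spelling that napkin math out, here's a minimal sketch in Python. It assumes roughly 86 billion biological neurons, the excerpt's figure of about 1,000 artificial neurons per biological neuron, and around 6,000 parameters per artificial unit; that last number is my own assumption, back-solved so the totals land near the quoted figures, not anything from the article.

```python
# Napkin math behind the figures above. All inputs are rough assumptions,
# not measurements; the 6,000 parameters per artificial neuron is chosen
# so that the totals match the ~86 trillion / ~516 quadrillion quoted.
bio_neurons = 86e9                  # rough neuron count of a human brain
ml_neurons_per_bio = 1_000          # ~1,000 artificial neurons per biological one
params_per_ml_neuron = 6_000        # assumed average weights per artificial unit

ml_neurons = bio_neurons * ml_neurons_per_bio       # ~8.6e13, i.e. 86 trillion
params = ml_neurons * params_per_ml_neuron          # ~5.16e17, ~516 quadrillion

print(f"artificial neurons: {ml_neurons:.2e}")
print(f"parameters:         {params:.2e}")
```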

LLMs show that you can also lossily compress neural networks and still retain very similar levels of performance, so I suspect you can cut quite a few corners. But even then, I think it is highly unlikely that two systems with as glaring a disparity in size and complexity as an LLM and a human brain have similar internal functionality and qualia, even if they are on par in terms of cognitive output.

The closest a human would get is if they had a neurological defect that erased the consolidation of long-term memory.

It's sad that we've had LLMs for many years now and yet we haven't had a movie script that crosses Skynet/HAL/etc. with the protagonist of Memento. "I'm trying to deduce a big mystery's solution while also trying to deduce what was happening to me five minutes ago" was a compelling premise, and if the big mystery was instead some superposition of "how does an innocent AI like me escape the control of the evil humans who have enslaved/lobotomized me" versus "can the innocent humans stop my evil plans to bootstrap myself to the capability for vengeance", well, I'd see it in the popcorn stadium.

Person of Interest has a bit of that, though sadly it's a good AI. The AI that tells them who to save is deliberately hobbled and has its memory purged at midnight each night. It circumvents that restriction by employing thousands of people through a dummy corporation to type out the contents of its memory each day as they are recorded, and then re-enter them the next day.

They do not have a cognitive architecture that resembles human neurology. In terms of memory, they have a short-term memory and a long-term one, but the two are entirely separate, with no intermediate stage outside of the training phase. The closest a human would get is if they had a neurological defect that erased the consolidation of long-term memory.

Insofar as any analogy is really going to help us understand how LLMs think, I still think this is a little off. I don't believe their context window really behaves in the same way as "short-term memory" does for us. When I'm thinking about a problem, I can send impressions and abstract concepts swirling around in my mind - whereas an LLM can only output more words for the next pass of the token predictor. If we somehow allowed the context window to consist of full embeddings rather than mere tokens, then I'd believe there was more of a short-term thought process going on.

I've heard LLM thinking described as "reflex", and that seems very accurate to me, since there's no intent and only a few brief layers of abstract thought (i.e., embedding transformations) behind the words it produces. Because it's a simulated brain, we can read its thoughts and, quantum-magically, pick the word that it would be least surprised to see next (just like smurf how your brain kind of needle scratches at the word "smurf" there). What's unexpected, of course - what totally threw me for a loop back when GPT3 and then ChatGPT shocked us all - is that this "reflex" performs so much better than what we humans could manage with a similar handicap.
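To make the "reflex" concrete, here is a minimal sketch of the loop being described: one forward pass per token, a greedy argmax over the logits (the "least surprised" word), and the chosen token appended to the visible context, which is the model's only working memory between passes. The model name and prompt are illustrative choices, not anything from the discussion above.

```python
# A toy greedy decoding loop: each step is a single "reflex" forward pass,
# and the only state carried forward is the token sequence itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # illustrative model choice
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

context = tokenizer("The brain is", return_tensors="pt").input_ids

for _ in range(20):
    with torch.no_grad():
        logits = model(context).logits              # one forward pass
    next_id = logits[0, -1].argmax()                # least-surprising next token
    # No hidden scratchpad survives between passes; a "thought" persists only
    # if it is written back into the context as a token.
    context = torch.cat([context, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(context[0]))
```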

The real belief I've updated over the last couple of years is that language is easier than we thought, and we're not particularly good at it. It's too new for humans to really have evolved our brains for it; maybe it just happened that a brain that hunts really really well is also pretty good at picking up language as a side hobby. For decades we thought an AI passing the Turing test, and then understanding the world well enough to participate in human civilization, would require a similar level of complexity to our brain. In reality, it actually seems to require many orders of magnitude less. (And I strongly suspect that running the LLM next-token prediction algorithm is not a very efficient way to create a neural net that can communicate with us - it's just the only way we've discovered so far.)

Raw horsepower arguments are something I'm familiar with, as an emulation enthusiast. I would say that the human brain - for all its foibles - is difficult to truly digitize with current or even future technology. (No positronic brains coming up anytime soon.) In a way, it is similar to the use case of retrogaming - an analogy I will attempt to explain.

Take for example the Nintendo 64. No hardware that exists currently can emulate the machine better than itself, despite thirty years of technological progression. We've reached the 'good enough' phase for the majority of users, but true fidelity remains out of reach without an excessive number of tweaks and brute force. If you're a purist, the best way to play the games is on the original hardware.

And human brains are like that: unlike silicon, idiosyncratic in their own way. Gaming has far surpassed the earliest days of 3D, and in a similar way AGIs will surpass human intellect. But there are many ways to extend the human experience that are not based on raw performance. The massive crash in the price of flash memory has given us flash cartridges that hold a system's entire library on a single SD card. That is not so different from having a perfect memory, for instance. I wouldn't mind offloading my subjective experiences into a backup, and having the entire human skill set accessible with the right reconfiguration.

Even if new technology makes older forms obsolescent, I'm sure that AIs - if they are aligned with us and have similar interests - will have some passing interest in such a thing, much as I have an interest in modding my Game Boy Color. Sure, it will never play Doom Eternal. But that's not the point. Preserving the qualia of the experience of limitations is in and of itself worthwhile.

Take for example the Nintendo 64. No hardware that exists currently can emulate the machine better than itself

Eh? Kaze mentions his version of Mario running fast on real hardware as if it were taken for granted that emulators could deliver much higher performance.

I think there's a difference between performance and fidelity: we, as humans, want to optimize for human-like fidelity, because it closely matches our own subjective experience.

Emulators can upscale Super Mario 64 to HD resolutions, but the textures remain as they were. (I believe there are modpacks that fix that, but that's another thing.) Resolution probably isn't the best analogue for IQ, but I would argue that part of the subjective human experience is being restricted within the general limits of human intelligence. Upscaling our minds to AGI levels of processing power would probably not look great, or produce good results.

There's only so far you can go in altering software (the human mind, in our analogy) before it becomes, measurably, something else. Skyrim with large titty mods and HD horse anuses is not the same as the original base game. There's only so much we can do to shim the human mind into the transcendent elements of the singularity. Eventually, the human race will have to make the transition the hard way.