
Culture War Roundup for the week of February 13, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I strongly disagree with this. By the same logic, human cognition is itself superhuman in virtually every dimension.

Insofar as the model understands that the word “Insofar” with which I began this sentence means the exact same thing as the word “insofar” I just used inside it, it understands this by figuring out that these two “seemingly unrelated” things are “secretly” the same. And it must do that for every single word, separately.
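The point can be sketched with a toy lookup table (the IDs and vocabulary here are hypothetical, not any real tokenizer's; a minimal illustration, not how production tokenizers are implemented):

```python
# Toy illustration: a tokenizer vocabulary maps surface strings to
# arbitrary integer IDs, so "Insofar" and "insofar" start out as
# unrelated to each other as any two random words. Any equivalence
# between them has to be learned from data.
toy_vocab = {"Insofar": 4821, "insofar": 907, "the": 3, "model": 512}

def toy_encode(words):
    """Look up each word's ID; capitalization changes the ID entirely."""
    return [toy_vocab[w] for w in words]

print(toy_encode(["Insofar"]))  # [4821]
print(toy_encode(["insofar"]))  # [907] -- nothing in the IDs marks these as the same word
```

Nothing in the representation itself encodes that 4821 and 907 name the same concept; the model must infer that purely from how the two IDs are used.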

Human brains have their own methods of figuring these things out that probably sound equally ridiculous at the neuron level. Keep in mind that it's not like we have some sort of access to objective truth which AIs are lacking; it's all sensory input all the way down. A human brain is built to operate on long lists of sight and sound recordings rather than long lists of text, but it still builds logical inferences etc. based on data.

The distance between a pixel mess and 7 fingers is vastly bigger than between 7 fingers and 5; the abyss between early gpt token vomit and a wrong but coherent answer to a question is almost infinitely large compared to what remains.

This just isn't true! In fact, I'd argue that it's the exact opposite. There is practically infinite distance between "render 5 fingers" and "render my 5 fingers", where the latter has to either use some vast outside source of data or somehow intuit the current state of the universe from first principles. The former can be as simple as finding images tagged "five fingers" and sharing them, which is something that Google can do without any LLM assistance at all. I recognize this isn't how LLMs work, but my point is that there are plenty of shortcuts that will quickly lead to being able to generate pixel images of fingers but will not necessarily lead to anything more advanced.

A human brain is built to operate on long lists of sight and sound recordings rather than long lists of text, but it still builds logical inferences etc. based on data.

I credit the Innocence Project with convincing me that the human brain is built on inaccurate sight and sound recordings, the Sequences with convincing me that the human brain builds with irrational logical fallacies, and Kurt Vonnegut with the quote "the only time it's acceptable to use incomplete data is before the heat death of the Universe. Also the only option."

He never said that, it's okay. He's in heaven now.

Human brains have their own methods of figuring these things out that probably sound equally ridiculous at the neuron level. Keep in mind that it's not like we have some sort of access to objective truth which AIs are lacking; it's all sensory input all the way down.

No, I think we have many ridiculous mechanisms, e.g. for maintaining synchrony, but nothing as nonsensical as BPE tokens on the level of data representation. Raw sensory data makes a great deal of sense: we have natural techniques for multimodal integration and for chunking of stimuli on a scale that increases with experience and yet is still controllable. Language is augmented by embodied experience and parsimonious for us; «pixels» and glyphs and letters and words and phrases and sentences and paragraphs exist at once. It can be analogized to a CNN, but it's intrinsically semantically rich and very clever. Incidentally, I think character-based or even pixel transformers are the future. They'll benefit from more and better compute, of course.
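For illustration, here is a toy greedy BPE-style segmenter (the merge list is hypothetical, not GPT's actual vocabulary, and real BPE training is more involved) showing how arbitrary the resulting pieces can be; even a leading space changes the segmentation:

```python
# Minimal sketch of greedy longest-match segmentation over a toy
# subword vocabulary, in the spirit of BPE. The merge list below is
# invented for the example.
merges = ["in", "so", "far", "inso", "insofar", " in", " inso"]

def greedy_segment(text, vocab):
    """Repeatedly take the longest known prefix; fall back to single chars."""
    pieces = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab or j == i + 1:
                pieces.append(text[i:j])
                i = j
                break
    return pieces

print(greedy_segment("insofar", set(merges)))   # ['insofar']
print(greedy_segment(" insofar", set(merges)))  # [' inso', 'far']
```

The same word becomes one piece in isolation but two unrelated pieces after a space; nothing like this happens with raw characters or raw sensory streams.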

I recognize this isn't how LLMs work, but my point is that there are plenty of shortcuts that will quickly lead to being able to generate pixel images of fingers but will not necessarily lead to anything more advanced.

And my point is that humans are wrong to automatically assume the use of any such shortcuts when an LLM does something unexpectedly clever. We use shortcuts because we are lazy, slow, rigid, and already have a very useful world model that allows us to find easy hacks, like a street speedpainter has masks and memorized operations to «draw the new moon» or something else from a narrow repertoire.

They learn the hard way.

Raw sensory data makes a great deal of sense: we have natural techniques for multimodal integration and for chunking of stimuli on a scale that increases with experience and yet is still controllable. Language is augmented by embodied experience and parsimonious for us; «pixels» and glyphs and letters and words and phrases and sentences and paragraphs exist at once. It can be analogized to a CNN, but it's intrinsically semantically rich and very clever.

Sure, I'm plenty willing to accept that the central use cases of the human brain are heavily optimized. On the other hand, there are plenty of noncentral use cases, like math, that we are absolutely terrible at despite having processing power which should be easily sufficient for the task. I would bet that many people have math techniques much less logical and efficient than BPE tokens. It's similar in other areas: we're so optimized for reading others' intentions that sometimes we have an easier time understanding the behavior of objects, natural phenomena, etc. by anthropomorphizing them.

I suspect similar or greater inefficiencies exist at the neuron level, especially for anything we're not directly and heavily optimized for, but it's impossible to prove because we can't reach into the human brain the same way we can reach into LLM code.

And my point is that humans are wrong to automatically assume the use of any such shortcuts when an LLM does something unexpectedly clever. We use shortcuts because we are lazy, slow, rigid, and already have a very useful world model that allows us to find easy hacks, like a street speedpainter has masks and memorized operations to «draw the new moon» or something else from a narrow repertoire.

They learn the hard way.

Well, I do think they find shortcuts, but shortcuts are just a normal part of efficient cognition anyway. In fact, I would characterize cognition itself as a shortcut toward truth; it's impossible to practically make any decisions at all without many layers of assumptions and heuristics. The only perfect simulation is a direct replica of whatever is being simulated, so unless you are capable of creating your own universe and observing the effects of different actions, you must use cognitive shortcuts in order to make any predictions.

There are only more vs less useful shortcuts, and I doubt that any shortcut can even theoretically be more useful than any other without knowledge of the universe the cognitive agent finds itself within. In our universe [the expectation of gravity] is a useful shortcut, but how about the shortcuts used to determine that it's useful? How about the shortcuts used to decide upon those shortcuts? I don't think that from a meta level it is possible to determine which shortcuts will be best; all we can say is that we (as human brains which seem to have been developed for this universe) probably happened to develop shortcuts useful for our circumstances, and which seem more useful than what the AIs have come up with so far.

So the question is not whether AIs are using shortcuts but rather how generalizable the shortcuts that they use are to our current environment, or whether the AI would be capable of developing other shortcuts more useful to a real environment. I think the answer to that depends on whether we can give the AI any sort of long-term memory and real-time training while it retains its other skills.