Culture War Roundup for the week of September 5, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Well this, I'd assume, is because it has no way to know what 'rhyming' is in terms of the sounds we associate with words, since text doesn't convey that unless you already know how the words are pronounced.

Unfortunately, it's a dumber problem than that. Neural nets can pick up a lot of very surprising things from their source data. StableDiffusion can pick up artists and connotations that aren't obvious from its input data, and GPT is starting to 'learn' some limited math despite never being taught what the underlying mathematical symbols mean (albeit with some often-sharp limitations). GPT actually has a near-encyclopedic knowledge of IPA pronunciation, and you can easily prompt it to rewrite whole sentences in phonetic transcription. And we're not talking about a situation where these models try to do something rhyme-like and fail, like matching up words with a large number of overlapping letters without understanding pronunciation. Indeed, one of the few ways people have successfully gotten rhymes out of it involves prompting it to spell out the pronunciation first. (Though note that this runs into, and very quickly fills up, the available attention.) Instead, GPT and GPT-like approaches struggle to rhyme even when trained on a corpus of poetry or limericks: the information is in the training data, it's just inaccessible at the scope the model works at. Either it copies a rhyme verbatim from its training data, or it doesn't get very close.

Gwern makes the credible argument that (at least part of) GPT's problem is that it works on fairly weird byte-pair encodings. These avoid the massively diminishing returns that would hit much earlier had the model been trained on phonetic or character-level units, but at the cost of eliminating the ability to handle, or even examine, certain sub-encoding concepts. It's possible that we'll eventually get enough input data and parameters to break through these limits from an unintuitive angle, but the split from how we suspect human brains handle things may just mean that BPEs at this scope produce bad results in this domain, and a better workaround needs to be designed (at least where those concepts have to be examined).
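To make the mechanism concrete, here's a toy sketch of greedy byte-pair merging (the merge table is made up for illustration; GPT's real vocabulary has tens of thousands of learned merges). The point is that two rhyming words can come out segmented completely differently, so the shared ending is invisible at the token level:

```python
# Hypothetical merge table, in priority order (NOT GPT's real one).
MERGES = [("t", "h"), ("th", "e"), ("i", "n"), ("in", "g"), ("r", "h"),
          ("y", "m"), ("rh", "ym"), ("t", "i"), ("ti", "m"), ("tim", "ing")]

def bpe(word):
    """Standard BPE loop: repeatedly apply the highest-priority merge
    present in the current segmentation until none applies."""
    parts = list(word)
    while True:
        candidates = [(MERGES.index(pair), i)
                      for i, pair in enumerate(zip(parts, parts[1:]))
                      if pair in MERGES]
        if not candidates:
            return parts
        _, i = min(candidates)  # highest priority = lowest index in MERGES
        parts = parts[:i] + [parts[i] + parts[i + 1]] + parts[i + 2:]

print(bpe("timing"))   # ['timing']        -- fused into one opaque token
print(bpe("rhyming"))  # ['rhym', 'ing']   -- the same suffix surfaces here
```

Under this (invented) merge table, 'timing' collapses into a single token while 'rhyming' keeps a visible 'ing' piece: the model sees two unrelated token IDs, with no sub-token view from which to recover that the words end in the same sound.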

((Other tools using a similar tokenizer have similar constraints.))