Culture War Roundup for the week of February 17, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Do you mean fluid intelligence?

Fluid intelligence is "figuring out a new unfamiliar problem", crystallized intelligence is "accumulating enough learned knowledge that you can apply some of it straightforwardly". IMHO the latter is what LLMs are already really good at, the former is where they're still shaky. I can ask qualitative questions of AIs about my field and get answers that I'd be happy to see from a young grad student, but if I ask questions that require more precise answers and/or symbol manipulation they still tend to drop or omit terms while confidently stating that they've done no such thing. That confidence is why I'd never use a yes/no question as a test; even if one gets it right I'd want to see a proof or at least a chain of reasoning to be sure it didn't get it right by accident.

I noticed this kind of thing recently too, while trying to use ChatGPT to learn about something I had little clue about: how much the positioning of the laces on the football matters during a field goal or extra point attempt in American football. I'd heard that the holder needs to put the laces on the outside, i.e. facing away from the kicker and towards the goal. It made intuitive sense to me that having the laces on the surface where the kicker's shoe strikes the ball would add more randomness than is desired, but I wondered whether laces facing to the side would also affect the kick, particularly the aerodynamics of how the ball curves in flight. No matter how many iterations of questioning I tried, any request to analyze how side-facing laces might hurt accuracy would either get variations of the same tip about the laces needing to face away from the kicker, or get "confused" into an analysis of the football itself being held sideways on the ground (obviously no good for field goals or any attempt to kick the ball high). It seemed that the common knowledge among football fans about the laces facing away was about all the model had been trained on for this particularly niche topic.

I didn't use o1 with chain of thought, though, so maybe I'd get more info if I did, either with that or DeepSeek.

You're right, I'll edit that.

That confidence is why I'd never use a yes/no question as a test; even if one gets it right I'd want to see a proof or at least a chain of reasoning to be sure it didn't get it right by accident.

I understand that mathematicians develop and use jargon for eminently sensible reasons, but it does make things difficult for outsiders. For example, I just screwed up while evaluating LLMs on a maths problem that I thought I remembered the answer to. When my cousin with the actual maths PhD walked me through the explanation, it made perfect intuitive sense, but I'll be damned if I go through a topology 101 lesson to try and grok the reasoning involved.

In medicine and psychiatry, I think current LLMs answer most queries >99% correctly. Quite often what I perceive as an error turns out to be a misunderstanding on my part. I'm not a senior enough psychiatrist to bust out the real monster topics, but I'd still expect them to do a good job. Factual knowledge is more than half the battle.