Culture War Roundup for the week of June 10, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I mean yes, but I’d consider a 10-15 IQ test differential to be fairly minor, it’s one σ at best. It’s there, but unless you’re doing high level stuff, I don’t think most people would be able to tell the difference at a glance between IQ 115 and IQ 100. At lower IQ it makes a difference sure, but for average IQ levels it’s not that much.

I don’t think most people would be able to tell the difference at a glance between IQ 115 and IQ 100. At lower IQ it makes a difference sure, but for average IQ levels it’s not that much.

How about 100 versus 85, which as far as I know is approximately the gap actually in question?

AFAIK as IQ is deliberately intended to be normalized, the gap is exactly the same 85 to 100 as 100 to 115, and if you think that those two aren't the same, you are also inherently saying that IQ is the wrong tool for the job. That's not even getting into the whole "what benchmark do we set 100 at", do we update it year to year or try to peg it to some historical benchmark (though this is not necessarily fatal to IQ as a metric in the same way the first is, it does present a question that must be addressed when using IQ).

AFAIK as IQ is deliberately intended to be normalized, the gap is exactly the same 85 to 100 as 100 to 115, and if you think that those two aren't the same, you are also inherently saying that IQ is the wrong tool for the job.

I'm not sure what you're even trying to get at here. A fifteen point gap at the higher end of the distribution isn't likely to be noticeable in everyday casual interaction, but at the lower end it can be the difference between someone who can at least tell you their own name and a total potato.

That's not even getting into the whole "what benchmark do we set 100 at", do we update it year to year or try to peg it to some historical benchmark (though this is not necessarily fatal to IQ as a metric in the same way the first is, it does present a question that must be addressed when using IQ).

The people who want to pooh-pooh the utility of IQ as a metric have had the reins of society for generations now, and their attempts at racial uplift have been a humiliating failure.

Simply using IQ necessitates that you grapple with these things. That's the nature of using numbers to describe something human. You, the invoker of IQ, need to prove the numbers work as numbers and aren't better being left as philosophical concepts or practical examples, or point to something well established that does. The simple, self-evident fact that IQ is fit to a normal curve and you yourself don't seem to believe that a symmetric 15 point gap is equal across the domain is in and of itself a tacit admission that IQ is the wrong tool. Are you familiar with the statistical notions of how an assigned number scale can be nominal, or ordinal, or interval, or ratio? It's not a perfect paradigm by any means, but it's one you must grapple with at least on some level, and happens to be incredibly relevant here in this case. See also OP's initial claim that the distribution has a weird asymmetric tail, also evidence (though more mild) against using IQ as the correct tool. Similarly, the fact that you dodge the 100-center question, which is a fundamentally important question to the use of IQ, is not acceptable.

I mean, I get the whole "all models are wrong, but some are useful" thing, but these are just the very basics, the fundamentals; they are not nitpicks. An example of something that at least attempts to address these issues, and mostly succeeds, is the Likert scale. You might be familiar with it. It's the classic 5- or 7-point scale in response to a question, with "strongly agree", "slightly agree", "not sure", and "disagree" options. There's a natural zero, and psychologists at least attempt to say that the distance between each point is "equal". I know forced normalization distorts this equal-distance formulation slightly, in terms of the math, but two properties that persist across the transformation are the aforementioned symmetry of responses and the center point of responses. These two decisions are non-negotiable and mandatory to make, and cannot be hand-waved away. They are inherent to the math and the use of a numerical model.
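To make the scale-type vocabulary concrete, here's a toy Python illustration (the data are invented) of what an ordinal scale licenses you to do versus an interval one:

```python
# Toy illustration of measurement levels (all examples invented):
# - ordinal data licenses comparisons and ranks, but not meaningful gaps;
# - interval data additionally makes differences between values meaningful.
finish_order = {"alice": 1, "bob": 2, "carol": 3}   # ordinal: ranks only
temps_c      = {"mon": 10, "tue": 20, "wed": 30}    # interval: equal degrees

# Ordinal: we can say alice beat bob...
assert finish_order["alice"] < finish_order["bob"]
# ...but "bob finished exactly midway between alice and carol" is NOT
# implied: 1st and 2nd may be separated by a hair, 2nd and 3rd by an hour.

# Interval: the 10-degree gap mon->tue really is the same as tue->wed.
assert temps_c["tue"] - temps_c["mon"] == temps_c["wed"] - temps_c["tue"]
```

The dispute upthread is essentially whether IQ points behave like the ranks or like the degrees.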

The simple, self-evident fact that IQ is fit to a normal curve and you yourself don't seem to believe that a symmetric 15 point gap is equal across the domain is in and of itself a tacit admission that IQ is the wrong tool

Imagine two groups of children go to a themepark and want to hop onto a rollercoaster, which has a minimum height requirement of 105cm. One group of children has heights that range from 110cm to 150cm, and the other group has heights that range from 70cm to 110cm. The symmetric 40cm gap would not be equal across the domain - is this a tacit admission that height is the wrong measuring tool for the job? A simple 15cm boost would have a different effect on each group's ability to ride the rollercoaster, so how can you say that 1cm is equal to 1cm between the two groups?

This is why a statistics background is helpful. The core idea behind creating an IQ score is this: raw test scores are forcefully molded into something that looks like a bell curve. That's IQ generation. That's the analogy here of "slapping a number on it". To be clear, anyone can take any dataset ever and make it look like a bell curve by following this same methodology.

Careful! The thing being measured is by no means required to have a normal distribution itself! Height is actually a bad example here, because we know that, due to whatever mechanisms go into a person's adult height, human population heights are approximately normal (I say approximately because technically the normal has an infinite domain, and in practice we never see humans beyond a certain minimum and maximum height, but the core of the distribution is without a doubt very normal; tails beyond a certain number of standard deviations are not often considered, and are generally not here either for our purposes). In the height example, a 1 cm difference in raw height corresponds to the same 1 cm difference after normalization. In IQ, however, a 1-point difference in the raw test data could translate to a 0.5-point or a 2-point IQ difference depending on where your particular score was forced into the normal distribution, i.e. how far up or down you are, and vice versa. As I said, it's not symmetric.
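That forcing step is easy to sketch. Here's a minimal Python illustration (the raw scores are invented) of rank-based normalization: equal gaps in raw score translate into unequal gaps in "IQ" depending on where in the distribution they fall.

```python
# Hypothetical sketch: "forcing" raw scores onto a bell curve by rank,
# then rescaling to mean 100, SD 15. The raw scores are invented and
# deliberately skewed (most people cluster near the top of the test).
from statistics import NormalDist

raw_scores = [10, 30, 44, 50, 53, 55, 56, 57, 58, 59]

n = len(raw_scores)
iq = {}
for rank, score in enumerate(sorted(raw_scores)):
    percentile = (rank + 0.5) / n         # midpoint percentile for this rank
    z = NormalDist().inv_cdf(percentile)  # force onto a standard normal
    iq[score] = 100 + 15 * z              # rescale to the IQ convention

# A 1-point raw gap at the crowded top of the scale (58 vs 59) becomes a
# bigger "IQ" gap than a 14-point raw gap in the sparse middle (30 vs 44):
print(iq[59] - iq[58])
print(iq[44] - iq[30])
```

Nothing about the raw data was normal; the bell shape is entirely an artifact of the procedure, which is the point being made above.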

Choosing height as your example is actually perfect, then, because it demonstrates that you indeed do not understand IQ on a fundamental level.

So IQ is necessarily ordinal, in the sense that the mechanism is ranking and stacking up test-takers relative to some benchmark. But it is never interval (implying equal measures), in the sense that neither the core raw test scores nor IQ are necessarily going to fairly represent what we actually want them to: intelligence. That misunderstanding is betrayed in everything upthread. It gets worse, too, when you consider that most IQ test designers have deliberately tweaked the test design to make the results fit their nice bell curve more easily. You can see how this leads to circular logic and reasoning. This is a MATH ISSUE. You cannot hand-wave it away!

For more, please see for example a technical discussion here, with an excerpt:

It may sound weird to define IQ so that it fits an arbitrary distribution, but that's because IQ is not what most people think it is. It's not a measurement of intelligence, it's just an indication of how someone's intelligence ranks among a group:

The I.Q. is essentially a rank; there are no true "units" of intellectual ability.

[Mussen, Paul Henry (1973). Psychology: An Introduction. Lexington (MA): Heath. p. 363. ISBN 978-0-669-61382-7.]

In the jargon of psychological measurement theory, IQ is an ordinal scale, where we are simply rank-ordering people. (...) It is not even appropriate to claim that the 10-point difference between IQ scores of 110 and 100 is the same as the 10-point difference between IQs of 160 and 150.

[Mackintosh, N. J. (1998). IQ and Human Intelligence. Oxford: Oxford University Press. pp. 30–31. ISBN 978-0-19-852367-3.]

When we come to quantities like IQ or g, as we are presently able to measure them, we shall see later that we have an even lower level of measurement—an ordinal level. This means that the numbers we assign to individuals can only be used to rank them—the number tells us where the individual comes in the rank order and nothing else.

That's a lot of words, but I think I may not have been sufficiently clear when making my point, because this reply doesn't get at the core of my objection, which doesn't have anything to do with the distribution of the score. I'm aware that height doesn't follow the same distribution curves as IQ; the point of using height was to make the concept of a threshold or minimum requirement more obvious.

AFAIK as IQ is deliberately intended to be normalized, the gap is exactly the same 85 to 100 as 100 to 115, and if you think that those two aren't the same, you are also inherently saying that IQ is the wrong tool for the job.

The point being made by your interlocutor is not that the 15 points of IQ between 100 and 115 matter more than the 15 between 85 and 100, but that it can be harder to tell the difference externally. To wit:

A fifteen point gap at the higher end of the distribution isn't likely to be noticeable in everyday casual interaction, but at the lower end it can be the difference between someone who can at least tell you their own name and a total potato.

A single casual conversation can give you enough information to determine whether or not someone meets a certain low threshold for IQ, in much the same way that a sign in front of a rollercoaster with a single black line at the height threshold can tell you if someone meets the minimum requirement or not. The fact that the single threshold of the sign gives you a bit of information about the height of people who compare themselves to it but is unable to give you more information about the distribution of heights among people who pass does not mean that a centimetre is measured differently above or below the threshold.

I guess we're in pretty deep, but the original framing was this: a user said any intelligence differences were minor and also hard to tell apart; another user posted a study claiming basically that we actually could tell; and the OG user said yes, that's still a minor difference, I can't tell 100 vs 115 apart. A third user chimed in and said OK, 85 to 100 is the actual difference, which implies in context that this 15-point gap is not minor and/or is obvious in casual conversation. At that point I responded about how 115 to 100 != 100 to 85 and that IQ is a stupid measure, which spawned a few subthreads.

Perhaps I conflated a few users or responded to the wrong argument, in retrospect (at that juncture), which maybe contributed to some unwieldy downthread organization. I apologize if my digression over the last two replies wasn't strictly relevant, but it does bear mentioning with respect to my overall point. The whole conversation, I was trying to point out, is a stupid one. IQ has poisoned the debate. People want IQ to represent intelligence, but it doesn't do so in a very rigorous way, and asymmetry is one of the ways it fails. There are numerous methodological problems as well as fundamental interpretability problems. In a strict sense, sure, we were originally only comparing 100 to 85 and trying to consider it in context. That's kind of fine. The original Wikipedia article (about the adoption study) does mention grappling with some, but not all, of these problems, and only methodologically. And actually, if you look carefully at the article, the parents were selected precisely because of having an IQ of 115 or higher. Maybe it's not a perfect point, but I want to point out that these misconceptions clearly pop up in many places, including study design! And this study design did, in fact, have a number of significant issues, despite on its face being a perfect poster-child experiment, for example as discussed here.

Inasmuch as we're more narrowly talking about casual conversation, sure, maybe the "test" of "can casual conversation detect someone of IQ 85 or below" is perfectly valid and usable as a test (i.e. detecting whether someone sits somewhere within the binned bottom 16-ish percent of IQ scores, on that particular test), though maybe I've misconstructed the test. Is it instead "can you detect the difference between a 100 and an 85 IQ", or "can you distinguish an 80-90 IQ person from a 110-120 IQ person in casual conversation", or "can you identify someone's IQ within 15 points in casual conversation, provided they are not over 100", or any of many, many other variations? Some might sound similar, but from a math perspective they can have drastically different actual implications, and even more problems crop up the second we try to equate specific IQ test scores with intelligence directly.

You, the invoker of IQ, need to prove the numbers work as numbers and aren't better being left as philosophical concepts or practical examples, or point to something well established that does.

Funny, I don't think I do. What I think is that we have a testable metric which at the population level correlates reliably and meaningfully with both racial groupings and life outcomes, that this fact has substantial explanatory power, and that these correlations will persist regardless of what you think of that metric.

What I think is that fifty or sixty years of failed racial uplift will in due time turn into seventy, and then eighty, and then ninety, a hundred, so on and so forth indefinitely, barring the apocalypse or some sort of massive sci-fi technological intervention. I think that when you and I are both dead from old age, the exact same group will still be hopelessly behind, and someone somewhere will still be doing this exact same tap dance.

They'll still be just as eager to repeat how racial categories can have fuzzy boundaries, still ever so philosophically uncertain of the value of IQ as a metric. But that uncertainty will never allow them to raise a proportionately equal share of brain surgeons, or what have you, from the population at the bottom of the ranking.

Funny, yes you do. That's literally how using statistics and math to evaluate social problems works. You can't just slap a number system onto something and call it good. That's Stats 101. Even some more advanced numerical analysis can sometimes reach the literal opposite conclusion if done or designed incorrectly. On that same note, explanatory power is also insufficient if your categories are fundamentally flawed. Why? Because what you want to do with your model matters. Even whether you are trying to be predictive versus merely descriptive of the past changes some potentially vital assumptions. Mathematically, it's just wild to vigorously defend arbitrary categories with demonstrated flawed mechanisms and so-so generalizability just because they happen to kind of work as an explanation. That's not rigor, it's agenda, quite frankly. And we are talking about IQ as a tool, and you defend it based on... some non sequitur about how the people in charge are ignoring it or something?

You can hand-wave away the fuzzy boundary problem all you want in an observational sense, but it actually matters a lot (this is underselling it; it's literally foundational) IF you want to use IQ as a tool for making actual prescriptive, "do this and not this" kinds of arguments based on what it tells you. Such as, exhibit A: do we continue, change, intensify, stop, etc. "racial uplift" efforts?

I additionally think, as a factual matter, that claiming all efforts to help Black people in the last 50-60 years have failed is a pretty wild and weakly supported take. As far as I know, the proportional wealth gap, for example, stalled out more in the 1980s or so, which would make it 45 years, but your language seems to imply something longer-standing. Perhaps a more useful question for you, then, would be: at what point historically do you think things presumably got 'fair enough' that you can say "well, we tried and failed, so it must be their fault, not ours"? I assume that is your actual argument, yes? That we as a society tried and failed at "racial uplift", and there's no use throwing good money after bad, that kind of thing?

My grandpa told me a story last year about how, while he was growing up, his dad decided to quit being a realtor because he was so mad at the realtors' association refusing to let him sell a nice, well-off Asian couple the house they wanted, because of explicit redlining. He was willing to go to bat for them and fight it, and they just gave up. Redlining, for example, only became technically illegal in 1968 and sure as hell didn't magically stop overnight. You know, the main way Americans build generational wealth. Sound familiar? Don't get me wrong, there's surely a limit to assistance, and personally I favor a more, well, not entirely race-blind approach, but certainly a more targeted approach that mainly focuses on wealth as it is rather than other groupings.

I favor a more, well, not entirely race-blind approach, but certainly a more targeted approach that mainly focuses on wealth as it is rather than other groupings.

Have you ever tried looking up achievement scores controlled for both race and income? The results are pretty gruesome across the board. Even if you completely leave aside all but the wealthiest blacks, you're left with a group that can compete with only the very poorest whites, while being crushed outright by even the most poverty-stricken Asians.

From an HBD point of view this is all totally unsurprising, but the other side is perpetually left scrambling for some kind of bullshit mini structural micro systemic racism that makes the family of a wealthy black doctor worse off than that of the Asian immigrant who scrubs their shitter for them. It's horseshit. It's a bad joke, and I don't need to fuck around in the weeds over statistical methodology to call it a joke.

I don't consider a Google Images search to be a well-sourced claim, and even beyond that, averages don't capture the whole picture. Which you might know, if you didn't have a complete disdain for statistical methodology. This isn't the weeds, it's the absolute bedrock basics, which I keep trying to tell you. Do you have an actual source? Do you recognize that your stated claims above don't even match the data you yourself hint at?


You can't just slap a number system onto something and call it good. That's Stats 101. Even some more advanced numerical analysis can sometimes reach the literal opposite conclusion if done or designed incorrectly. On that same note, explanatory power is also insufficient if your categories are fundamentally flawed.

Not that I disagree, just want to make sure we're on the same page: the way I see it you just doused every single social science with gasoline, and are standing above them with a lit match in your hand. I don't mind getting rid of the HBD hypothesis this way, but I want to make sure that when the whole thing is done, you're not going to try and tell me that that little pile of ashes where "implicit bias" used to stand, is still a valid structure.

Actually a great, great question. In one sense, you are right that the social sciences do in fact have some potential, deep-seated issues. Education is particularly thorny among them. As fields, they have figured out some methods to cope, which, sadly, not all who undertake research fully understand, though the best do. The basic and relevant point here is that for a "measure" to be accepted in the social sciences, there are a few helpful traits we might look for. You might find the Wikipedia page on psychological statistics and its cross-references, including test validity, interesting.

To steal a reddit ELI5:

So, is [IQ] accurate? Depends on what you mean by "accurate." Two concepts researchers often talk about are reliability and validity. A test is reliable if it gives consistent results in similar situations. For instance, I should be able to take comparable IQ tests in January, then again in July, and not get a very different score; likewise, if twin brothers of similar intelligence take the test, they should score about the same. IQ tests tend to be pretty reliable; your score on one test shouldn't vary by more than a few points over time. Different tests do sometimes give fairly different results for individuals, though.

Validity, on the other hand, refers to whether the test really measures what it claims to. This is where things get really controversial. IQ definitely correlates with some of the things you'd expect intelligence to correlate with: positively with performance in school/work/income, and negatively with crime. Still, that's not the same as proving it measures intelligence. The only thing you can be pretty sure about is that IQ tests do a good job at measuring IQ.
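To make the reliability half of that distinction concrete, here's a toy sketch (all scores invented) of test-retest correlation, the usual quick check that a test gives consistent results:

```python
# Toy sketch of test-retest reliability: the same ten people take
# comparable tests in January and July. All scores are invented.
january = [92, 105, 110, 88, 120, 99, 101, 115, 97, 108]
july    = [95, 103, 112, 90, 118, 100, 99, 117, 95, 110]

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A correlation near 1 is evidence of reliability. Note it says nothing
# by itself about validity (whether the test measures intelligence).
print(round(pearson(january, july), 3))
```

A test can score perfectly on this check and still fail validity, which is exactly the gap my argument lives in.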

Most of my arguments in this thread have to do with some variation of validity. In other words, people are taking a tool and using it for something it was not designed to do, and is in fact incompatible with. Unlike physical reality, where you could use a hoe instead of a shovel to dig a hole and it would just be annoying and take more time, in the math realm, when you try to make a tool do something it fundamentally cannot, the risks are higher, and the chances of things silently not working as expected are also higher. You might, for example, use a shovel that only digs up dirt with the intention of creating a hole, but in fact leave clay and gravel behind and the job only half-done, because it turns out the shovel literally only picked up dirt. I hope that's an okay analogy; there's probably a better one. Researchers grapple with these questions all the time (hopefully literally all of the time, but you know how modern science has its flaws too). To give a simple example, your bread-and-butter linear regression assumes, as a core assumption, that you've selected all the variables that matter. If you left something out, you get, well, not "bad" results exactly, but ones that will possibly not help you accomplish what you want, and at worst might send you in the wrong direction entirely. See multicollinearity, confounders, and test design in multiple linear regression for more resources there.
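That omitted-variable point can be sketched in a few lines (invented data; numpy's least-squares solver standing in for your regression package of choice):

```python
# Minimal sketch of omitted-variable bias: leaving a relevant, correlated
# variable out of a linear regression doesn't crash anything; it quietly
# gives you the wrong coefficient. All data below is simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x1 = rng.normal(size=n)
x2 = x1 + 0.5 * rng.normal(size=n)         # x2 is correlated with x1
y = 2 * x1 + 3 * x2 + rng.normal(size=n)   # true model uses both

# Correctly specified model: recovers coefficients near the true (2, 3).
X_full = np.column_stack([x1, x2])
b_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

# Misspecified model: omit x2 and regress y on x1 alone. The x1
# coefficient absorbs x2's effect and lands far from the true value of 2.
b_omit, *_ = np.linalg.lstsq(x1.reshape(-1, 1), y, rcond=None)

print(b_full)   # roughly [2, 3]
print(b_omit)   # roughly [5]
```

Both fits "work" in the sense of producing numbers; only the design decides whether those numbers mean what you think they mean.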

That's not to say the social sciences are the worst sciences and are all doomed. If you read many actual papers, it's not uncommon for a paper to specifically examine these arguments in detail. That's why I bring up the Likert scale, which I still have a handful of issues with, but which is relatively common and well-established, and for which advances and self-questioning continue. See, for example, this paper. IQ, by contrast, has a much more difficult history and more variation in both implementation and interpretation, along with a tendency toward misunderstandings. In my personal, subjective opinion, the actual worst field of science is much of food research, but that's a thread for another day.

All the above basically concerns the fundamental misunderstandings about IQ. As evidenced by the many downvotes my posts have accumulated, such fundamental misunderstandings sadly continue. "Why don't I notice an 85-to-100 intelligence difference the same as a 100-to-115 one?" as a question indicates that the question is faulty in at least one way, not that the asker has somehow discovered some deep principle of life. As to my allusion to fuzzy categories being similarly relevant and something we can't ignore, that's more of a fundamental psychometric and math question, along the lines of "how to lie with statistics". My points there were not questioned, so I feel no need to go into detail.


You, the invoker of IQ, need to prove the numbers work as numbers and aren't better being left as philosophical concepts or practical examples, or point to something well established that does.

Lewis Terman (of Stanford-Binet fame) and his many successors (and predecessors) have done that. IQ isn't some crank idea.

The simple, self-evident fact that IQ is fit to a normal curve and you yourself don't seem to believe that a symmetric 15 point gap is equal across the domain is in and of itself a tacit admission that IQ is the wrong tool.

The reason for this belief was explained; you've ignored the explanation. If you're just, say, having a casual conversation with someone, you may well not be able to tell someone at 100 from someone at 115, but could tell someone at 100 from someone at 85. This does not at all mean there's something unequal about IQ. It just means such casual conversation is not particularly intellectually taxing. The symmetry you claim must exist is not, in fact, a requirement.

As for "nominal, ordinal, interval, ratio", the Wikipedia page on that actually cites IQ as an example of an ordinal measurement, though I am not sure Terman would agree.

Right but it feels like you're assuming that somehow white people have no gains to be made. I think that assumption would fail on the same grounds you would fail those who presume that blacks have no gains to be made.

I'd argue that in the developed world, nobody has any gains to be made. We've removed lead from gasoline, and famine and malnutrition are distant memories. In terms of IQ, we've picked all the low-hanging fruit. If there were a way to actually increase a child's IQ beyond avoiding stressors like malnutrition or poisoning, the tiger mothers and the educational establishment would have found it by now.

I don't disagree. I just think it's easier to argue the point that there is no stated upper limit given by folks who argue what MaiqTheTrue argues, since their position, in my experience of arguing against similar ones, is ultimately not based on objective thinking or anything related to the real world, but rather on moral preference.

When you push motivated egalitarians far enough they will simply resort to impossible to prove theories and assumptions, be that prenatal environment, systemic racism or whatever else. It's much quicker to simply ask them why they expect all of their confounding factors that can never be tested to only be able to affect black people. It helps highlight how the proposition that we could possibly increase IQ doesn't do much for equality.