This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Funny, yes you do. That's literally how using statistics and math to evaluate social problems works. You can't just slap a number system onto something and call it good. That's Stats 101. Even some more advanced numerical analysis can reach the literal opposite conclusion if designed or executed incorrectly. On that same note, explanatory power is also insufficient if your categories are fundamentally flawed. Why? Because what you want to do with your model matters. Even whether you are trying to be predictive versus just descriptive of the past changes some potentially vital assumptions. It's, mathematically, just wild to vigorously defend arbitrary categories that have demonstrated flawed mechanisms and so-so generalizability just because they happen to kind of work as an explanation. That's not rigor, it's agenda, quite frankly. And we are talking about IQ as a tool, and you defend it based on... some non sequitur about how the people in charge are ignoring it or something?
You can hand-wave away the fuzzy boundary problem all you want in an observational sense, but it actually matters a lot (and that's underselling it; it's literally foundational) IF you want to use IQ as a tool for making actual prescriptive, "do this and not this" kinds of arguments based on what it tells you. Such as, exhibit A: do we continue, change, intensify, or stop "racial uplift" efforts?
I additionally think, as a factual matter, that claiming all efforts to help Black people in the last 50-60 years have failed is a pretty wild and weakly supported take. As far as I know, the proportional wealth gap, for example, stalled out around the 1980s, so roughly 45 years, but your language seems to imply something more longstanding. Perhaps a more useful question for you, then, would be: at what point historically do you think things got 'fair enough' that you can say "well, we tried and failed, so it must be their fault, not ours"? I assume that is your actual argument, yes? That we as a society tried and failed at "racial uplift" and there's no use throwing good money after bad, that kind of thing?
My grandpa told me a story last year about how, while he was growing up, his dad decided to quit being a realtor because he was so mad at the realtors' association refusing to let him sell a nice, well-off Asian couple the house they wanted, because of explicit redlining. He was willing to go to bat for them and fight it, and they just gave up. Redlining, for example, only became technically illegal in 1968, and it sure as hell didn't magically stop overnight. Home ownership, you know, the main way Americans build generational wealth. Sound familiar? Don't get me wrong, there's surely a limit to assistance, and personally I favor a, well, not entirely race-blind approach, but certainly a more targeted one that mainly focuses on wealth as it is rather than other groupings.
Have you ever tried looking up achievement scores controlled for both race and income? The results are pretty gruesome across the board. Even if you completely leave aside all but the wealthiest blacks, you're left with a group that can compete with only the very poorest whites, while being crushed outright by even the most poverty-stricken Asians.
From an HBD point of view this is all totally unsurprising, but the other side is perpetually left scrambling for some kind of bullshit mini structural micro systemic racism that makes the family of a wealthy black doctor worse off than that of the Asian immigrant who scrubs their shitter for them. It's horseshit. It's a bad joke, and I don't need to fuck around in the weeds over statistical methodology to call it a joke.
I don't consider a Google Images search to be a well-sourced claim, and even beyond that, averages don't capture the whole picture. Which you might know, if you didn't have a complete disdain for statistical methodology. This isn't the weeds, it's the absolute bedrock basics, which I keep trying to tell you. Do you have an actual source? Do you recognize that your stated claims above don't even match the data you yourself hint at?
I don't consider vague handwaving in the direction of poverty to be a serious argument in the first place. If there were any meaningful population-level cognitive metric where the racial gap could be plausibly reduced to income, or any other factor able to be discussed in polite company, the media would make sure absolutely everyone knew about it and this place would have fifty threads per month on the topic.
If you do happen to be sitting on such game-changing data then feel free to make a top-level post on the subject in this week's thread, as I'm sure everyone will be very much interested, but personally I'm done here.
If not, then don't feel glum. The people who agree with you are the ones actually making policy on the subject, and that's likely to remain the case indefinitely. You'll get to spend the rest of your life talking about how the gap can theoretically be closed.
Not that I disagree, just want to make sure we're on the same page: the way I see it, you just doused every single social science with gasoline, and are standing above them with a lit match in your hand. I don't mind getting rid of the HBD hypothesis this way, but I want to make sure that when the whole thing is done, you're not going to try and tell me that that little pile of ashes where "implicit bias" used to stand is still a valid structure.
Actually a great, great question. In one sense, you are right that the social sciences do in fact have some potential, deep-seated issues. Education is particularly thorny among them. As a field, they have figured out some methods to cope, which, I will say, sadly not all who undertake research fully understand, though the best do. The basic and relevant point here is that for a "measure" to be accepted in the social sciences, there are a few helpful traits we might look for. You might find the Wikipedia page on psychological statistics and its cross-references, including test validity, interesting.
To steal a reddit ELI5:
Most of my arguments here in this thread have to do with some variation of validity. In other words, people are taking a tool and using it for something that it is not designed to do, and is in fact incompatible with that use. Unlike physical reality, where you could use a hoe instead of a shovel to dig a hole and it would just be annoying and take more time, in the math realm when you try to make a tool do something it fundamentally cannot do, the risks are higher and so is the chance of things not working as expected. You might, for example, use a shovel that only digs up dirt with the intention of creating a hole, but in fact it leaves clay and gravel behind and the job only half-done, because it turns out the shovel literally only picked up dirt. I hope that's an okay analogy; there's probably a better one. Researchers themselves grapple with these questions all the time (hopefully literally all of the time, but you know how modern science has its flaws too). To give a simple example, your bread-and-butter linear regression assumes, as a core assumption, that you've selected all the variables that matter. If you left something out, you get, well, not "bad" results exactly, but ones that may not help you accomplish what you want to accomplish, and at worst might send you in the wrong direction entirely; a quick sketch of that failure mode is below. See: multicollinearity, confounders, and test design in multiple linear regression for more resources there.
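To make the omitted-variable point concrete, here's a minimal sketch (all numbers invented for illustration): when a relevant variable correlated with the one you kept is left out, the surviving coefficient doesn't just get noisier, it gets systematically biased.

```python
# A minimal sketch of omitted-variable bias: leaving out a relevant,
# correlated variable distorts the coefficient you do estimate.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)   # x2 is correlated with x1
y = 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)    # true effect of x1 is 2.0

# Full model: recovers the true coefficients.
X_full = np.column_stack([np.ones(n), x1, x2])
beta_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

# Misspecified model: x2 omitted, so x1 "absorbs" part of its effect.
X_bad = np.column_stack([np.ones(n), x1])
beta_bad, *_ = np.linalg.lstsq(X_bad, y, rcond=None)

print(f"x1 coefficient, full model: {beta_full[1]:.2f}")   # ~2.0
print(f"x1 coefficient, x2 omitted: {beta_bad[1]:.2f}")    # ~4.4, badly biased
```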
That's not to say the social sciences are the worst sciences and are all doomed. If you read many actual papers, it's not uncommon for a paper to specifically examine these arguments in detail. That's why I bring up the Likert scale, which I still have a handful of issues with, but which is relatively common and well-established, and for which advances and self-questioning continue. See, for example, this paper. IQ, by contrast, has a much more difficult history and more variation in both implementation and interpretation, along with a tendency toward misunderstandings. In my personal, subjective opinion, the actual worst field of science is much of food research, but that's a thread for another day.
All the above basically concerns the fundamental misunderstandings about IQ. As evidenced by the many downvotes my posts have accumulated, sadly such fundamental misunderstandings continue. "Why don't I notice an 85-to-100 intelligence difference the same as a 100-to-115 one?" as a question indicates that the question itself is faulty in at least one way, not that the asker has somehow discovered some deep principle of life. As for my allusion to fuzzy categories being similarly relevant and something we can't ignore, that's more of a fundamental psychometric and math question, along the lines of "how to lie with statistics". My points there were not questioned, so I feel no need to go into detail.
But that's just the thing: I'm pretty sure they are, if you insist on a standard high enough to reject IQ.
Economics is notorious for sacrificing resemblance to reality for the ability to model something mathematically. Accuracy is their only justification for doing so, except you're not supposed to look at it too closely either, because their accuracy sucks. When you point that out their only comeback is "well, it's the best we have".
Psychology isn't as obsessed with mathematical modeling, but they work with variables that are neither easily quantifiable, nor directly measurable. Yes, as you point out they came up with some ways to cope with that, but that doesn't change the fact that it's cope. The fact that there's an objectively right answer to an IQ test will always give you a leg up over "agree / disagree" type tests.
And don't get me started on sociology; somehow it manages to combine the worst of both worlds. How can you tell me with a straight face that I should take its arguments about crime, poverty, education, etc. seriously, when you want me to reject one of the strongest effects it has discovered based on an esoteric argument about epistemology? These dudes made an entire test supposedly showing you're a secret racist, and wrote a ton of papers around it, and are now trying to say "IQ tests only measure performance on IQ tests"? That's absurd.
I meant it when I painted that visual of you holding the match. I'm fine with you dropping it and setting the whole thing ablaze, but you're doing exactly what I was expecting: pretending that the far less rigorous parts of the social sciences will be the ones that survive this.
Isn't that just a Russell conjugation? I am advancing as self-questioning continues; you have a difficult history and variation in implementation and interpretation.
I beg your pardon - what?!
You're going to tell me that the only thing we can be sure IQ tests measure is IQ, but down votes measure misunderstanding of IQ?
I'm fine throwing a match on a lot of sociology research, yeah. A lot of it does suck. I don't really appreciate being lumped in with the "secret racist" researcher dude as if I am the same person, and I am certainly not defending the entire body of left-wing academic "research" that goes on. Economics is a bit trickier and sort of its own thing I wouldn't quite lump in with other disciplines -- the things observed are numbers, but the web of causality (so to speak) of the real economy is hopelessly entangled and complicated. I don't know if I have enough knowledge and/or exposure to form an opinion on economics, to be honest. And I'm comfortable saying I don't know. My gut at least is inclined to agree with you, though, in that the web is just too complicated for the small nuggets of mathematical mechanisms we get from economics to be very useful, though AFAIK not all economic research activity is oriented along those lines.
I bring up psychology because we are talking about psychology, and it's one that I do know a little more about. Saying the whole field is all based on cope is unfair. There do exist good and meaningful ways to perform psychometrics and apply them to life. And some of them do require some assumptions. And we just need to keep those in mind, at the very least as background knowledge. That's fine! The field as a whole still has some challenges, but for example the replication crisis did lead to some reform in various ways.
"There's an objectively right answer to an IQ test" doesn't even make sense as a sentence, so I'm not sure what to do with that. Fundamentally, a number either represents something, or we claim it does, and it's important to both distinguish these and confront them directly. "Raw IQ" test scores are literally how many questions you got right on a test, though sometimes they have a time component too. Those are direct representations. Generated IQ scores (did you read the linked StackExchange post about how they are created?) directly and fundamentally (zero assumptions) tell you how you rank on the test you took compared to, theoretically, others. Who exactly "others" are is wildly dependent on the exact test used and its methodology. A poorly constructed IQ score might, in theory, not even be comparing you to real people. An IQ score also has some weird math properties. That's one thing I'm trying to hammer home here. A final IQ score is a number with limited utility. The best analogy I have: it's kind of like if you were to take a number, and create a new number from the first, only this new number you are allowed to add/subtract but are not allowed to multiply/divide. It's unintuitive to a lot of people, and they try to "multiply" the number even when they really can't.
If you interpret IQ as intelligence, you've made a massive logical leap. Now, to be fair, everyone who is honest, including me, will say that it certainly IS related to what we consider "intelligence", maybe even pretty closely. But you literally cannot escape this exact "cope", as you call it, when you make an equivalency jump. Most people ignore it. But ignoring the issues doesn't mean they don't exist. Take, for example, the user above (not you, I know) who literally quotes my point about the normalization of 100 being highly, highly variable and necessary to define, and, as a direct reply, says something to the effect of "oh, people pooh-pooh IQ all the time but it's actually great", which betrays a pervasive attitude about IQ that is not grounded in reality and is not intellectually honest. It's like someone saying "I gave that movie a 4". I might say: oh, really, is that good or bad? What kind of scale are you using? Because some people use a 4-star system, some use 5 stars, and some rate out of 10. And you know what? It IS important to know. Reacting all hyper-defensively to a perfectly factual, relevant point is, in my opinion, indicative of an unwillingness to confront these fundamental issues.
IQ's history is objectively troubled. The actual rigor is very lacking. As a brief example, the linked adoption study had a big run-in with the Flynn effect, which wasn't accounted for, and even the after-the-fact accounting was difficult, not straightforward, and not entirely solvable, as my other link discussed in detail. That's worrying, because the effect is one of those things I was just harping on, where different tests set the "100" center in different places. This, mathematically, is something incredibly fundamental; a quick sketch of the size of the problem is below. The fact that major studies could be released that didn't even think about this effect is extremely telling. "Tendency for misunderstanding" is a more subjective claim but seems to be true. You could argue about it when it comes to academia, but in the popular imagination, certainly: look at TikTok conversations about it, look at online tests which you pay for and which usually tell you you're smart, look at how few people even know what a normal curve actually is. So again, my issue is with the general attitude that we can do whatever we want with IQ because screw the liberal haters with their heads stuck in the sand. It's not that I think IQ is useless or meaningless, but rather that we need to demonstrate a higher level of care than a hand-wavy "yeah, it's intelligence", especially when such a claim is (IMO) rooted in a persecution complex against liberal sociologist-types rather than a factual defense of the metric itself. I hope that makes sense.
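As a back-of-the-envelope sketch of why stale norms matter: the ~3-points-per-decade figure is the commonly cited ballpark for the Flynn effect, and the function below is a toy model, not any real test's norming table.

```python
# Toy model of the Flynn effect's impact on norming, assuming the
# commonly cited ~3 IQ points of drift per decade.
def iq_against_norms(ability_z: float, norm_year: int, test_year: int,
                     flynn_per_decade: float = 3.0) -> float:
    """IQ assigned to a fixed level of performance when scored against
    norms collected in norm_year but administered in test_year."""
    drift = flynn_per_decade * (test_year - norm_year) / 10
    return 100 + 15 * ability_z + drift

# The same person, the same raw performance, scored in 2000:
print(iq_against_norms(0.0, norm_year=2000, test_year=2000))  # 100.0
print(iq_against_norms(0.0, norm_year=1970, test_year=2000))  # 109.0 on stale norms
```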
I don't think I went quite that far, but I do want to apologize for that; I do have a tendency to get direction-brained in these conversations.
I think you got carried away with this one. What doesn't make sense here? IQ tests are tests that contain questions, the answers to which are objectively right or wrong. This is in stark contrast to agree/disagree scales, for which there isn't an objectively correct answer. You can posit an abstract state of objective "agreement" in your test subject that your test is meant to measure indirectly, but that's even more ill-defined than "intelligence", and it's measured even more indirectly than IQ measures intelligence, precisely because IQ tests have the "objectively right answers" property and agree/disagree scales don't.
Take something like a strength test: barring outright fraud, a researcher cannot change the fact that someone was able to generate X newtons of force during the test, because the test is objective. Similarly with IQ tests, you can't hide the fact that someone was able to find the correct pattern the test asked about, or correctly performed an arithmetic task. This is very important, especially when studying politically fraught questions, because most researchers will prefer to find what aligns with their political views rather than what reality tells them.
Literally none of that matters if you're only comparing people within the group you selected for the study.
And what if you don't make an equivalency jump, but just say "I don't care"? It feels like splitting hairs over whether dead-lifting is an actual measure of "strength". Very similar arguments could be made for answering in the negative, but that doesn't seem to change the fact that the test is more than adequate to answer some very interesting questions, in a far more rigorous manner than almost anything else coming out of psychology.
Yeah, and this applies to the rest of psychology as well. What you actually have to argue is that it's worse in the case of IQ research than in other parts of psychology. I'm not an IQ-research connoisseur, but back when we still had those hanging around here, every single conversation I saw had them making the exact same arguments you make when defending the rest of psychology: look at this esoteric methodology paper talking about latent variables. Or: yeah, that was an issue in the past, but we fixed it. This is why I've yet to hear an argument burying IQ research that doesn't bury the entire field of psychology.
IQ tests sometimes contain questions with multiple right answers, or multiple logical approaches, though some believe that the better-designed ones eliminate these, while others think that approaching a problem from multiple angles is a better judge of intelligence than getting the right answer fast, or than simply knowing the format of the test. And that's a persistent problem in many of these tests, where even a passing familiarity with the type of question asked often affects performance. Like, if you grew up in a household with certain puzzle books, which not all people do, your brain has already been primed to perform a few of these tasks, and will spend less time on processing overhead like trying to understand the instructions. Which, by the way, oh yeah, many IQ tests also time you and factor this into their final scores. That factoring already introduces a designer-specific subjective judgement about the relative weight of time versus accuracy, which is yet another reason the "right and wrong answer" paradigm is not quite accurate (a quick sketch of how that weighting choice can flip a ranking is below). Like, if you had to choose between a child who takes 8 seconds on a task and does it right the first time, one who takes 10 seconds but tries 2 approaches in that time, one who takes 12 seconds but is extra sure they're correct, and one who takes 4 extra seconds to understand the question but 6 seconds to complete the task, which one is the most intelligent? Maybe some of these average out across many questions, and you still get a measure that's "pretty good", but did you miss something? Yeah, maybe so! You'll miss more if you weren't very careful in creating the question and didn't test it. But wait! You focus-group tested the question and it was fine, but how about the focus group's representativeness? Okay, maybe with that last one I'm being pedantic and petty, but I just wanted to illustrate how small biases can stack up if you aren't careful.
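Here's that sketch: a toy scoring function (all numbers invented) where the designer's choice of how heavily to penalize time literally reverses who looks "smarter".

```python
# Toy illustration (invented numbers): combining speed and accuracy into
# one score means the designer's time-weight choice can flip the ranking.
attempts = [
    ("fast but occasionally sloppy", 0.90, 8.0),   # (name, accuracy, seconds)
    ("slow but double-checks",       0.99, 12.0),
]

def score(accuracy: float, seconds: float, time_weight: float) -> float:
    """Accuracy minus a per-second penalty; time_weight is the
    test designer's subjective call."""
    return accuracy - time_weight * seconds

for w in (0.0, 0.03):
    ranked = sorted(attempts, key=lambda a: score(a[1], a[2], w), reverse=True)
    print(f"time_weight={w}: " + " > ".join(a[0] for a in ranked))
# time_weight=0.0:  slow but double-checks > fast but occasionally sloppy
# time_weight=0.03: fast but occasionally sloppy > slow but double-checks
```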
Another common example of bias raised by critics of IQ tests is how verbal sections often rely on exposure to specific words, or can be distorted by vocabulary size. If you grew up in a house where everyone talks like they're 18, versus having college-professor parents, your vocabulary exposure is radically different. Are questions that lean on that really a measure of innate intelligence, then? Some people say yes, but others say no (especially if your desired interpretation is about genetics). It's a classic case where the design assumptions of the test itself only show up much, much later, and might be dismissed as cosmetic and irrelevant when they're actually a fundamental failing. It's been a while since I took a specific factual look at the industry-leading tests, but this kind of thing was true for a long time. Maybe the field has advanced a lot?
With that said, you're absolutely correct that you can still rank participants within a study using IQ scores, but I want to really, really drive this home: the distance between subjects, even on the same test, is not predictable nor uniform nor interpretable without stating the assumptions explicitly, and many times the assumption step is skipped. So if you give 10 kids an IQ test and the final scores are ranked (let's ignore for now any relevant objections above), let's say without a time component, then yes, the IQ test will rank the students and you're fine. But if you use your normal-transformation technique, and get one child at 85 and one at 100, you can't interpret how big that gap is without considering how the test's normalization was done. It's not a gap between the students, because the test wasn't calibrated on them. If, for example, the test was calibrated on Chicago high school students, then your difference is one standard deviation in the normalized test results of Chicago high school students. If it's one standard deviation's worth from a theoretical population that you, um, forced to be perfectly normal in the first place... well, maybe you can see the issue that pops up: the math equivalent of begging the question. A small sketch of how the choice of norming sample changes the apparent gap is below. I hope this answers your point about comparing people within the group. Yes, you can compare, but the issue in this particular case is one of interpretation, which is what we in a thread like this are usually interested in.
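The sketch, with invented norming samples (a z-score version of the deviation-IQ recipe, for brevity): the same two raw scores yield different IQs, and a different-sized gap, depending purely on who the test was normed against.

```python
# Invented numbers throughout: the same two raw scores get different
# IQs, and a different-sized gap, depending on the norming population.
import random
from statistics import mean, stdev

def iq(raw: float, norming_sample: list[float]) -> float:
    """Deviation IQ via a z-score against the norming sample,
    rescaled to the conventional mean-100 / SD-15 scale."""
    z = (raw - mean(norming_sample)) / stdev(norming_sample)
    return 100 + 15 * z

random.seed(0)
city_hs = [random.gauss(50, 10) for _ in range(1000)]        # hypothetical norms
selective_prep = [random.gauss(65, 6) for _ in range(1000)]  # hypothetical norms

for raw in (45, 60):
    print(f"raw={raw}: IQ {iq(raw, city_hs):.0f} on the city norms, "
          f"IQ {iq(raw, selective_prep):.0f} on the prep-school norms")
# The 45-vs-60 gap is ~22 points on one norm and ~37 on the other.
```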
I will grant you that in many cases this is probably not going to make a massive, massive difference. But there will be a difference, and your test's rigor will be degraded, and if you make a habit of hand-waving these things away, at some point you actually will have a mountain of waived assumptions that will come back and bite you. Plus, some of these logical leaps can torpedo entire analyses; they aren't always small effects. My perhaps-wrong general impression of the IQ testing landscape is that hand-waving is a professional pastime. It's possible the IQ testing companies and researchers have carefully considered all of these issues in detail. I will confess I'm not all that motivated to go find the super-rigorous approaches, because, as described elsewhere, I think the whole concept is both unnecessary and more the equivalent of smart people masturbating to their own superiority than a genuine desire to understand how intelligence works, but that is more about my perception of the overall state of rigor in the field. It's not about the concerns I raised, because those are actually very potent concerns that come up often and need to be addressed.
Just as a brief look, I guess the WAIS-IV is popular? They seem to have tweaked it so there are a few sub-index scores; there's still perhaps a bit of test bias, though the extent is debated; they did a better job indexing and calibrating the test; but they also put a strong speed emphasis on some sections, which, as I mentioned, is a big hang-up for some. Some concerns, like the multiple-answer and verbal paradigms, I can't evaluate quickly. And, worst of all, who makes and develops the test? Pearson. Fuck. Potentially a whole lot of issues there. Did you know that one company peddled a screening test (actually used in many schools!) that a major study found could determine whether a child was reading at grade level only 3-ish percent better than a COIN FLIP? Still mad about that one. This podcast miniseries about that whole debacle was super interesting; you might find it a fun listen.
I consider my own critique to be more about methodology and "remembering the fundamentals" than a takedown of the whole field of psych, but if you see it as easily becoming such, I don't think that's too much of a stretch, so I get that and would respect it. Personally, I think it's the field of education specifically that is pretty piss-poor. Enjoyed our convo for sure! I guess my ideal scenario is to get people to assign much lower weight to IQ scores and be more careful when using the info, or, more broadly, to make sure we're using and evaluating statistics more contextually, remaining aware of the fundamental limitations. Like, for the HBD stuff, if I were to boil it down and exaggerate for effect a bit: the paradigm of "design racist test -> get racist results -> conclude the poorly performing races must be dumber" is one that needs a bit of consideration, rather than jumping straight to "why are you doubting all my fancy numbers". Or, not even bringing race into it: if a hypothetical bad test heavily rewards general knowledge over "pure skill" kinds of questions, and we then use it to gauge genetic stuff with "pure skill" connotations, we shouldn't be surprised-pikachu-face when we get weird results. And even if we get normal, expected results, we have to go "for fuck's sake, we weren't even testing for that, so the interpretation doesn't work". Anyways, rant over. Cheers :)