Culture War Roundup for the week of October 30, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I don't have (and don't have the expertise to make) a biology textbook example written up, but I do have a pretty serious nuclear science one written up for you, where a man with no serious bias and far greater technical knowledge than I can legally get still has bizarre faults that I can verify. (If Atomic Accidents doesn't count as textbook, Radium should, and probably is the source of one of Mahaffey's miscites.)

In one sense, you're not wrong -- if you scrawled down every single fact mentioned in the book onto sticky notes, and put the ones that were strictly true on one side of the scale and the ones that were strictly false on the other, the scales would lean toward truth. But if you were looking at the ones that were core claims for the book's theme, and then separated your sticky notes into those you can confirm and those you can't... truth might still win out, but it wouldn't be a lopsided victory.

And this gets worse the less concrete the topic. The problems for psychology are legendary (and often hilarious), and even the strictest focus on textbooks has pretty sizable faults. Nutrition science is a joke, both in academia and on Wikipedia. We've hit the point where the literal definitions of words get redefined, or have their definition updates streamlined, for political relevance.

It also gets bad where a lot of the 'experts' aren't. That's most obvious for the political stuff, where the same people who bash randos for 'doing their own research' will take a long stiff drink of water before opining on topics where their expertise is limited to having gone to journalism school and slammed too much alcohol down there, or to having leveraged training in one field as expertise for an entirely different one. Yet there are fields where the experts and textbook writers are plain liars and no one cares because it's not going to end up on the news even if it does end up on television.

But even for materials science, as concrete as can be, there's a lot of stuff that's just a mess; not that anyone's lying, but that they genuinely don't know. I've got an essay I've been working on, and the punchline revolves around this stuff. It's been around a decade, has no Wikipedia page, won't be in any textbooks, and has been the subject of literally millions of dollars in grants for analysis of a material that can be mixed in a garage, and I don't know if it 'reals' or not, or to what extent the claimed benefits are of the claimed magnitude. There are a lot of red flags in all nano-whatsis stuff, and especially in the stuff around this one, but it's weird for the DoE and Argonne and a dozen other labs to be looking at it seriously. And they've all been looking at it seriously and not publishing a ton, for a material that you'd expect to see in cars and boats and household electronics.

The problem isn't a lackluster number of people I could point to who could be more serious investigators of this than myself. It's that they don't have an answer, or to the extent they do, they have a half-dozen different ones.

Yet there are fields where the experts and textbook writers are plain liars and no one cares because it's not going to end up on the news even if it does end up on television

The occasional news item about a fraudulent researcher just reinforces the idea that scientific malpractice consists of a tiny number of evil researchers who clearly violate scientific standards by fabricating data and that all other researchers do a great job.

In reality, most bad science consists of fairly subtle manipulations or bad practices like p-hacking, tiny data sets, misrepresenting the actual findings, measuring the wrong thing, etc. Much of this is due to incompetence, where the researchers get taught 'this one weird trick' which is good enough to get their papers accepted, but without actually understanding what the strengths and weaknesses of their method(s) are.

This incompetence is fueled by the scientific reward system, which rewards those who do bad science and punishes those who do good science. This is limited only by the ability to get away with BS, which is why fields like physics are a lot better: engineers, and the companies that employ them, call out scientists when they can't build working things based on the claimed scientific discoveries.

It is quite hard, though, to convince people whose worldview rests on trusting that our elites take good care of us, on the basis of mostly solid science, that science is fundamentally broken and most of the money spent on it is wasted.

The occasional news item about a fraudulent researcher just reinforces the idea that scientific malpractice consists of a tiny number of evil researchers who clearly violate scientific standards by fabricating data and that all other researchers do a great job.

Perhaps, if every fraudulent researcher found themselves as news items, or if academic research appropriately handled the well-established cases as they're discovered.

This is an old post, but I'll highlight it for three reasons: you've never heard of the researcher (and I'd never heard of the entire university), it hit a 'hard' science field, and (most unusually) the publisher explicitly and publicly said they weren't going to treat fraud as fraud where usually that's just decided privately.

That I can only demonstrate clear intentional fault in a small portion of all papers, and don't know how prevalent this is: that's fair. No one knows how prevalent this is. Attempts to discover research fraud occur almost entirely at the hobbyist level, and the people trying to catch that overt fraud are dependent on tells like Photoshop goofups or division errors, with only rare opportunities to see the raw data. Ariely's fraud only came to light after failed replications, followed by the man sending over Excel spreadsheets with the fakest data imaginable. There's basically zero institutional interest in discussing even the highest-profile and most explicit fraudsters. It's not that we're only seeing the crashes; we're only seeing the crashes that happen in the middle of the city, after which the pilot steps out of the aircraft and recites a five-stanza poem about how they mismanaged the flight. We don't know if the fraud is extremely rare or if it's as common as the bad-but-not-fraudulent science.

I agree, and recognize that a lot of people have been trained to do bad-but-not-fraudulent science. I'll caveat that this division isn't always so cut-and-dried -- Wansink is my go-to for salami-slicing, but there's some evidence that at least a couple of his studies depended on fabricated data rather than 'just' p-hacking -- but it is relevant to keep in mind.

As I perhaps mention too often, you can come at it from the other end as well. The scientific method, as commonly understood, should preclude the creation and maintenance of entirely fictitious fields of study. If such fields can be observed to exist, that's strong evidence that the process as a whole is fundamentally broken, even if you can't identify the specific steps where the problem lies.

The scientific method, as commonly understood, should preclude the creation and maintenance of entirely fictitious fields of study.

There is actually no common understanding of the scientific method. What Judith Butler does is not comparable in any meaningful way to the work of Ferenc Krausz (and his team). Yet they both claim to do science, even though the work of one of them is not falsifiable.

Part of the charade that allows these nonsensical fields and subfields to exist is the claim that all the professors who work at universities do proper science, even when they do no such thing.

And the ideal scientific method is just aspirational anyway. In reality we cannot achieve that perfection even for physics. When it comes to fields that do not provide the preconditions for applying something even close to the ideal scientific method, people simply use less rigorous methods. Until you get to Judith Butler, where claims just get conjured up with bad logic, misrepresentations of what others actually proved, etc.

And like gattsuru says, there is a disturbing lack of interest by institutions (and voters) in even figuring out how well those who call themselves scientists actually do their jobs. Researchers with good morals who do look into it, invariably find highly disturbing results.