
Culture War Roundup for the week of August 26, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I have an internal feeling of justice that calls for extremely severe penalties for these people. I guess I'm in the minority, since it doesn't happen.

Samesies.

I say slash and burn, take their money away, give them humiliating tattoos and make them work at McDonald's somewhere far away from all their friends, or worse. Normal criminals couldn't do that much harm in a lifetime.

I'm not saying to impose the death penalty on the guy.

But I'm not not saying it.

What always impresses me is how the system seems to have evolved into such a highly polished and lubricated machine that you can sling blame all you like; it won't stick to any individual component.

Almost everyone in the chain of decisions that led to the outcome can just say "Well, it's not MY fault, I was just relying on [other link in chain], which is what the best practices say!"

Maybe even the guy who produced the fraudulent research can say "I was relying on inexperienced lab assistants/undergraduates who produced faulty data!" I don't know.

But there has to be some method of accountability. Like you say:

However, when it comes to mechanical engineering, we've learned to build bridges that stay up.

The (apocryphal) story about Roman architects being required to sleep under the bridges or arches they built is on point here. Bridges stay up (except when they don't) because there's a close enough loop between the decision-maker and the consequences of failure. It maybe doesn't have to be quite as tight as "you must be directly harmed if your decisions harm others," like with the bridge story, but it has to make them afraid, on some level, of being punished if they screw up.

I'm not entirely sure how to bring the consequences for screwing with medical research into a tight loop. One might hope it would be enough to say "If you ever end up in a hospital needing treatment, YOUR RESEARCH is going to be used to treat you," and thus they should have some concern about getting it right. But that's a very distant, uncertain threat. What would be a (proportional) threat you could make to bring punishment down on them the very instant the misconduct is uncovered? And how can you trust the entity that claims to have uncovered misconduct?

Prediction markets offer one way to put more skin in the game, but I'm not convinced they would be a significant deterrent to others attempting fraudulent research.
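To make my skepticism concrete (every number below is invented purely for illustration, not drawn from anywhere):

```python
# Toy expected-value comparison; all numbers are made up for illustration.
career_payoff = 500_000   # hypothetical value of the grants/status a flashy result brings
market_stake = 10_000     # hypothetical amount the researcher must stake on "this replicates"
p_caught = 0.2            # hypothetical chance the fraud is ever exposed

# Without a market: keep the payoff unless caught.
ev_no_market = (1 - p_caught) * career_payoff            # 400,000
# With a market: additionally forfeit the stake if caught.
ev_with_market = ev_no_market - p_caught * market_stake  # 398,000

print(ev_no_market, ev_with_market)
```

Unless the required stake is enormous or the odds of exposure are high, the expected value of committing fraud barely moves.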

And if we set up some organization whose specific objective was punishing those whose academic fraud causes harm to others, that institution would simply become another target for capture by vested interests. I think it has to be something a bit more 'organic.'

It's unfortunate that our society so fully understands the necessity of this in some contexts, yet seems ignorant of it in others. We take a strong, appropriate stance on cases of financial fraud - witness SBF's 25-year sentence, or Madoff's effective life sentence. Yet in science and medicine we seem to let fraudsters play in a fake world with no consequences for their actions.

Perhaps it's simply an issue of legibility: it's easy to measure when money goes missing, but when studies fail to replicate and medicines fail to work, there are so very many explanations other than "that man lied".

Great point.

It's also the fact that those financial fraudsters benefit financially, and immensely, from their crimes. We can measure the benefit they got for causing harm to others, too.

As far as I know, most academic fraudsters, ironically, don't become fabulously wealthy, but may gain a lot of status and acclaim.

So that both makes it even less sensible that they'd commit fraud, and makes the nature of the harm harder to articulate. As you say, "that man lied, and as a result got dozens of speaking slots at conferences, became the envy of graduate students in his field of study, and was able to afford an upper-middle-class lifestyle" doesn't seem as legible as "that man lied and made $80 million."

Maybe even the guy who produced the fraudulent research can say "I was relying on inexperienced lab assistants/undergraduates who produced faulty data!" I don't know.

He does say that. In general, who 'produces' a given piece of research is very difficult to nail down, because the head of the lab (the guy in the article) hasn't done labwork for twenty or thirty years, and the work will have been done by many PhD students who come and go within four years. The recipients of Nobel prizes have often done none of the physical work that produced the result, or the analysis, or the write-up. Sometimes they didn't even come up with the theory.

Sorry, I think I'm spamming this thread, but it's a topic close to my heart.

I think the point of having a principal investigator is that they are aware of what is going on.

If they are not in the loop of the research process, there is no point in them being on the paper, and they are just academic rent-seekers.

Granted, at some level, you have to trust in the non-maliciousness of your grad students. If a smart and highly capable PhD candidate decides to subtly massage their data, that could be difficult or impossible for their supervisor to catch. The way to avoid that is not to incentivize faking data (e.g. no "you need to find my pet signal to graduate"). The PhDs who would fake data out of laziness are more easily caught; producing convincing fake data is not easy.

Of course, in this case, we are not talking about terabytes of binary data in very inconvenient formats, but about 170 patients. Personally, I find it highly unlikely that the graduate student found that data by happenstance and that his supervisor was willing to let him analyse it without caring about the pedigree of the data at all. I think the story that the supervisor provided the data in the first place, years after it was curated by another grad student whose work he did not check, is more likely.

The recipients of Nobel prizes have often done none of the physical work that produced the result, or the analysis, or the write-up. Sometimes they didn't even come up with the theory.

In my field, physics, I don't generally feel that is the case. For one thing, people tend to get their Nobels much later than their discoveries. From my reading of Wikipedia, when Higgs (along with a few other groups) published his paper on the Higgs mechanism, he was about 35, had had his PhD for a decade, and a job as a Lecturer (no idea if this implies full tenure) for four years. Not exactly the archetype of a highly decorated senior researcher who gets carried by tons of grad students towards his Nobel.

and a job as a Lecturer (no idea if this implies full tenure) for four years

In the traditional British system of academic titles, "lecturer" is the lowest of four grades of permanent academic staff (lecturer/senior lecturer/reader/professor) which loosely correspond to the tenure track in the American system. American-style tenure doesn't exist, because all UK employees benefit from protection against unfair dismissal after two years full-time work on a permanent contract. Taking 14 years to be promoted from lecturer to reader (per Wikipedia) was quite normal at the time for academics who were not seen as superstars by their colleagues.

So if we are going to draw a direct equivalent to the US system, Higgs was 4 years into his first tenure-track job when he published his Nobel paper, but the importance of the paper wasn't recognised for another decade+.

I will caveat that this defense doesn't seem particularly workable here. As JamesClaims points out, "the problems with the original DECREASE study were reasonably straightforward to detect." Some of the testimony in the final report also points to errors that would have been hard to spot without access to the underlying data (though "The principal investigator checked the work of the PhD candidate at random and never noticed anything unusual." seems... pretty clearly just a lie?), but this summary is a lot closer to the heart of things.

These are results that, just from the final papers themselves, range from wildly implausible statistics to GRIM errors to confusing entirely different drugs. This is the sort of thing that should have resulted in deeper scrutiny.
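(For anyone unfamiliar: a GRIM error is a reported mean that is arithmetically impossible given the sample size, since a mean of n integer-valued responses must be an integer divided by n. The check is simple enough to sketch; this is my own Python illustration with made-up numbers, not anything from the report or from JamesClaims.)

```python
import math

def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if a mean reported to `decimals` places could arise
    from n integer-valued observations -- the core of the GRIM check."""
    target = reported_mean * n
    # The true sum must be an integer near reported_mean * n; try the candidates.
    for total in range(math.floor(target) - 1, math.ceil(target) + 2):
        if round(total / n, decimals) == round(reported_mean, decimals):
            return True
    return False

# Made-up example: a mean of 5.19 from 28 integer scores is impossible,
# because no integer sum divided by 28 rounds to 5.19.
print(grim_consistent(5.19, 28))  # False
print(grim_consistent(5.18, 28))  # True (145 / 28 = 5.1786 -> 5.18)
```

Anyone with the published paper and a calculator can run this kind of check, which is what makes these errors so damning: they were detectable without the underlying data.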

I'm not quite willing to sign onto JamesClaims' "strict liability" approach yet, but I don't think you need to in order to look at this one and suspect, at best, wilful blindness by the principal authors.

No, you've added VERY useful context!

And this is what I mean. If no one person's neck is on the line for a screwup, then it's not surprising they'll just passively approve whatever the underlings scrounge up, and not question the incentives of said underlings.

It makes me really annoyed because I work in a small office with assistants who handle a lot of work, and I am the one who signs off on everything at the end of the day, so I am the one eating crow/falling on swords if there is a serious screwup.

I just want to believe that other people take their jobs and the accuracy of their output half as seriously as I do!