This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes:
- Short summary (a scientist erred/falsified results in heart disease treatments, up to 800,000 died)
- Full Vox link
I find the Vox article somewhat disturbing. They spend most of it discussing whether criminalization is the answer. With 800,000 dead, or at least some number in the high thousands, do they really need to spend so much time justifying and hedging the proposal? Why should they be carefully peeping their heads over the parapet, wary of sniper fire? If ever there was someone to cancel and demonize, it's this guy.
I have an internal feeling of justice that calls for extremely severe penalties for these people. I guess I'm in the minority, since it doesn't happen. The EcoHealth gang, Daszak and the Bat Lady of Wuhan are still living the high life. Meanwhile, scientists who dare to have sex with coworkers get their lives derailed.
I suppose that most people have their feelings of justice heavily weighted towards direct things like killing with knives, selling faulty goods or being mean. That makes sense: we didn't evolve to care about the probabilistic harms caused by institutional malpractice over many years. This is why I think we should have extra-strong prohibitions on this kind of non-obvious harm. Even a hardened EcoHealth researcher might have qualms about massacring 10-20 million people with guns and blades. It's a lot easier to do exciting, fun research and be a little slack on all those tedious safety checks. It doesn't feel so wrong, which is why they need to feel fear to counter it.
In the past I've made this sort of argument and been rebuffed by some people on the grounds that if we imposed very severe punishments then people would just double down on lying and blaming others to escape liability. Plus it would disincentivize people from taking up important roles.
However, when it comes to engineering, we've learned to build bridges that stay up. We appreciate that some kind of consequence should fall upon you if you adulterate food with plastic or replace the concrete with cardboard (or cardboard derivatives). Back in the early Industrial Revolution nobody particularly cared about safety, and there were plenty of bridge failures. We slowly had to evolve systems that corrected these problems, but we got there in the end.
Indeed, negligence is a big part of law. Mostly it works on the assumption that the harm-causing party is a big corporation or someone with lots of money. From a broad evolutionary point of view, that makes a lot of sense. Proving guilt and getting to the bottom of things takes a lot of effort, so you want to be sure that there will be a pay-off. It's like how creatures might evolve fangs to pierce flesh and get at that juicy meat. Entities that can cause lots of harm tend to have lots of resources.
However, academia gives us cases where there are no clear, direct, short-term links between the cause of harm and the victims. The cause of harm might be a few moderately well-off scientists. The harm itself might be hazy; there might be no ironclad proof of its magnitude and exact nature. Think how long it took to prove that cigarettes caused cancer: we had the statistical proof long before the exact causal mechanism was ironed out, and the costs of delay were phenomenal. Biology is the most obvious case where this happens. There was another case where Alzheimer's research was thought to be fraudulent, wasting many years and billions of dollars. I say slash and burn: take their money away, give them humiliating tattoos and make them work at McDonald's somewhere far away from all their friends, or worse. Normal criminals couldn't do that much harm in a lifetime.
AI likely falls into the same category, though it can probably be dealt with via more traditional negligence systems since it's mostly advanced by big companies. I am worried that it will take far too long for people to realize the danger posed by AI or by those who wield it; there isn't enough time to develop seriousness.
Anyway, I think it would be wise to develop ways to target and severely punish biologists who fraudulently or negligently allow harm (perhaps also praising and granting boons to those who uncover their fraud). This would be a positive incentive for singularitarian scenarios and virtuous in itself. We need to get out of the mindset of waiting for our market-Darwinist-legal system to fix things and attack problems pre-emptively. Or at least with a minimum of megadeaths.
I went and looked at the POISE 2008 trial and okay, it does look pretty convincing, but that one didn't actually filter for cardiac surgery. Without that study the results look a lot more balanced. Am I missing something here? It seems this is barely a meta-analysis, it's just "POISE and some others".
Also discussed on the ACX open thread here.
Crossposting my take:
Analogy time. Someone posts on twitter about the health benefits of drinking mercury. Millions follow them, the FDA starts to recommend a daily intake of 1g Hg. After a few years, someone happens to notice that a lot of people die of mercury poisoning.
Would it be fair to say that this twitter user has killed millions?
I would say perhaps, but there is clearly more blame to go around. Why would the FDA trust what a random person on twitter says? That is grossly irresponsible. Why did nobody notice all the bodies piling up?
Now, some people might claim there is a difference between trusting a random tweet and trusting a peer reviewed medical study, but in my mind, there is not -- only a complete fool would do either. At least do a meta-analysis of five studies done by different institutions (this leads to https://pbs.twimg.com/media/Dml8wLEUUAASwZi.jpg but still seems like the least worst option). I mean, if Scott had written an article "Beta blockers before surgery: much more than you wanted to know", I would not have expected him to say "well, this guy sure publishes a lot of studies in favor of them, so I guess they are fine".
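For concreteness, here is a minimal sketch of the pooling step of the kind of meta-analysis I mean: a fixed-effect, inverse-variance combination of effect estimates from several independent studies. All the numbers are invented purely for illustration.

```python
# Fixed-effect (inverse-variance) meta-analysis of hypothetical log risk ratios
# from five independent studies. All figures below are made up for illustration.
import math

# (log risk ratio, standard error) for five hypothetical studies
studies = [(-0.10, 0.15), (0.05, 0.20), (-0.02, 0.12), (0.20, 0.25), (-0.08, 0.18)]

weights = [1 / se ** 2 for _, se in studies]   # inverse-variance weights
pooled_log_rr = sum(w * lrr for (lrr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo = math.exp(pooled_log_rr - 1.96 * pooled_se)
hi = math.exp(pooled_log_rr + 1.96 * pooled_se)
print(f"pooled RR ≈ {math.exp(pooled_log_rr):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The machinery is not exotic; the hard part is having five honestly conducted studies from different institutions in the first place.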
Also, if the new clinical guidelines based on the fraudulent study lead to a fucking 27% excess mortality, there should be someone whose fucking job it is to notice that fact.
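As a rough illustration of how hard that excess would be to miss if anyone were watching the aggregate numbers, here is a back-of-the-envelope check using entirely invented figures (the patient count and baseline mortality are hypothetical, not taken from the article):

```python
# Compare observed deaths against the expected baseline with a normal
# approximation to the binomial. All numbers are invented for illustration.
import math

n_patients = 50_000        # hypothetical patients treated under the new guideline
baseline_rate = 0.02       # hypothetical expected perioperative mortality
observed_deaths = 1_270    # hypothetical observed deaths, ~27% above expectation

expected = n_patients * baseline_rate
sd = math.sqrt(n_patients * baseline_rate * (1 - baseline_rate))
z = (observed_deaths - expected) / sd

print(f"expected {expected:.0f}, observed {observed_deaths}, z = {z:.1f}")
# z comes out around 8.6 here: a signal that size is essentially impossible
# to miss if anyone is actually tasked with looking at the aggregate data.
```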
In a way, this feels like Boeing deciding to base their flight controls on a Windows 95 platform, and blaming Microsoft for the resulting computer+plane crashes. It is fine to say that Microsoft is to blame because Win95 was obviously not fit for sale, but the bulk of the error was the decision to control an airplane with it, so most of the blame would depend on the specifics: did MS actively push Fly-By-Win or did they not?
I was thinking the same thing. It should also be noted that the policy was enacted in 2009, while the debunking meta-study uses studies from 1996-2008. Someone was asleep at the wheel.
Remember the scientists convicted of manslaughter for earthquake predictions? If you severely punish scientists for harming someone, you're going to get tons of cases like this. A lot of scientists don't have political connections and therefore are easy scapegoats. Don't think "well, we could have been able to catch these scientists who really were responsible for lives" but rather "what else would we be enabling, by making it easier to catch these scientists?" (Yes, they were exonerated later, but the point still stands.)
Also, pretty much anything you do on a large scale involves lives. Approve a drug a little late and lives are lost if people couldn't get the drug. Approve a drug a little early and lives may be lost to side effects or displacing better drugs from the market. Support cars that run on fossil fuels and get dinged for all the lives lost to pollution or global warming. If you punish scientists for things that they do on a large scale that cost lives, you will no longer have scientists, because everything on that scale costs lives if you do it wrong and no human is 100% perfect.
Human wisdom is surely capable of distinguishing between imperfection, negligence and fraud.
For instance - software is released that's buggy = imperfection. Crowdstrike bricks tens of millions of computers = negligence.
No it isn't. That's why I gave that example.
There's also the question of malicious actors. You not only need the ability to distinguish between those, you need the ability to distinguish between those when faced by a hostile actor who is deliberately blurring them. You may know what negligence is, but if some politician looking for a scapegoat pointed to imperfection and said "that's negligence", would you be able to prove the politician wrong?
Well if human wisdom is so hard to find, why don't we torch the whole legal system? It has been misused by bad actors from time to time, I think we could both find examples of this.
The cost of not having a legal system (anarchy) is greater than the cost of having a legal system. I suspect that the cost of introducing more rigour to high-impact academic research would be much less than the anarchy we have today and its associated megadeaths.
There are systems in place that prevent politicians from calling each other foreign traitors, paedophiles and fraudsters and then having everyone credulously believe them, guaranteeing their victory. Human wisdom is first and foremost amongst them.
As a lawyer, much of it is in need of torching, or at least disassembly and reconstruction.
The "system" that prevents this is the "everyone" part. A politician who calls a scientist a fraudster under your system doesn't have to convince everyone--he just needs to convince the police and a judge.
If you can convince the police and the judge, you can already have someone whisked off to prison or shot dead.
Everything also costs lives if you do it right. We just try to balance the net costs and benefits of things and try as best we can to figure out which option gets us in the green.
Prosecuting fraud will encourage even more scientific groupthink. The only results that will ever get much scrutiny for fraud are those that seem exceptional; someone manufacturing data that supports whatever the consensus is on a topic may hypothetically be subject to prosecution, but they'll rarely be prosecuted. Publish something questioning it, though, and you'll have lots of enemies digging through your work to find something that could be construed as fraud to get rid of you permanently.
You might say, just don't commit fraud, and you'll be safe. But that's not even true: if you're a PI, you rely on lots of different people. And it's often impossible for someone external to know who is responsible and who is passing a buck. And even when there isn't fraud, a simple mistake can be construed as fraud if your enemies are sufficiently motivated.
Samesies.
I'm not saying to impose the death penalty on the guy.
But i'm not not saying it.
What always impresses me is how the system seems to have evolved into such a highly polished and lubricated machine that you can sling blame all you like, it won't stick to any individual component.
Almost everyone in the chain of decisions that led to the outcome can just say "Well it's not MY fault, I was just relying on [other link in chain], which is what the best practices say!"
Maybe even the guy who produced the fraudulent research can say "I was relying on inexperienced lab assistants/undergraduates who produced faulty data!" I don't know.
But there has to be some method of accountability. Like you say:
The (apocryphal) story about Roman architects being required to sleep under bridges or arches they built is on point here. Bridges stay up (except when they don't) because there's a close enough loop between the decision-maker and the consequences of failure. It maybe doesn't have to be quite as tight as "you must be directly harmed if your decisions harm others" like with the bridge story, but it has to make them afraid, on some level, of being punished if they screw up.
I'm not entirely sure how to bring the consequences for screwing up academic research into medical treatments into a tight loop. One might hope it would be enough to say "If you ever end up in a hospital needing treatment, YOUR RESEARCH is going to be used to treat you," and thus they should have some concern about getting it right. But that's a very distant, uncertain threat. What would be a (proportional) threat you could make to bring down punishment on them the very instant the misconduct is uncovered? And how can you trust the entity that claims to have uncovered misconduct?
Prediction Markets offer one way to put more skin in the game, but it doesn't quite satisfy me that it would be a significant deterrent for others attempting fraudulent research.
And if we set up some organization whose specific objective was punishing those whose academic fraud causes harm to others, that institution would simply become another target for capture by vested interests. I think it has to be something a bit more 'organic.'
It's unfortunate that our society so fully understands the necessity of this in some contexts, yet seems ignorant of it in others. We take a strong, appropriate stance on cases of financial fraud - witness SBF's 25-year sentence, or Madoff's effective life sentence. Yet in science and medicine we seem to let fraudsters play in a fake world with no consequences to their actions.
Perhaps it's simply an issue of legibility: it's easy to measure when money goes missing, but when studies fail to replicate and medicines fail to work, there are so very many explanations other than, "that man lied".
Great point.
It's also the fact that those financial fraudsters benefit immensely from their crimes. We can measure the benefit they got for causing harm to others, too.
As far as I know, most academic fraudsters, ironically, don't become fabulously wealthy, but may gain a lot of status and acclaim.
So it both makes it even less sensible why they'd commit fraud, and harder to articulate the nature of the harm. As you say, "that man lied, and as a result got dozens of speaking slots at conferences and became the envy of graduate students in his field of study, and was able to afford an upper-middle-class lifestyle" doesn't seem as legible as "that man lied and made $80 million."
He does say that. In general, who 'produces' a given piece of research is very difficult to nail down, because the head of the lab (the guy in the article) hasn't done labwork for twenty / thirty years and the work will have been done by many PhD students who come and go within four years. The recipients of Nobel prizes have often done none of the physical work that produced the result, or the analysis, or the write-up. Sometimes they didn't even come up with the theory.
Sorry, I think I'm spamming this thread, but it's a topic close to my heart.
I think the point of having a principal investigator is that he is aware of what is going on.
If they are not in the loop of the research process, there is no point for them to be on the paper and they are just academic rent-seekers.
Granted, at some level, you have to trust in the non-maliciousness of your grad students. If a smart and highly capable PhD candidate decides to subtly massage their data, that could be difficult or impossible for their supervisor to catch. The way to avoid that is not to incentivize faking data (e.g. no "you need to find my pet signal to graduate"). The PhDs who would fake data because they are lazy are more easily caught; producing convincing fake data is not easy.
Of course, in this case, we are not talking about terabytes of binary data in very inconvenient formats, but about 170 patients. Personally, I find it highly unlikely that the graduate student found that data by happenstance and that his supervisor was willing to let them analyse it without caring about the pedigree of the data at all. I think the story that he provided the data in the first place, years after it was curated by another grad student whose work he did not check, is more likely.
In my field, physics, I don't generally feel that is the case. For one thing, people tend to get their Nobels much later than their discoveries. From my reading of Wikipedia, when Higgs (along with a few other groups) published his paper on the Higgs mechanism, he was about 35, had held his PhD for a decade, and had been a Lecturer (no idea if this implies full tenure) for four years. Not exactly the archetype of a highly decorated senior researcher who gets carried by tons of grad students towards his Nobel.
In the traditional British system of academic titles, "lecturer" is the lowest of four grades of permanent academic staff (lecturer/senior lecturer/reader/professor) which loosely correspond to the tenure track in the American system. American-style tenure doesn't exist, because all UK employees benefit from protection against unfair dismissal after two years full-time work on a permanent contract. Taking 14 years to be promoted from lecturer to reader (per Wikipedia) was quite normal at the time for academics who were not seen as superstars by their colleagues.
So if we are going to draw a direct equivalent to the US system, Higgs was 4 years into his first tenure-track job when he published his Nobel paper, but the importance of the paper wasn't recognised for another decade+.
I will caveat that this defense doesn't seem particularly workable, here. As JamesClaims points out, "the problems with the original DECREASE study were reasonably straightforward to detect." Some of the testimony in the final report also points to errors that would have been hard to spot without access to the underlying data (though "The principal investigator checked the work of the PhD candidate at random and never noticed anything unusual." seems... pretty clearly just a lie?), but this summary is a lot more to the heart of things.
These are results that, just from the final papers themselves, range from wildly implausible statistics to GRIM errors to confusing entirely different drugs. This is the sort of thing that should have resulted in deeper scrutiny.
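For readers unfamiliar with it, the GRIM test checks whether a reported mean is even arithmetically possible given the sample size, assuming the underlying data are integers. A minimal sketch, with made-up example values:

```python
# GRIM (Granularity-Related Inconsistency of Means) check: the mean of n
# integers can only take values k/n, so a reported mean that rounds to
# anything else is arithmetically impossible. Example values are invented.

def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """True if reported_mean (to `decimals` places) is achievable as a mean of n integers."""
    k = round(reported_mean * n)               # nearest achievable integer total
    return round(k / n, decimals) == round(reported_mean, decimals)

print(grim_consistent(3.48, 25))   # True:  87 / 25 = 3.48 exactly
print(grim_consistent(3.47, 25))   # False: no integer total over 25 gives 3.47
```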
I'm not quite willing to sign onto JamesClaims' "strict liability" approach yet, but I don't think you need to in order to look at this one and suspect, at best, wilful blindness on the part of the principal authors.
No, you've added VERY useful context!
And this is what I mean. If no one person's neck is on the line for a screwup, then it's not surprising they'll just passively approve whatever the underlings scrounge up, and not question the incentives of said underlings.
It makes me really annoyed because I work in a small office with assistants who handle a lot of the work, and I am the one who signs off on everything at the end of the day, so I am the one eating crow/falling on swords if there is a serious screwup.
I just want to believe that other people take their jobs and the accuracy of their output half as seriously as I do!
I wish there were a study which looked into the personality and religious beliefs of scientific fraudsters. It doesn't appear to be linked to race, culture, or country of origin, but I can easily see religious belief being correlated with lower levels of fraud, because it's all about intrinsic moral motivation and identity. And if that's so, our institutions should require religious belief in a personal deity for high-level positions which require trust, without favoritism toward any one system of belief or denomination.
Institutions which require belief in God, but not a specific belief, are uniquely vulnerable.
Like, the Knights of Columbus and the LDS church are some of the more based large organizations with fairly tight control over membership - and requiring members to specifically practice a particular religion is part of it. The LDS church would not function on the basis of 'anyone who believes in Jesus and tithes'. The Knights of Columbus would turn into a progressive skin suit if they dropped the specificity of religious practice requirements.
At best, your proposal sounds like communist party membership- checking a box to say you did it, with no real effect on actual beliefs.
The LDS Church is notoriously full of affinity frauds and MLMs. When arranging my babymoon, I remember wondering why hotel room rates were so high in SLC in the summer, and discovering when we got into town that there were two large MLM conventions on at the same time. Sitting at one table in the brunch room at the Grand American Hotel planning a hike while at the next table over a Mormon with a blinged-out name badge was explaining to two evangelical churchladies with rather less blinged-out name badges how they could convince their church members that God wanted them to buy MLM crap was an eye-opening experience.
That’s what the Masons tried… and ended up accused of all sorts of evils.
That’s what the Boy Scouts tried… and ended up a skinsuit for the egregore.
Do you expect it to be different from any other crime? That is, well-surveyed but completely unable to suss out causation. The main predictor is going to be material cost/benefit analysis, not internal experience of morality.
Speaking of confusing correlation and causation—requiring leaders to publicly profess belief in something unprovable, immeasurable, and also fictional would not encourage trust.
There are large differences between types of crime. Fabricating science is not a crime of passion. I don’t think it is impossible to suss out causation. An easy way to determine causation is by analyzing adopted twin data according to religiosity and family income and so on.
We have cases of people who abstain from a crime despite the opportunity. Personal religion involves costs and benefits that are salient and compelling, some of which are non-material and some are material-to-be.
We do not require students applying to university to tell us what their teachers say about them, we require the teachers to tell us.
This ignores the potential for either favoritism or lack of oversight towards people who are in the same church. After all, our church has the best people. /s
Loads of fraudsters have intrinsic moral motivation and identity. There are people who fake for money and fame, yes, but lots who fake because they believe they're RIGHT and they think they're the chosen one who will finally solve X. The fact that the data doesn't support X is just a temporary setback - the next experiment will surely prove they were right all along, but right now they need to bolster it a bit.
Sophisticated religious systems include humility-generating training regimens involving introspection and self-judgment, whether that be confessions or something more rigorous like the Ignatian Examen.
As some background for people, scientific labs usually have a three-level structure:
Looking over the academic investigation, it seems like a classic fudge: the PI says the data was compiled by a graduate student who never formalised it and took the original creation methodology with them when they left (entirely possible). The writer (a graduate student) is in hiding and refused to talk to the committee but gave various unconvincing reasons in writing why the false database can't be theirs (e.g. their database was a different file format). A bunch of graduate students were involved with the project, and their papers also seem to be dodgy in various ways.
Quoting the investigation committee:
The most likely explanation is that it's some combination of graduate students signing up to work with a prestigious PI, being put under huge pressure to produce results, and taking the low road rather than destroying their career. By the sounds of it, the data was passed around enough times that nobody was sure where it came from, and so didn't consider themselves to be doing anything more fraudulent than trusting their colleagues. It's entirely possible that the PI didn't know - or didn't want to know - that this was happening. His name is on most of the papers but that's standard for a PI and doesn't prove he did anything except fund the work.
Should the PI be punished, in the absence of positive evidence they did something wrong? Possibly. Probably, even. I can certainly see the argument for it and it might help. But at some point you have to do something to make academia less soul-destroying, otherwise it's like beating a horse and then killing it when it kicks.
You can of course control all of this to some extent by regulating how work is done and how data is handled, and in the last twenty years we mostly do. A big part of what the PI is getting dinged for in the original report is not following the appropriate guidelines. But academic labs aren't big pharmaceutical companies with lots of money to spend on compliance, so research output takes a big hit when this regulation becomes strict.
Secret data, and more importantly secret code (any programs, algorithms, statistical techniques, data cleaning, etc.), would never cut it in the professional world. If you're a data scientist or a product manager proposing a change to a company's business processes, you need to have your work in source control and reviewable by other people. There's no reason academics can't do the same. Make the PI responsible by default unless they can show fraud in the work their underling did; if they didn't review their underling's work, then the PI is fully responsible. This would have the added benefit that researchers would learn useful skills (how to present work for review) for working in industry.
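To make the "reviewable" part concrete, here is one small example of what it could look like in practice: recording a checksum of the exact dataset an analysis ran on, committed alongside the code, so a reviewer or the PI can later verify that the numbers trace back to a specific, unaltered file. The file name below is hypothetical.

```python
# Record a fingerprint of the dataset used for an analysis, so reviewers can
# check that results trace back to a specific, unaltered file. The file name
# used in the example is hypothetical.
import hashlib
from pathlib import Path

def dataset_fingerprint(path: str) -> str:
    """Return the SHA-256 hash of a data file."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

if __name__ == "__main__":
    # e.g. print this and commit it next to the analysis script in version control
    print(dataset_fingerprint("patients.csv"))
```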
Perhaps that should change, and perhaps one of the ways we could make it change would be to penalize the names on the papers for fraudulent work.
The problem with any proposal to punish large-scale indirect and negligent harms that arise from engaging in some otherwise permitted activity carelessly is that people only get excited about the prospect of doing it to people and professions in their outgroup. It's easy to believe that academics or COVID policymakers should be forced to work under the looming threat of punishment for any consequences that can be traced to their research or policy when the personal suffering of scientists and COVID policymakers leaves you cold and you suspect that either activity has no or negative value anyway. However, if you are not willing to accept that the same principle apply to people and professions in your coalition, you will never be able to gather the requisite support from the other side. Would you be happy to bite the bullet and also develop ways to target and severely punish company executives for the same thing - say, holding everyone near the top of the tobacco industry responsible for lung cancer cases, or oil executives for deaths that were statistically traced to global warming? What about holding every media personality who signal-boosted dubious COVID treatments ranging from Ivermectin to antibiotics accountable for projected delta-deaths that resulted from people following their advice? If not, the other side of the culture war will rightly suspect that you actually just want to make life hard for their champions while ensuring that yours can continue operating unencumbered.
They're not analogous. People consume oil and tobacco willingly. They don't, or wouldn't, acquiesce to GoF research or a cardboard bridge. If we're trading horses, you could have oil spill C-suits. Of course we can't draconically punish a low-level technician for releasing a virus if he was not adequately compensated for that responsibility.
I know this secretary in a big company, she would produce and present documents for the (somewhat dumb) CEO to sign and she was revolted that he wouldn’t understand, or even read, most of what he was signing. One day she was in a hurry to get some papers through, and instead of waiting for the CEO, another executive suggested that she sign them herself. “I’m not paid to sign!” she replied. And that’s true. The signature has to mean something. Some skin in the game, buck stopping power. In theory, they’re compensated for it, but they’re not really accountable for it. That's the way the managerial class wants it. But the rest of us don't have to take it.
A cardboard bridge would probably get glowing reviews. Wait, no, it already has. They’d never sign off on one for actual traffic…until they do, because it’s cheaper, greener, or just novel. If it could possibly make someone money, they’ll try it.
Same for Gain-of-Function. People want new technology. People also don’t think very hard about externalities. So they vote for people who run programs that do GoF, and they go into biotech programs in college, and they do GoF research to fund a PhD.
The most obviously terrible ideas are the most likely to become a contrarian marketing gimmick. Business as usual.
The material is immaterial – what’s punishable is the broken promise (signature) that the bridge would stand.
I fear that in your attempts to shield some people from accountability you have descended into total nihilism. “No one will ever be held accountable, even if they build a literal doomsday device.” “Okay! That’s bad!”
If I got frustrated in a position like that and my concerns got knocked back, the next documents the CEO would be presented to sign would be requests for an increase in how much I was being paid.
I mean, there are two separate issues here and it's important not to conflate: 1) scientific fraud i.e. bad stuff happened because you provided Science and people trusted you but the Science was a lie; 2) mad science i.e. you did experiments that were -EV because the experimental procedure caused or risked harm.
With regard to #1 I would caution you that there is a really-obvious failure mode:
Regarding #1, they already do this.
The PIC researchers don't generally get jailed for the bogus fraud, though.
In corrupt countries that's how the police work. You get expropriated and imprisoned but can't complain, since the courts will side against you.
That's bad but it doesn't follow that we should abolish the police. We should abolish corruption and policing is a useful tool for that, in principle.
My point is not that it's impossible to fix academia from outside, just that there are hostile actors there who are very good at rules abuse, and a treacherous epistemic environment, so naïve action that doesn't take those into account risks backfiring.
Based on this and other high profile cases it seems we could have a high standard for proving fraud.
On the other hand in these cases of fraud maybe we wouldn’t have confessions if there were more serious consequences.
The thing I think I have certainly been catching on to over the last eight years is how important culture is for plugging the gaps in laws. It's like this type of fraud should be a career-ending scandal, not necessarily illegal. The law is too blunt an instrument, I think.
Surely anyone who disagrees isn't sufficiently Xist.
Thanks for the article. It exhibits a pattern I've noticed of wanting to signal sophistication and subtlety by injecting confusion and then resolving it.
Here we have an article about a guy who has acknowledged using fake data.
Why does the article waste time discussing accidentally incorrectly performed research?
It’s so the author can navigate the murky waters created by introducing a fairly unrelated topic, then sieving out the original point which anyone could have made in two paragraphs.
Between this, the Alzheimer's stuff, and many others, it seems pretty dire for the "trust the scientists" crowd.
He very explicitly did not admit to using fake data:
Sure, there's a good chance he's lying about that. But it seems like an important distinction.
Read a little further:
Or they have a quota for article length and needed to pad it out.
This is a very common thing you encounter where the first 3-5 chapters of some nonfiction book are compelling and directly relevant to the author's main thesis. Then the rest of the book is only vaguely related to the thesis; mostly it's other things the author has studied and can write competently about.
Publishers are reluctant to put an 80-page book on the shelves even if that's the best version of the book the author can produce.
I've noticed this so much in nonfiction books I've read lately, and a few fiction ones too!
100-120 pages of really amazing insights that are explained and applied in intuitive ways. Then another 100 or so pages of banal platitudes that vaguely follow from the rest. A big one is applying whatever insights they've made to social issue du jour. "Here's how my groundbreaking research into quantum hyperlinking across nonlocal space can help address... climate change." (I made that up, to be clear)
Big ideas don't necessarily need a novel-length treatment to explain in full, even addressing all the possible implications. But selling books is one of the few proven ways to make a buck from specialized academic research (until you have a saleable product, I guess), so that's the mold they'll try to fill.
I basically never get past 1/3 of non-fiction books, but I often feel slightly guilty about it. It seems arrogant to say the last 2/3 is filler or stuff I can work out myself but…