This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes -
https://nationalpost.com/opinion/colby-cosh-ubc-covers-for-bad-science-in-homeless-cash-transfer-study
A major university (in Canada) published another one of those studies where they give homeless people money and see if they spend it on crack or job applications. Mostly this was met with admiration and joy by the journalist class. The more right-leaning publication I posted above is more skeptical, pointing out some of the potential problems with the study:
...
This isn't that interesting on its own; it's just a bad study done in Vancouver. What I found interesting was that the writer starts with a brief summary of the replication crisis, for an audience that is presumably not intimately familiar with it:
There has been, and is, a lot of discussion here about relaying rationalist concepts or ideas to outsiders or average random people in Mottizens' day-to-day lives. With the rise of culture war divisions, and especially the political rhetoric surrounding the Coronavirus lockdowns and other policies, I'm wondering what approach, if any, you use when talking to acquaintances or friends who skew liberal, and who are broadly happy to have the inertia of universities and the intelligentsia on their side, to explain that you often reject social science research or findings unless you have personally vetted them, without sounding to them like a low-IQ, backwater-hick, redneck, science-denying flat-earther. I suspect that this is impossible.
I do not, because all the Motte has taught me is that it's pointless. You don't convince people; you shift the ground under their feet and they go along with it.
It is in fact impossible, because for a given value of "denying science" you are denying science. Remember Fauci saying "I am science"?
There are probably some good heuristics for cutting through dubious social science publications, ranging from simply ignoring it, to ignoring journalism about specific studies while perusing the study yourself. It seems the op-ed writer didn't understand some basic points about the study. He seemed critical of the (not unusual) heavy screening/filtering of participants. That only means it isn't a study about homelessness in general, which isn't necessarily good or bad. The NP author also seemed to imply a (not unusual) roughly 50% loss to follow-up. This didn't happen; the half of people they lost contact with were never enrolled in the first place. They did exclude people with severe drug and mental health problems for ethical reasons, fearing overdose. Nevertheless, ~15% of the participants had moderate drug problems, and about 50% had mental health diagnoses. So this seems to make the case for ignoring journalists. The study results seemed to indicate that giving cash to a well-screened subset of the homeless reduces the burden on the State and saves money. It's one study. And I'm reminded of this Oren Cass article on "Policy-Based Evidence Making". So while I'm optimistic, I'm only about 2% swayed. More study is needed.
https://www.nationalaffairs.com/publications/detail/policy-based-evidence-making
Well, the easiest person to fool is myself, so I'm generally skeptical of unreplicated social science (there have been some fantastic, salacious recent scandals!). Plenty of liberals write books and papers making the case for skepticism of social science, so I just mention those books, the reproducibility crisis, the math behind it, things like Bem (IIRC there is also a study that 'proves' you can age a year backwards), the recent scandals, etc., and the conversations are pretty normal.
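For anyone who wants "the math behind it" in concrete form, here's a minimal sketch of the usual false-discovery arithmetic, with every input (alpha, power, prior) chosen purely as an illustrative assumption rather than an estimate for any particular field:

```python
# Hypothetical illustration of the "most published findings are false" arithmetic.
# All numbers below are assumptions for the sake of the example.

alpha = 0.05      # false-positive rate per test at the conventional threshold
power = 0.5       # probability of detecting a real effect (optimistic for social science)
prior_true = 0.1  # fraction of tested hypotheses that are actually true (assumed)

true_positives = prior_true * power
false_positives = (1 - prior_true) * alpha

ppv = true_positives / (true_positives + false_positives)
print(f"Share of 'significant' findings that are real: {ppv:.0%}")
# With these assumed inputs, only about half of significant results reflect true effects,
# before any p-hacking or publication bias is taken into account.
```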
I think your best appeal is to basically say that what they think of as 'science' is actually popular media summaries and partisan officials / profit-driven corporate PR / etc. selling their interpretation of the actual science, and there are lots of reasons not to trust those summaries and to wait until you can read the original papers.
I don't really see the objection to the original study? Sure, if they claimed to be testing all homeless people then they're lying, and probably some popular media outlets reported on it that way and were lying, but it's pretty normal to restrict your sample in various ways to reduce noise or just to study a specific thing in depth. You just have to be upfront about what question you're actually answering, and this still seems like a good question to answer.
High subject mortality is definitely a source of potential bias, if you have a reason to think that the dropouts are different from the continuers in an important way; but again, that's a caveat you should be putting in the results section and discussing the implications of, not a failure of the methodology or an invalidation of the findings.
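As a toy illustration of that mechanism (all numbers hypothetical, not drawn from the study): if the people lost to follow-up differ systematically from those who remain, a completers-only comparison can look rosy even when the intervention does nothing.

```python
import random

random.seed(0)

# Hypothetical cohort: each person has a latent "stability" score that also drives
# their outcome; the intervention has zero true effect in this simulation.
n = 1000
people = [random.gauss(0, 1) for _ in range(n)]

# Assumption: the least stable participants are the most likely to be lost to follow-up.
def followed_up(stability):
    return random.random() < 0.3 + 0.4 * (stability > 0)

completers = [s for s in people if followed_up(s)]

print(f"True mean outcome (everyone): {sum(people) / len(people):+.2f}")
print(f"Mean among completers only:   {sum(completers) / len(completers):+.2f}")
# The completers-only mean is biased upward even though nothing was done to anyone,
# which is why heavy attrition matters if dropouts differ from continuers.
```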
As usual, I find it helpful to refer back to Debunked and Well-Refuted. Yes, lots of science is done poorly, but lots of it is done well, or still gathers relevant data that you can learn true things from despite the flaws. It's fun and easy to nitpick flaws in any paper because practical restrictions prevent anything from being perfect, and it's hugely useful to do that preferentially to the papers that don't support your positions, but it's a dangerous game to play.
We agree that some science is done well, meaning that it illuminates new technology that actually works.
We agree that some science is done... "poorly", meaning that the technology it purports to reveal does not actually exist.
The current norm is that all claims of new technology propagated through "official" channels are treated as true unless arbitrary standards of proof can be met to discount them. This norm is asinine, and examples overflow of claims of "new" "technology" propagating for decades, perhaps even for centuries without the slightest shred of actual function, even in the teeth of explicit and overwhelming disproof.
This junk science has massively reshaped our entire civilization, and much of it serves as the foundation of our social order. The people who produce it divert vast amounts of resources to themselves from the rest of us, use them to secure completely unjustified amounts of power over the rest of us, and suffer zero accountability for this behavior.
It is imperative that they be stripped of their resources and positions, permanently excluded from all influence over our society, and the changes they made be rolled back.
Sure, but we're not talking about popular media summaries or government agencies or w/e, we're talking about critiquing the original publication directly here.
I think there is an emergent strategy bubbling with dissidents who are not neck deep in grifting.
Ryan Faulk of the Alternative Hypothesis mused a lot on how to influence people into believing 'race realism' or HBD. One of the key points he raised was that it seemed like distrust of the mainstream was a prerequisite for belief in HBD. The important part here is that the 'mainstream' doesn't necessarily refer to academia, but news media. As soon as people associate the mainstream press with horseshit, you, as a dissident, have more inroads with them.
It looks like more people are finding their way towards this mechanism. It wouldn't be the first time the more respectable and socialized 'dissidents' find their way to the path trodden by white nationalists. Now we can wait for the respectable dissidents to utilize this mechanism and ride it victoriously to crush all the fake mainstream narratives, gain popular support and save the West!... Or maybe not.
A key element that keeps people from going all the way is right-wing media. Ultimately their cause is based on the same environmentalist priors that fuel the 'left', and it's guarded just as religiously. Not only that, there is the added weight of getting scolded by one's own side when going too far outside the bounds of the mainstream. On top of that, there is a self-congratulatory perception that you are proving your side to be good to the outgroup when you toss your own into the fire for any racist heresy. A sort of sacrificial lamb that you hope will quench the hate directed your way.
In any case, the gate that holds truth at bay isn't locked by what right wingers in general perceive to be the outgroup. If that were the case there is no way it could remain locked. The key to the gate is held firmly by a trusted member of the ingroup. And it will stay that way.
Definitely not impossible.
It helps to have more science cred than they do, and just generally not to fit the low-IQ backwater hick stereotype. It also helps if you reject science for scientific reasons, by doing things like citing science on the failures of science, or making specific critiques about how they used the wrong statistical test or whatever.
It's also useful to note that there are many aspects of science that these people reject themselves. Off the top of my head, there's the science relating to IQ, homosexuality, most of the COVID stuff if you pick the right point in time. Plenty more if you're willing to pick and choose bits that they will disagree with. "Science quickly becomes unscientific pseudoscience when touching on political hot topics, for example ".
What is the science related to homosexuality?
Twin studies demolish the whole "born this way" thing, and show that it's only "born halfway there". Even less for lesbians.
The "scientific consensus" is very against conversion therapy, but the science itself is a similarly mixed bag as that on therapy for something like alcoholism. Few people go from alcoholics to never touching a drink again, and a lot of people end up "relatively unrecovered", but nonetheless therapy is considered "effective" for alcoholism because a good portion of people end up drinking less. The science on conversion therapy also shows some people showing complete success, and a lot showing meaningful-but-incomplete success, and also a lot of failure -- and a lot of the failure is in the over the top terrible attempts at therapy which would fail at any therapeutic target. Yet conversion therapy gets touted as not only "difficult and likely to fail" but blanket "ineffective" because "you can't change what you're born with" as if it's an immutable fact, even though the science flat out contradicts this.
The science also says that homosexual men are higher in narcissism, which isn't very flattering and which people probably won't like to admit, and higher in disease rates, which is sometimes admitted but often gets shoved away too.
There might be more; this is just what comes to mind off the top of my head. The point is that as a topic, this is not one where truth is welcome, so the science regularly gets avoided or branded "not REAL science". Imagine a study comes out tomorrow showing that 80% of gay people left in charge of children rape them, and it's bombproof. Do you see people saying "Oh, shit, I guess we need to stop letting gays watch kids", or do you see people screeching and revolting against the potential truth being suggested? That's the test for whether people are actually doing science or science denial. If a new paper came out in physics suggesting perpetual motion was possible, and actually demonstrated it, physicists wouldn't screech; they'd update.
Twin studies show that it is primarily not genetic, which the "born this way" crowd hates to hear.
Did you look at the "randomized" cohort breakdown of the ~half of the already heavily screened applicants (themselves drawn from an already screened pool, homeless shelters) they were able to stay in contact with for the length of the study? There are a few blazing red flags of confounded and polluted data, e.g., gender split, first-time homeless, "want to be employed," annual income, receiving income assistance, receiving disability assistance, and more.
If you did a "randomized cohort controlled" study and your demo breakdown in the participants who lasted to the end of it were this different even after you have heavily pre-screened an already screened group from which you recruited participants, you should go back and try again because your randomization process either didn't work or your methodology influenced the results to such an extent as to confound the effect you were "studying," especially given the statistical power were talking about.
As far as I can tell, they're using individual participant outcomes while randomizing at the cluster level, using an already small sample size and calculating the statistical significance based on the participant n instead of adjusting downwards for the correlation likely induced by the clustering itself. They have different inclusion criteria for the control/cash groups, i.e., the control group had to complete a post-survey whereas the cash group was included if they simply received the cash, which is troubling because the groups had 20% different response rates. That makes me think that if they had used the same inclusion criteria, the left-over numbers either didn't produce significance even with their p-games, or produced a result they didn't like. They fiddle with a bunch of other stuff in odd ways which makes me suspicious they're fiddling with an agenda, but I'm not diving into the appendix info, and I'm not going to request the raw data they claim they will give out.
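For reference, the standard correction for analyzing individuals under cluster-level randomization is the design effect, roughly 1 + (m - 1) * ICC. A rough sketch with invented numbers (the real cluster sizes and intracluster correlation would have to come from the paper itself):

```python
# Hypothetical illustration of the cluster-randomization design effect.
# None of these numbers come from the study; they are assumptions for the example.

n_participants = 100   # analyzed individuals (assumed)
cluster_size = 10      # average participants per randomized cluster (assumed)
icc = 0.05             # intracluster correlation (assumed)

design_effect = 1 + (cluster_size - 1) * icc
effective_n = n_participants / design_effect

print(f"Design effect: {design_effect:.2f}")
print(f"Nominal n:     {n_participants}")
print(f"Effective n:   {effective_n:.0f}")
# Treating the nominal n as if observations were independent overstates precision;
# standard errors should be inflated by roughly sqrt(design_effect).
```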
This study has an obvious agenda. Its purpose is to affect public policy, every methodology decision will bias the results in the direction the authors want, and the abstract is written for journalists who share that agenda to push it, likely glossing over all of the caveats the authors littered throughout the paper, caveats that render its application to policy all but worthless even if the data weren't poor (and they are).
It's a made-for-journalists "study" designed to create evidence to push an agenda. The study is very underpowered even if they didn't expect high attrition rates. These people aren't morons; they know what they're doing and it's high time we stop pretending they don't.
no, the humanities and "reputable scientific research" published in "reputable" journals earned skepticism if not outright hostility all on their own with this published study being yet another example of why
arguing that this study is roughly up to the standards of this area of research and writing isn't a defense of the study, but a condemnation of it, its authors, and the journal which published it
Yeah. I feel like what we now call "the homeless" should be split up into two groups: "those down on their luck" and vagrants. People in the first group need help, and helping them is the economically (forget morally) right thing to do. With some money these people will get back on their feet and continue contributing to society. I'd wager they make up 80%+ of the current homeless, but they're not very visible. Group 2 is the vagrants: no amount of money etc. will help them, and they are the group that makes the lives of ordinary citizens worse on a day-to-day basis. We should be a lot more strict and come down on their ass like a ton of bricks the moment they start doing stuff that makes society worse off.
Conflating the two groups helps nobody except these vagrants and those who want to virtue signal about how good they are to these vagrants.
Correct, it's more like both hands, both feet, and possibly a small elephant on the scale.
They excluded the homeless who are the biggest problem, and then, when half their study population disappeared with the money, that didn't count against their claim that the results were good.
But how about this:
Those people will eventually die, and stop being a problem.
Whereas, if we lived in a world that gave everyone money as soon as they became homeless, then in that world there would be no such thing as people who have been homeless for 5 years without getting that money.
In a world where the policy is 'people get money to help recover as soon as they become homeless', there would only be people who got money as soon as they became homeless, so those are the people you'd want to study to understand how a world with that policy would look.
Emphasis on "eventually". These people would in a less functional society just die from their numerous public drug overdoses, or freeze to death in winter, or choke to death on their own vomit drunk off their asses under a bridge, or just starve. I suspect this is what happens to them in China or the third world. In the USA or Canada, however, they are rescued from the brink time and again at either taxpayer expense or by private charities.
So will we all.
You're assuming that if you gave money to everyone as soon as they became homeless, this would solve the problem, and trying to avoid that objection by technically qualifying your statement to not actually imply that.
I am not sure how this statement is any different from 'you are secretly making a different argument than the one you actually said, and that secret argument is wrong, so you are wrong despite the thing you actually said being true'.
I have no idea how to respond to that other than saying 'no, I'm not'.
The argument you're implying is that if we gave people money as soon as they became homeless, there would be far fewer 5-year homeless; that is, we'd prevent long-term homelessness by giving money to the short-term homeless.
The argument you stated is a tautology -- if we gave people money as soon as they become homeless, there'd be no long-term homeless who hadn't gotten the money.
The point is that this study is precisely what we would do to find out what would happen in that world.
That's the reason you do studies.
Yes, or they never had it in the first place.
Consider it shouted.
It would be a bad idea to try to replicate this study with the flawed methodology, because such a "replication" would be pointless. "We gave money to a bunch of short-term homeless people who didn't have any of the serious problems associated with homelessness. We didn't hear from about half of them again, but of those we heard from, they're doing better" demonstrates very little.
The purpose of this "study" was to manufacture evidence for programs to give cash to homeless people. It wasn't intended to be actually valid.
When you've seen enough of the sausage being made at academic journals, you're not surprised about the occasional bullshit thing that happens. I could see being that reviewer, thinking, "Whelp, their study design is pretty bullshit, but there's not really a way to say that it's wrong, especially if they've detailed it and are careful with the way they write their conclusions." Of course, just because they didn't write a totally bullshit conclusion in the article itself doesn't mean that there is anything stopping a mountain of journalists from filling in that work for them in the popular media. Like, what are the actual words that the reviewer is supposed to say to recommend rejection? And even if we could come up with some good ones, what good would they actually do in the face of a politicized editor who is willing to handpick reviewers who are sympathetic to the cause (so such good words would never be spoken) or to just run roughshod over one complaining reviewer (if one existed)?
I don't think the design was fraudulent or the researchers stooges, but the reporting of anything will become trivial if you add enough conditions and disclaimers. Obviously if you give some people thousands of dollars, /some/ of them will be /some/ degree of better off. But the bridge between the magnitude of that benefit from the study to "And that's why this would be a good policy for governments to implement" is weakened by every one of those conditions and disclaimers which reduce the level of generality the study is good for.
No, but the design was so obviously flawed before the data was gathered, and ignoring those they lost track of on top of that, that it should have been more than enough for any reputable journal to reject the paper. I'm personally suspicious even of the good faith of the team here; these flaws are so large and obvious to outsiders that it seems impossible that people doing professional research could just accidentally make them.
I think the opposite is true, though, in this case. You don't need a well-designed study to tell you that drug addicts, the mentally ill, and people who prefer sleeping on the street when shelter beds are available probably aren't going to make the wisest decisions when given large sums of money. We already know that; the more interesting question is what happens if we give people who don't have all these problems large sums of money, because homeless people are usually lumped together into one homogeneous mass of derelicts.
Yeah, the critique felt a little weak, specifically because the primary critique is noted in the study itself:
A more convincing approach would have been to focus instead on this statement, found in both the news story and the press release but missing from the study itself:
This statement does a lot of work in justifying the policy implications of cash infusions, because if 31% of the homeless passed their entrance requirements, and it's a representative sample, then 31% or more of the homeless at large could conceivably be impacted.
I have no idea if his assertion is true or not, but that seems the most potentially dubious framing.
I think that's a bit of a fig leaf: the authors knew or should have known that the paper would be portrayed without full disclosure of its limitations, including in its own abstract and in the UBC piece itself (which only mentions that the study excluded "severe levels of substance use, alcohol use or mental health symptoms", but not that it excluded the long-term homeless). Neither mentions the further filtering to only the sheltered homeless, nor the loss to follow-up.
Summaries by nature can't include all details, but people writing studies know what will get left out, and should recognize when that's going to be highly dishonest.
((There are other problems: the use of the two preregistered analyses that are the weakest for predictive power and least repeated in the news coverage ("subjective well-being and cognitive outcomes"), followed by a mass of 'exploratory' analyses that are repeated heavily but also scream garden of forking paths, especially combined with the condition grouping and when the study power looks like this. In addition to the attrition before study criteria were applied, the cash group had vastly lower response rates (74% vs 95%) on the 1-month survey than the control group did, which probably didn't have a huge impact on the statistical analysis but doesn't seem to get mentioned in the main paper proper at all, just in the appendix. I also don't have a good mental model for the impact of "In the main analyses, participants in the cash group were included in the final sample if they received the cash, while participants in the control group were only included if they completed at least one follow-up survey.", but my gut check is that it's not a good sign, combined with that extra 21% dropout rate for the 1-mo survey.))
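A quick hypothetical simulation of why a pile of uncorrected exploratory outcomes is worrying in itself (a generic illustration, not a claim about this paper's actual outcome count): a null intervention with twenty outcomes and no correction will still produce 'significant' hits most of the time.

```python
import random
import statistics

random.seed(1)

# Hypothetical simulation: a null intervention, many exploratory outcomes, no correction.
def fake_study(n_per_arm=50, n_outcomes=20):
    """Return how many of n_outcomes cross |t| > ~2 purely by chance."""
    hits = 0
    for _ in range(n_outcomes):
        cash = [random.gauss(0, 1) for _ in range(n_per_arm)]
        control = [random.gauss(0, 1) for _ in range(n_per_arm)]
        diff = statistics.mean(cash) - statistics.mean(control)
        se = (statistics.stdev(cash) ** 2 / n_per_arm
              + statistics.stdev(control) ** 2 / n_per_arm) ** 0.5
        if abs(diff / se) > 2:  # roughly p < 0.05, two-sided
            hits += 1
    return hits

runs = [fake_study() for _ in range(200)]
print(f"Simulated studies with at least one 'significant' exploratory result: "
      f"{sum(r > 0 for r in runs) / len(runs):.0%}")
# With 20 uncorrected outcomes, most null studies still report something 'significant'.
```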
Look at the other two portions of the study: the authors did a couple survey-style efforts specifically to form approaches to "frame the benefits of the cash transfer to make it more palatable to the public, with the goal of improving public support for a cash transfer policy". Which, in turn, again only mentions filtering for "severe level of substance use, alcohol use, or mental health challenges", without mentioning excluding the long-term homeless.
This is pretty standard! For a different sort of culture war issue, I'd point to this recent discussion about eating beef. There are, if you dig into it far enough, quite a lot of disclaimers about how this is really talking about 24-hour recall rather than any more holistic analysis of consumption. Inconveniently, the study didn't actually ask about meat at all, so instead the analysis was filtered through one database to make predictions for the likely meat portion of self-reported food intake, which still didn't say anything specific about beef, so the authors further just cut in half everything that wasn't explicitly spelled out as one type of meat or another. It's all there, and unlike most bad actors in this space it's not even paywalled!
But ultimately, this study methodology still requires the author to look at (trash-quality) data claiming that X people consumed Y ounces of meat that the authors believed (for some reason?) was 50% beef, and that this was equivalent to X people consuming Y/2 ounces of beef individually. And while st_rev was responding to the NYPost, which one could quite plausibly expect to be unusually useless even by popsci standards, it's not like the popsci groups are doing any better.
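To make that transformation concrete, here's a tiny hypothetical sketch of the described pipeline; the field names and portion sizes are invented for illustration and are not taken from the study:

```python
# Hypothetical sketch of the imputation step described above; records and values are invented.
records = [
    {"person": 1, "predicted_meat_oz": 6, "meat_type": "beef"},
    {"person": 2, "predicted_meat_oz": 8, "meat_type": "unspecified"},
]

def imputed_beef_oz(record):
    if record["meat_type"] == "beef":
        return record["predicted_meat_oz"]
    # The contested step: meat not spelled out by type is simply assumed to be 50% beef.
    return record["predicted_meat_oz"] / 2

for r in records:
    print(f"person {r['person']}: imputed beef = {imputed_beef_oz(r)} oz")
```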
These social scientists aren't morons, despite their best efforts. The people actively studying how best to frame the benefits of an intervention have to have at least considered how they're going to describe the intervention. This doesn't even mean that the general thrust of these studies is wrong; they're all too underpowered to tell us that they're even lying, once you move the fig leaf. But that's pretty damning for the broader field of science.
This is super wrong. The overwhelming majority of the homeless have mobile phones, with surveys ranging between 80% and 95% penetration. This is because the Feds, local municipalities, and charities have special programs to equip them with phones.
Given the above, why would I put any stock in your critique of the critique of the scientific paper?
Is your response really "sure, the overwhelming majority of the homeless might indeed have mobile phones, but a large fraction of them just carry non-functional phones with no plan or data, for no useful purpose"? Is this really the point you are trying to make here?
Yes, only a majority of the homeless have on-demand internet access wherever they go; it is no longer an overwhelming one. That's why many of them hang out around free wi-fi, yes. But so what? What does it have to do with the argument you were making? Here, let me helpfully quote you:
Observe that the quote from the study you just gave clearly and explicitly contradicts what you said earlier, and supports what I replied to you with.
As a bonus point, observe that out of the homeless who are interested in using the internet at all (66%; I assume that the third who didn't use the internet at all within the past 3 months simply do not want to), more than three quarters do have it on their phone on demand.