Culture War Roundup for the week of January 15, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


If you're worried that this will lead to lax academic standards or shoddy research practices, I'd reassure you that academic standards have never been laxer and shoddy research is absolutely everywhere, and the existence of review boards and similar apparatchik-filled bodies does nothing to curb these. If anything, by preventing basic research being done by anything except those with insider connections and a taste for bureaucracy, they make the problem worse. Similarly, academia is decreasingly valuable for delivering basic research; the incentive structures have been too rotten for too long, and almost no-one produces content with actual value.

Whew, lots of thoughts. Let's start with total agreement that academic standards have never been laxer, that shoddy research is absolutely everywhere, that review boards and the like have done nothing to curb it, and that almost no one produces content with actual value. Moreover, I would agree that the incentive structures have been too rotten for too long, and I think that this is a huge driver of the previous four items. "Publish or perish" has crept ever earlier, and I've actually stopped going to conferences, due to the flood of interestingly-titled talks that end up being, "So, I'm an undergrad, and this is really preliminary work, and... [total garbage]." The undergrads feel like they have to publish bullshit to get into grad school, the grad students feel like they have to publish bullshit to get a post-doc, the post-docs feel like they have to publish bullshit to get a professorship, and the assistant professors feel like they have to publish bullshit to get tenure (after tenure, paths tend to bifurcate a bit more, it seems), so the assistant professors are more than happy to push everyone down the chain to go ahead and publish their bullshit (so long as their name is on it and it adds to a count on Google Scholar). It is all for the sake of making the number go up rather than advancing knowledge.

This trend has been decades in the making, but I would argue that it has also been exacerbated by one particular huge drop in the barrier to submission - the rise of China. I don't know if they've really subscribed to our fucked up incentive structure, because I just don't know as much about how their unis work, but the regional flood has gone global. Not to say that there is zero good work coming from there (I finally recommended acceptance of my first paper from a Chinese group, but it's unsurprising that it was an exceptional case, as the main guy in that group is good enough that he's now taken a position at an excellent Western uni), but the sheer quantity of digital ink being pushed out from there is astounding, and the vast majority of it is undergrad-tier bullshit. It really makes me pause when you say:

If anything, by preventing basic research being done by anything except those with insider connections and a taste for bureaucracy, they make the problem worse.

I really go back and forth in my head. Does having a sort of mental rule that nearly automatically rules out all Chinese work help? Probably so, like 99% of the time. Does having a sort of mental rule that if the first author is an undergrad from a low-tier uni, the paper is probably shit help? Probably so, like 99% of the time. I joke sometimes that I've never even seen a contemporary masters thesis that was at all interesting (there are some legendary ones from the past, by legit giants in their fields). Having some form of statistically-informed heuristic really saves a lot of time and effort that would otherwise be 99% wasted. This sort of "credential chauvinism" obviously won't stop the flood; incentive structures for conferences/journals are fucked up enough that there's zero chance any will adopt a position of basically, "If you're an undergrad or from a Chinese university, your submission will be auto-denied." But such heuristics are adopted very pragmatically, almost out of necessity, by the good researchers who just don't have that much time to waste. (I know the irony of writing this in a bloody comment on a rando internet forum.)

My personal strategy is basically defection/free-riding. I've cultivated a network of really talented profs whom I personally know and have spoken with enough to know how they think, a not-insignificant percentage of them assistant profs. They basically feel forced by their incentive structures to wade through all of the crap and constantly engage with all the conferences and reviewing and editing and shit... and they like magically filter through it all and bring up the small number of diamonds in the rough. Is it the best-tuned filter? Possibly not. Might I gain some additional insight by wading through more of it actively? Possibly. But damn if I'm going to ever feel like the cost/benefit tradeoff is worth it anytime soon. But the nature of defection/free-riding is that not everyone can do it without catastrophic consequences.

Getting back to the point of LLMs, this is what I'm worried about. Sure, it'll make it harder for dyed-in-the-wool bureaucrat-and-nothing-more folks, but it'll also make it harder for us. It'll be Eternal September for academia. What possible filters can stand?

Meanwhile I expect harder fields like biomed and material sciences to (continue to) be supercharged by the capabilities of ML, with the comparative ineffectiveness of institutional research being shown up by insights from DeepMind et al.

I don't have as much to say on the social sciences bit, and you may well be perfectly on point there. For the harder sciences, I'm really not sure where this will go. Traditional work has been extremely structured in form, whereas I subscribe to the Nick Weaver School of ML in Research, which is: "ML is great for when you want to model something that you have good reason to think is structured, but you have no idea how to model it, and you're okay with it being fabulously wrong some percentage of the time." Biomed and materials science are perfectly positioned to reap the gains of this. Those areas have fantastically complicated underlying structures, and at least the experimental folks don't much care how we get a half-decent idea of what to try to build; so long as each iteration doesn't take too much time, we can just try a bunch of candidates and see what works out. Huge potential for big experimental gains, and a decent chance that experimental gains will subsequently push the theory forward around those new lodestones. My view of the trends in institutional research is that they've gone full steam ahead at trying to embrace ML for every problem under the sun, even when it doesn't make much sense. But for every work that gets accelerated, every wonder material that gets developed, how many shit-tier "ML" papers will be submitted/published in the field that turn out to be just awful, obnoxious noise? (While this last paragraph shares the earlier concern about the flood of crap, it's a bit distinct, as it's less focused specifically on the administrative side of crap production/evaluation.)