Culture War Roundup for the week of February 19, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Another problem is that there are more scientists than plausible paths of scientific enquiry.

Philip Kitcher has some useful insights here on the division of epistemic labour in science. In short, it's not always ideal to have scientists pursuing just the most plausible hypotheses. Instead, we should allocate epistemic labour in proportion to something like expected utility, such that low-probability high-impact hypotheses get their due. Unfortunately, this can be a hard sell to many researchers given the current incentive structures. Do you want to spend 10 years researching a hypothesis that is almost certainly false and is going to give you null results, just for the 1% chance that it's true? In practice, this means science probably skews too much towards epistemic conservatism, with outlier hypotheses often being explored only by well-funded and established eccentric researchers (example: Avi Loeb is one of the very few mainstream academics exploring extraterrestrial intelligence hypotheses, and he gets a ton of crap for it).
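To make the contrast concrete, here's a toy sketch (my own illustration with made-up numbers, not Kitcher's actual model): three hypothetical hypotheses, with a fixed pool of researchers split either by plausibility alone or by something like expected utility.

```python
# Toy illustration (made-up numbers): allocate a fixed pool of researchers
# across hypotheses either by plausibility alone or by expected utility
# (probability of being true x payoff if true).

hypotheses = {
    # name: (probability_true, payoff_if_true) -- all values invented
    "safe incremental extension": (0.60, 1.0),
    "moderately novel mechanism": (0.20, 5.0),
    "long-shot paradigm shift": (0.01, 500.0),
}

POOL = 1000  # total researchers to allocate


def allocate(weight):
    """Split the pool in proportion to weight(p, payoff) for each hypothesis."""
    weights = {name: weight(p, payoff) for name, (p, payoff) in hypotheses.items()}
    total = sum(weights.values())
    return {name: round(POOL * w / total) for name, w in weights.items()}


print("by plausibility:    ", allocate(lambda p, payoff: p))
print("by expected utility:", allocate(lambda p, payoff: p * payoff))
# By plausibility the long shot gets ~1% of the pool; by expected utility it
# gets the majority. That gap between what's individually rational to work on
# and what a community-level allocation would look like is the point here.
```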

There are also of course some fields (maybe social psychology, neuroscience, and pharmacology as examples) where the incentives stack up differently, often because it's easy to massage data or methodology to guarantee positive results. This means that researchers go for whatever looks bold and exciting and shiny because they know they'll be able to manufacture some eye-catching results, whereas a better division of epistemic labour would have them doing more prosaic but valuable work: testing and pruning existing paradigms and identifying plausible mechanisms where they exist (cue "it ain't much but it's honest work" meme).

All of which is to say, I think there's plenty of work to go around in the sciences, enough to absorb all the researchers we have and more, but right now that labour is allocated highly inefficiently/suboptimally.

I wonder if there's a labor quality issue here.

At the Google and Facebook of old, engineers had near-absolute freedom to choose what they wanted to work on. Google famously had 20% time; Facebook had a fairly permissive evaluation system that let you go do things like make desktop Linux better for engineers, if you could argue that it was impactful. They were trusted to do this because hiring filtered for very talented and self-motivated people. The filter was so effective that you could let the performance evaluation process weed out the slackers. As a result, you got the best match between what people were working on and what they were personally motivated to work on. People put in long hours because they really wanted to see their idea working and out in the wild.

From what I've absorbed from fiction, it seems academia used to kind of be that way? Tenure was used to prove out that you were the real deal, and then you just worked on whatever tickled your fancy. My guess is that as academia grew and grew, you got more slackers, and no real weed-out mechanism, so you end up with lots of gatekeeping on what kind of research gets done.

Or maybe professors have had to specialize in grant-writing for a very long time, I don't know the field.

Academia is many things, but I don't see people going for (and getting) professorships as slackers. They are almost invariably smart, hard-working people who could be making well into the six figures or more in industry. (Note: this is for what I'll just call real fields.)

The big issue is that it's so astoundingly competitive to get any kind of professorship, let alone a desirable one, that intellectual conservatism reigns supreme. Going off on some tangent that has high potential but is unlikely to bear any fruit is just too risky.

It’s a quantity AND a quality problem. Bear in mind that academia is universal, so once a problem is solved it stays solved, and the first to solve it gets 99% of the credit.

In most fields, there are a few approaches that look like they will bear fruit. I refer to these as ‘plausible’ above. The thing is, if you are not top-tier, you really don’t want to work on these, because other better-funded labs with cleverer researchers are already on it. But you don’t want to take the chance of going out on a limb either. What you want is something closely enough related to the sexy thing that it will get you money and prestige, without getting you steamrollered. In the same way that you wouldn’t try to DIY your own internet search algorithm these days, but you might try to make something useful that has slipped under Google’s notice and get them to buy you out.

The funders, who are somewhat out of touch, have to allocate research money in this environment.

One stable equilibrium is to only fund the top-tier people, on the assumption they are the most likely to make plausible breakthroughs. This is sort of what we already do. The downside is that you get groupthink in the big players and you miss out on the occasional transformative upstart. The other downside is that research is prestigious enough, and requires so much investment from would-be researchers, that you have a vast pool of no-hopers who will destroy themselves trying to make it to the top tier.

What happens in practice is therefore that we funnel almost all the money to the big players and keep a secondary fund for any interesting-looking second tier work. The second tier is therefore a desperate scrambling mess of people trying to prove that their unlikely discovery will change the world. Many of them even delude themselves into thinking it’s true.

The decline in academia you note is mostly a function of the number of plausible research directions going down as the number of academics goes up. The result is a bunch of second-raters competing for scraps.

(Sorry, this is longer and ramblier than I hoped. Also, I should clarify that I was one of said second-raters. It’s not meant as an insult, just the sad result of hope meeting reality.)

EDIT: you got two replies in 10 minutes. Can you spot the triggered (former) academics?

All true, but you already explained why it can’t be otherwise. Scientific labour mostly isn’t allocated; it’s chosen, and nobody is going to willingly sign up for a 99% chance of unadulterated failure. Even if we had the resources to make such a life cushy, which we don’t.

I don’t think it’s an insuperable problem. A difficult one to be sure, but academic incentive structures are a lot more mutable than a bunch of other social problems if you have the political will. There’s also the fact that the current blind review journal-based publishing system is on borrowed time thanks to advances in LLMs, so we’ll need to do a fair amount of innovating/rebuilding in the next decade anyway.