
Culture War Roundup for the week of December 30, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I can see you put lots of thought into this.

It's interesting to think about! I've enjoyed this discussion.

And then who is doing the ECM and other duties?

Ironically I think a lot of this happens ~automatically now (I'm...not sure this is always good!)

Isn't that what fly-by-wire does? Can't we program an AI to go 'if the situation is desperate give it a go, burn out those engines to scrape out a victory'?

Yep, but from what I understand most fighters have a feature to allow the pilot to depart controlled flight, in case he needs to. Can we program an AI to utilize such a feature responsibly? I dunno! I'm pretty sure we can program them to use it, I'm just not sure we can practically teach them when to break the rules. I'm not sure software programmers are the best people at figuring that stuff out. We might be able to do it with Deep Learning, though. But (as I've nodded to before) that essentially involves training the AI to fight the fight you think will happen, which makes me uncomfortable enough when we do it for people. If the AI is capable of flexibly adapting when the actual fight doesn't match our model of it, great! If not, that's when I start to get deeply nervous. People are pretty good at adapting, because they dislike dying.
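
To make the "hard-coded trigger" worry concrete, here's a toy sketch (not any real flight-control API; every name and threshold is something I made up) of what that kind of override logic might look like:

```python
# Hypothetical sketch: a hard-coded gate on exceeding normal flight limits.
# Whoever writes this function is baking in assumptions about what a
# "desperate" fight looks like, which is exactly the inflexibility concern.

from dataclasses import dataclass

@dataclass
class ThreatPicture:
    missiles_inbound: int            # tracked missiles guiding on us
    closest_missile_time_s: float    # estimated time to impact, in seconds

def allow_envelope_override(threat: ThreatPicture) -> bool:
    """Decide whether the AI may exceed normal g/alpha limits -- the rough
    equivalent of a pilot choosing to depart controlled flight or burn out
    the engines to survive."""
    # Hard-coded "desperation" criteria, invented for illustration.
    if threat.missiles_inbound > 0 and threat.closest_missile_time_s < 8.0:
        return True
    # Everything else stays inside normal limits.
    return False
```

A human pilot re-derives that threshold in the moment; this function only knows the scenarios its author imagined.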

Broadly agree with all of your other points about the good features of unmanned aircraft.

Maybe you could use less reliable engines, crank out airframes that aren't expected to last 20 years because they don't constantly need to be flown to train the pilots. We could have the 1942 T-34 of aircraft: a reign of quantity. As far as I can see, Loyal Wingmen cost 1/3 or 1/4 as much as a manned fighter over the whole lifetime. So going from 4 manned fighters to 12 unmanned isn't that unreasonable. You might say their capabilities are inferior, but the F-35 somehow has a shorter range than the significantly smaller MQ-28, so there are swings and roundabouts.
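
Just to sanity-check that trade with the numbers above (normalizing a manned fighter's lifetime cost to 1, a made-up unit):

```python
# Back-of-the-envelope check using the figures in the comment:
# a loyal wingman at ~1/3 to ~1/4 the lifetime cost of a manned fighter.
manned_lifetime_cost = 1.0                  # one manned fighter, normalized
four_manned   = 4 * manned_lifetime_cost    # 4.0
twelve_drones_high = 12 * (manned_lifetime_cost / 3)   # 4.0 -- roughly a wash
twelve_drones_low  = 12 * (manned_lifetime_cost / 4)   # 3.0 -- cheaper still

print(four_manned, twelve_drones_high, twelve_drones_low)
```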

Yeah, from what I understand the US next-gen aircraft (NGAD) is also going in this direction – not necessarily (but maybe?) on the low-engine-life front, but in terms of trying to make cheaper, more flexible airframes. But it's worth noting the MQ-28 can't go supersonic and, from what I can tell, doesn't have a radar or any munitions payload. I'm not super surprised it has a range advantage over the F-35.

I can see the issue here, but you could make an AI subroutine that assesses 'is this really a civilian aircraft?', judging speed, radar imagery, size, IR and visual evidence.

I definitely think you can – and that's what I find a bit concerning. Let's say you program the AI never to take a shot against a transponding civilian aircraft without getting VID. Great: if China finds this out, free BVR shots all day long. Whereas if you have a human in the loop, they can make a judgment call about that sort of thing. (This is particularly concerning, to my mind, given that there is a lot of overlap between civilian and military aircraft designs, particularly with regard to tanker and ISR aircraft.) I definitely agree with you that humans goof stuff like this up; my concern is that the programmers and bureaucrats directing them would be tempted to try to make sure we never shoot anything down by accident again, and then find that we've handcuffed ourselves to an unworkable or exploitable standard. (In my experience with LLMs, they are...not ready to be given access to weapons. I think a combat AI could be programmed to be less error-prone, but that's where my concern about the lack of flexibility comes in.)
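
To make the exploit concrete, here's a toy sketch of the kind of hard rule I mean (all the names and fields are invented, it's not any real ROE logic):

```python
# Hypothetical illustration of a hard-coded ROE gate: "never shoot a
# transponding civilian aircraft without visual ID."

from dataclasses import dataclass

@dataclass
class Track:
    squawking_civilian_transponder: bool   # broadcasting a civil transponder code
    visually_identified_hostile: bool      # has anyone gotten eyes on it?
    radar_signature_matches_fighter: bool  # kinematics/signature look like a combat aircraft

def weapons_release_authorized(track: Track) -> bool:
    # The hard safeguard: no BVR shot at anything squawking civilian
    # unless someone has VID'd it as hostile.
    if track.squawking_civilian_transponder and not track.visually_identified_hostile:
        return False
    return True

# The exploit: an adversary fighter that simply squawks a civilian code is
# immune to BVR shots under this logic, no matter how fighter-like its
# signature or behavior is. A human in the loop could weigh that evidence
# and make a judgment call; this function can't.
faker = Track(
    squawking_civilian_transponder=True,
    visually_identified_hostile=False,
    radar_signature_matches_fighter=True,
)
assert weapons_release_authorized(faker) is False
```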

But why not move faster?

Well, from my perspective, the question is "do we want to risk 100% of our air dominance on a technology that hasn't been used in air combat before," and my answer is "not really." And I'm not selective about this; I think it's good, for instance, that the United States has finally rolled out a new long-range missile, in part because range is good, but also because divergent designs lead to less risk – you have more than a single point of failure. I'm not saying we shouldn't attempt to utilize AI. I am saying that, until we are able to prove it at a very large scale (and ideally in combat), we shouldn't put all of our eggs in a single point of failure, whether it's AI, or a single type of weapons system, or a single plan to defeat any enemy. This is why I prefer loyal wingman and optionally-manned programs to deciding that the age of humans is over – it hedges against known and unknown risks in a way you can't if you don't procure any manned aircraft.