Culture War Roundup for the week of December 30, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Hahaha, pilot mafia throttling the antigravity program so they can still buzz the control tower is...not one of the craziest theories I've seen.

(The truth is though that I don't think unmanned aircraft are yet in a position to replace manned aircraft, and I'm not sure they will be able to fully barring, basically, AGI.)

I'm not singling you out here, because I hear this a lot, and I wonder... what is it that pilots do that an AI can't that compensates for the training expense and kinematics costs of having a pilot? The pilot can't do damage-control on the plane mid-flight. They don't pick out targets; the sensors achieve lock-on. They're not tactically superior; that's been shown in dogfighting simulations, even between equally performing jets. It's all fly-by-wire these days, so their muscles aren't necessary.

I guess a human might be better at the ethics of 'do we bomb this truck or not, given how close it is to civilians?' But again humans have high variance and it's not clear that this is so.

what is it that pilots do that an AI can't that compensates for the training expense and kinematics costs of having a pilot?

Here is what I think the answer is: recognizing jamming and adapting immediately to new, uncatalogued threats or situations. You're correct that at this point modern fighters are basically fusing a human with "AI", so the question of "what can humans do better anyway?" is a very valid one.

Modern jamming is extremely good. It's hard for human pilots to tell when they are being jammed. [Edit: or at least that's the point of modern jamming/EW/deception programs such as NEMESIS as I understand it, but, to be clear, I've never been in a position to personally test this, so take all of this with a grain of salt, bearing in mind that I just read about this stuff for fun.] In fact I think there is a decent chance that it's over for the radar-only bros in the Next Big War, both because of jamming and because using your radar makes you a big ole' target. (F-22 hardest hit!) But I think a human has at least a chance to recognize and understand that he is being jammed even if his fancy computer and his radar do not. That's not really true if you just have the fancy computer and the radar: if the 200 radar targets that suddenly appear on your screen have already gotten past your hardware and software's anti-jamming features, your computer is going to think they are real. A pilot will know that they aren't. Now, as AI gets better and better, this will be less of a problem - maybe you can do deep learning on jamming, maybe you can put Claude in the cockpit and he would realize the 200 targets are fake.

TLDR; we already have AI in the cockpit now and it can almost certainly get fooled by modern jamming/ECM, so it's good to have another set of eyeballs in the cockpit.
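
To make the "another set of eyeballs" point a bit more concrete, here's a toy sketch (purely illustrative, with invented names and thresholds, and nothing to do with any real EW system) of the kind of plausibility check a pilot applies almost for free: if the number of apparent contacts jumps from a handful to hundreds in one sweep, the picture is probably wrong, not the sky suddenly full.

```python
# Toy illustration only: a crude plausibility check on a radar picture.
# All names and thresholds here are invented for the sake of the example.

def picture_looks_jammed(contact_counts, max_plausible=20, spike_factor=5):
    """Flag the latest sweep as suspicious if it shows far more contacts
    than is plausible, or far more than the recent sweeps did."""
    if not contact_counts:
        return False
    latest = contact_counts[-1]
    if latest > max_plausible:
        return True
    if len(contact_counts) > 1:
        recent_avg = sum(contact_counts[:-1]) / (len(contact_counts) - 1)
        if recent_avg > 0 and latest > spike_factor * recent_avg:
            return True
    return False

# A few sweeps with 2-3 contacts, then 200 "targets" appear at once.
print(picture_looks_jammed([2, 3, 2, 200]))  # True: treat the picture with suspicion
```

Real EW discrimination is obviously vastly harder than counting blips; the point is just that the "this can't be right" reflex has to be designed in deliberately, whereas the pilot brings it along by default.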

Secondly, new threats or situations. You touched on this a bit inasmuch as a human might be able to parse an ambiguous ethical situation better than an AI (although I agree, high variance) but consider a situation where the trained tactics fail. Let's take a hypothetical: air war with China, four planes leave the carrier, one comes back. It turns out that towed decoys aren't effective against the latest Chinese air-to-air missile, and the only reason one guy came back was because he goofed up the decoy deployment like an idiot and ended up maneuvering radically to survive the engagement. Now if you had had AIs, none of the planes would have come back. And, worse, you would have had no idea what happened to them, because you were operating on a mission and in an environment without any communication ability over the combat zone. (This is something else that is nice about pilots: they eject from the plane, and they float, so you can recover them a bit more easily than you can a black box at the bottom of the sea).

Now you have to tell all of your pilots "don't deploy decoys, you're going to have to make some very specific maneuvers to defeat the Chinese A2A missile threats." Said and done. And if you have ~AGI computers, you can tell them the same thing too. But if you have anything a bit dumber, you're going to have to rewrite their software on the fly to defeat the new threat. And it's going to suck if your software engineers aren't on the boat. (It would also suck if your adversaries got hold of your codebase, or reverse-engineered it with their own AI, and used it to instruct their AI fighters on how to pwn your AI fighters every single time. Not exactly possible with high-variance humans!)

TLDR; it's not good to have new functionality gated behind civilian software engineers stateside during a time of war. (To be fair, I think the Pentagon recognizes this and is working on in-housing more coders.)

Now, imho, none of this means that AI is useless or that drone aircraft are useless in a peer war. But I suspect that this (and the fighter mafia) is why you're seeing things like the "unmanned wingman" approach being adopted (and not only in the States). The "unmanned wingman" approach basically lets you build aircraft with cutting edge AI and launch them right into combat, but because you aren't taking pilots out of the loop entirely, you'll still retain the flexibility and benefits of having an actual human person in the combat environment.

Maybe that won't be necessary - maybe everything will go according to plan. But I don't think the AI is quite there yet.

Interesting points. In the back of my mind I was thinking that maybe AI aircraft would be more tactically flexible, since you can change up their training in a quick update, though I can see how it would also be bad if you had software leaks. But the F-35 software has already been leaked to China half a dozen times; they have even gotten some Chinese-made parts into the supply chain.

Also one hopes that they'd put visual cameras on the plane. They already do, I think - F-35 pilots have AR that lets them see through the plane, I believe.

Even then, I still expect that the unmanned aircraft's advantages in price, quality and scale would be enormous. It wouldn't be 4 fighters going out on that mission, you could have 12 or 16 because training fighter pilots is inherently costly and slow. You would have smaller, faster and more agile aircraft, without human limitations. Whatever crazy dodging a human could do, the machine would easily surpass in terms of g-forces. Each fighter would have the crushing reflexes of a machine and that ruthless, ultra-honed AlphaGo edge of having spent a trillion hours in simulation evolving superior kills.

You could afford to lose those jets on risky missions - even suicide missions if you decided the gains were worth it.

But the F-35 software has already been leaked to China half a dozen times; they have even gotten some Chinese-made parts into the supply chain.

YEP! And an additional concern is that if you had any backdoors in an AI aircraft to enable it to be remotely controlled, it would be vulnerable to a cyberattack...that could impact 100% of the airborne fleet at once. But I'd be surprised if (on the flip side) we put a fleet of drone aircraft up that couldn't be flown remotely, in case the AI went wonky for whatever reason.

Even then, I still expect that the unmanned aircraft's advantages in price, quality and scale would be enormous. It wouldn't be 4 fighters going out on that mission, you could have 12 or 16 because training fighter pilots is inherently costly and slow. You would have smaller, faster and more agile aircraft, without human limitations. Whatever crazy dodging a human could do, the machine would easily surpass in terms of g-forces. Each fighter would have the crushing reflexes of a machine and that ruthless, ultra-honed AlphaGo edge of having spent a trillion hours in simulation evolving superior kills.

Well, this sounds good, but it's worth considering a few things:

  1. I am open to correction on this, but airframes and associated costs, not pilots, are the constraining factor in aviation. (Easy sanity check: do squadrons have more pilots than airframes? Yes. Do airframes require more maintenance and downtime? Also yes ...probably, Air Force guys gotta hit the golf course I guess...) Removing pilots doesn't remove the logistics footprint, and it doesn't make aircraft dramatically cheaper, which is the pain point. Fighter aircraft are sexy and lots of people want to fly them. I agree that in a high-attrition war training pilots could be a bottleneck, but even then we're likely also hitting aircraft production bottlenecks. If all of our aircraft get shot down, we will still have spare pilots left over. Now, at the point where we start getting robotic logistics, I agree that those advantages start to scale.
  2. Humans don't actually weigh all that much relative to an aircraft. The F-35 is 29,300 pounds empty and nearly 66,000 at max takeoff. Figure 200 pounds for a human operator, 200 pounds for an ejection seat, however much else you want for life support – even if you estimate a human adds a whole ton to the equation, you're looking at roughly 7% of empty weight and 3% of max takeoff weight (see the quick arithmetic check after this list). Sure, every little bit counts. I'm just saying it's probably not a miracle.
  3. It's true that robots won't black out from high-G maneuvers, and this will give them an edge. But it's also true that pilots are capable of doing things that they aren't supposed to do, like "flying the aircraft in such a way as to warp the airframe." There are structural limitations to these things, and human pilots are very capable of surpassing them (much to the chagrin of everyone else in the logistical chain). Removing pilots from the aircraft won't magically make them capable of pulling off sick dogfighting moves; they still have to worry about things like "will this snap my wings off." This, incidentally, raises another point in favor of human pilots – robots presumably will not violate NATOPS to gain an important advantage in a dogfight.
  4. Missiles (which are basically AI-enabled suicide drones) already surpass manned fighter aircraft in terms of ability to pull Gs, but manned fighter aircraft are still capable of defeating them kinematically. Replacing pilots with computers won't likely change this, either.
  5. The expensive parts of an aircraft aren't things like "the ejection seat"; they're things like the radar, or the extremely bespoke metallurgy research for high-performance engines that presumably a purely unmanned force will still need to procure.
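
For point 2, here's the quick arithmetic check, using the (approximate) F-35 figures above and a deliberately generous 2,000-pound allowance for pilot, seat, and life support:

```python
# Rough sanity check on the weight fractions quoted in point 2.
empty_weight_lb = 29_300     # F-35 empty weight (approximate)
max_takeoff_lb = 66_000      # F-35 max takeoff weight (approximate)
pilot_related_lb = 2_000     # generous allowance: pilot, ejection seat, life support

print(f"{pilot_related_lb / empty_weight_lb:.1%} of empty weight")       # ~6.8%
print(f"{pilot_related_lb / max_takeoff_lb:.1%} of max takeoff weight")  # ~3.0%
```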

Each fighter would have the crushing reflexes of a machine and that ruthless, ultra-honed AlphaGo edge of having spent a trillion hours in simulation evolving superior kills.

Sure. My concern about this is in part that I don't think it will produce high variance. If you make your machines really deterministic, then I think outcomes become more binary, which means you have fewer opportunities for feedback if those outcomes are binary in a way that you don't like. Machines are extremely predictable, and this is not necessarily a good thing. [And if you read the stuff about AI beating pilots in a dogfight, it was, IIRC, because it was willing to take head-on gun shots, which aren't preferred by human pilots because it's risky to be nose-on to another fighter aircraft for collision-avoidance purposes. That's interesting – though mainly relevant to what is expected to be a small portion of future air combat – but if they've tested them without a human in the loop in a complex "realistic" air combat scenario, I haven't heard. Doesn't mean it hasn't happened, though!]

The other thing – and honestly, this might be more relevant than the technical capabilities – is that there will be political resistance to outsourcing decision-making entirely to a machine. At what point do you want a machine to be making decisions about whether to shoot an aircraft with a civilian transponder? Even if machines can make those decisions, people will feel more comfortable knowing the important decisions are being made by someone who can be held accountable (and also that a software glitch won't result in all aircraft with civilian transponders being targeted.)

One of the concerns I have about any program that is predicated on being able to communicate with base is that communicating may be risky or prohibited in a future hostile air environment. This applies to loyal wingman programs and to any sort of drone that's supposed to be able to call back home. This is an entire tangent I could make a lot of ill-informed speculation on. But the TLDR is that if you think you might operate in an environment where you can't call home, and there are certain decisions you think your pilots might need to make that you don't want drones to have to make, you'll be needing pilots.

(To be fair, in real life the pilots would typically get ground control sign-off on these sorts of decisions if possible. But if your plan is to let ground control make important decisions, imho you're looking at a fancy remotely-piloted aircraft. And I think that's the direction we are going, at least in part – humans make important engagement decisions, the loyal wingmen will carry them out.)

You could afford to lose those jets on risky missions - even suicide missions if you decided the gains were worth it.

Yep! That's the point of stuff like the loyal wingman program: the jets are "attritable." Same with the optionally-manned aircraft, where you can remove pilots if you assess the mission is very risky. And I think this is a good idea: it hedges bets against weaknesses in AI while opening the door to utilizing them to their fullest potential. I'm not anti-drone, I just don't think the AI is ready for the quantum leap that removing humans from the picture entirely would represent, and it may never be, barring AGI-like capabilities.

Previously I said that I didn't think unmanned aircraft were ready to replace manned aircraft, but let me add a bit more nuance – I do think that unmanned aircraft are ready to supplement manned aircraft. I think moving to a world where perhaps we have fewer manned fighters makes sense in the future, possibly now. I think loyal wingman programs are, at a minimum, worth experimenting with. Perhaps in a future generation, we'll be able to take humans out of anything resembling fighter aircraft and move them back to manned control centers, perhaps flying or perhaps on the ground (or perhaps we'll replace aircraft with munitions entirely – there's a point where cheap enough cruise and ballistic missiles make a lot of aircraft pointless). My guess (again, as per loyal wingman) is that we'll see pilots moved back from the frontlines of air combat where possible. I suspect part of the move to this will be precipitated or accelerated, not by AI technology, but by laser technology. New technology may end up making fighter aircraft as we know them obsolete as a class in the future.

But unless we're able to incorporate a pretty intelligent AI into an aircraft, I think that replacing manned aircraft with AI will look a lot more like "replacing all aircraft with missiles" – which, again, may make sense at a certain point. But it will probably mean that the aircraft we will be employing – again, barring ~AGI – won't be a 1:1 replacement for the capabilities of modern fighters; they will be employed in a different way. Maybe we'll see manned fighter aircraft retained for politically sensitive things like air policing missions, but not for relatively straightforward (and risky!) tasks such as deep penetration strikes on set targets.

If you're curious enough about this I can try to run down an actual pilot of my acquaintance and ask him for his thoughts on our exchange.

I can see you put lots of thought into this. I'm not one of those people who holds the secrets of the universe in terms of aviation... I sense you have some expertise here; not everyone knows what NATOPS is.

But I still find myself thinking 'if three loyal wingmen are good, a swarm of four should be better (and without the vulnerability and expense of the manned fighter)'. Flying a fighter jet is hard work, especially if you're managing all this tech in addition to your usual workload. You might have a weapons officer devoted just to managing the swarm. And then who is doing the ECM and other duties?

Can't we program AI pilots to not destroy the aircraft in flight? Isn't that what fly-by-wire does? Can't we program an AI to go 'if the situation is desperate give it a go, burn out those engines to scrape out a victory'? Or use the gun in an aggressive way that a human surely wouldn't have the precision reflexes for? AI doesn't necessarily have to be an ultra-rule-abiding automaton stuck with orthodox tactics; as you point out, it can also be a risk-taking daredevil ready to sacrifice itself to get the mission done. It will be whatever we program it to be.

And sure, using the gun is unlikely. But if the goal is flinging missiles at long range and then dodging the missile coming back at you, that seems like a job for a machine. Faster turns, perhaps accelerating in directions that are particularly dangerous for humans.

Imagine if all the instruments in the cockpit were gone, if the blazingly high-tech helmets didn't even need to exist. No need for air pressurization, no need for this big circular space and glass canopy in the aircraft. It could be super-thin or superior in some design respects without the trade-offs of a cockpit. Lower complexity.

Imagine if the maintenance costs on these fighters were dramatically cut because the pilots didn't need to keep up flight hours. That's a huge saving. No trainer aircraft!

Maybe you could use less reliable engines, crank out airframes that aren't expected to last 20 years because they don't constantly need to be flown to train the pilots. We could have the 1942 T-34 of aircraft, a reign of quantity. As far as I can see, Loyal Wingmen cost 1/3 or 1/4 as much as a manned fighter over the whole lifetime. So going from 4 manned fighters to 12 unmanned isn't that unreasonable. You might say their capabilities are inferior, but the F-35 somehow has a shorter range than the significantly smaller MQ-28, so there are swings and roundabouts.
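
Rough sketch of that budget math, treating the manned fighter's lifetime cost as one arbitrary unit and using the 1/3-1/4 figure above (which I haven't independently verified):

```python
# Toy budget math: what does the budget for 4 manned fighters buy in loyal wingmen?
manned_lifetime_cost = 1.0           # arbitrary unit
budget = 4 * manned_lifetime_cost    # the four-ship from the earlier hypothetical
for cost_ratio in (1 / 3, 1 / 4):
    wingmen = budget / (cost_ratio * manned_lifetime_cost)
    print(f"At {cost_ratio:.2f}x the lifetime cost: ~{wingmen:.0f} unmanned aircraft")
# ~12 at one-third the cost, ~16 at one-quarter
```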

And why the hell are civilian planes flying over airspace where there are two air forces slugging it out? I can see the issue here but you could make an AI subroutine that assesses 'is this really a civilian aircraft - judging speed, radar imagery, size, IR and visual evidence? Is it dumping flares? Is it shrouding a smaller drone?' You could customize the AI's defcon level depending on the mission, so to speak. Anyway, civilian aircraft get shot down all the time by air defences; accidents happen. I don't know what was going on in Azerbaijan, or in Yemen where the US was shooting at their own plane. Perhaps the software was to blame, perhaps it was human error. I don't see how there's a significant edge there for human systems; they're plenty fallible.
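
For what it's worth, here's a toy sketch of the kind of subroutine I mean, with a mission-configurable strictness knob. Every field name, weight, and threshold here is invented purely to illustrate the shape of the idea - a real system would be enormously more involved:

```python
# Purely illustrative: an "is this really a civilian aircraft?" assessment
# with a mission-configurable strictness setting. All names/weights invented.
from dataclasses import dataclass

@dataclass
class Track:
    speed_kts: float          # observed ground speed
    squawking_civilian: bool  # transponding a civilian code
    size_match: bool          # radar/IR/visual return consistent with an airliner
    dumping_flares: bool
    shadowing_contact: bool   # appears to be shrouding a smaller aircraft or drone

def civilian_likelihood(t: Track) -> float:
    """Crude score in [0, 1]: higher means 'probably civilian'."""
    score = 0.5
    score += 0.2 if t.squawking_civilian else -0.2
    score += 0.2 if t.size_match else -0.2
    score += 0.1 if t.speed_kts < 500 else -0.1
    if t.dumping_flares or t.shadowing_contact:
        score -= 0.4
    return max(0.0, min(1.0, score))

def engagement_recommendation(t: Track, strictness: float) -> str:
    """strictness is the mission's 'defcon knob': the civilian-likelihood
    below which engagement is recommended without further identification."""
    return "ENGAGE" if civilian_likelihood(t) < strictness else "HOLD / SEEK VID"

airliner = Track(speed_kts=450, squawking_civilian=True, size_match=True,
                 dumping_flares=False, shadowing_contact=False)
print(engagement_recommendation(airliner, strictness=0.3))  # HOLD / SEEK VID
```

The strictness knob is doing a lot of work here: whoever sets it (and the weights) is effectively writing rules of engagement into software.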

I agree completely with what you predict is happening; the new Chinese jet looks rather like a rear-line combat aircraft. Maybe that's where NGAD is heading too. Loyal Wingmen are great. But why not move faster?

I can see you put lots of thought into this.

It's interesting to think about! I've enjoyed this discussion.

And then who is doing the ECM and other duties?

Ironically I think a lot of this happens ~automatically now (I'm...not sure this is always good!)

Isn't that what fly-by-wire does? Can't we program an AI to go 'if the situation is desperate give it a go, burn out those engines to scrape out a victory'?

Yep, but from what I understand most fighters have a feature to allow the pilot to depart controlled flight, in case he needs to. Can we program an AI to utilize such a feature responsibly? I dunno! I'm pretty sure we can program them to use it, I'm just not sure we can practically teach them when to break the rules. I'm not sure software programmers are the best people at figuring that stuff out. We might be able to do it with Deep Learning, though. But (as I've nodded to before) that essentially involves training AI to fight the fight you think will happen, which makes me uncomfortable enough when we do it for people. If the AI is capable of flexibly adapting when our model of the fight turns out to be wrong, great! If not, that's when I start to get deeply nervous. People are pretty good at adapting, because they dislike dying.

Broadly agree with all of your other points about the good features of unmanned aircraft.

Maybe you could use less reliable engines, crank out airframes that aren't expected to last 20 years because they don't constantly need to be flown to train the pilots. We could have the 1942 T-34 of aircraft, a reign of quantity. As far as I can see, Loyal Wingmen cost 1/3 or 1/4 as much as a manned fighter over the whole lifetime. So going from 4 manned fighters to 12 unmanned isn't that unreasonable. You might say their capabilities are inferior, but the F-35 somehow has a shorter range than the significantly smaller MQ-28, so there are swings and roundabouts.

Yeah, from what I understand the US next-gen aircraft (NGAD) is also going in this direction – not necessarily (but maybe?) toward low engine life, but in terms of trying to make cheaper, more flexible airframes. But worth noting the MQ-28 can't go supersonic and, from what I can tell, doesn't have a radar or any munitions payload. I'm not super surprised it has a range advantage over the F-35.

I can see the issue here but you could make an AI subroutine that assesses 'is this really a civilian aircraft - judging speed, radar imagery, size, IR and visual evidence?

I definitely think you can – this is what I find a bit concerning. Let's say you program the AI never to take a shot against a transponding civilian aircraft without getting VID. Great: if China finds this out, free BVR shots all day long. Whereas if you have a human in the loop, they can make a judgment call about that sort of thing. (This is particularly concerning, in my mind, given that there is a lot of overlap between civilian and military aircraft designs, particularly with regards to tanker and ISR aircraft.) I definitely agree with you that humans goof stuff like this up; I think my concern is that the programmers and bureaucrats directing them would be tempted to try to make sure we never shoot down anything on accident again, and then find that we've handcuffed ourselves to an unworkable or exploitable standard. (In my experience with LLMs, they are...not ready to be given access to weapons. I think a combat AI could be programmed to be less error-prone, but that's where my concern about the lack of flexibility comes in.)

But why not move faster?

Well, from my perspective, the question is "do we want to risk 100% of our air dominance on a technology that hasn't been used in air combat before," and my answer is "not really." And I'm not selective about this; I think it's good, for instance, that the United States has finally rolled out a new long-range missile, in part because range is good, but also because divergent designs lead to less risk – you have more than a single point of failure. I'm not saying we shouldn't attempt to utilize AI. I am saying that, until we are able to prove it at a very large scale (and ideally in combat), we shouldn't stake everything on a single point of failure, whether it's AI, or a single type of weapons system, or a single plan to defeat any enemy. This is why I prefer loyal wingman and optionally-manned programs to deciding that the age of humans is over – it hedges against known and unknown risks in a way you can't if you don't procure any manned aircraft.