This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I can see you put a lot of thought into this. I'm not one of those people who holds the secrets of the universe in terms of aviation... I sense you have some expertise here; not everyone knows what NATOPS is.
But I still find myself thinking 'if three loyal wingmen are good, a swarm of four should be better (and without the vulnerability and expense of the manned fighter)'. Flying a fighter jet is hard work, especially if you're managing all this tech in addition to your usual workload. You might need a weapons officer devoted just to managing the swarm - and then who is doing the ECM and other duties?
Can't we program AI pilots not to destroy the aircraft in flight? Isn't that what fly-by-wire does? Can't we program an AI to go 'if the situation is desperate, give it a go - burn out those engines to scrape out a victory'? Or use the gun in an aggressive way that a human surely wouldn't have the precision reflexes for? AI doesn't necessarily have to be an ultra-rule-abiding automaton stuck with orthodox tactics; as you point out, it can also be a risk-taking daredevil ready to sacrifice itself to get the mission done. It will be whatever we program it to be.
And sure, using the gun is unlikely. But if the goal is flinging missiles at long range and then dodging the missile coming back at you, that seems like a job for a machine. Faster turns, perhaps accelerating in directions that are particularly dangerous for humans.
Imagine if all the instruments in the cockpit were gone, if the blazingly high-tech helmets didn't even need to exist. No need for air pressurization, no need for that big circular space and glass canopy in the aircraft. It could be thinner, or superior in other design respects, without the trade-offs a cockpit imposes. Lower complexity.
Imagine if the maintenance costs on these fighters were dramatically cut because the pilots didn't need to keep up flight hours. That's a huge saving. No trainer aircraft!
Maybe you could use less reliable engines and crank out airframes that aren't expected to last 20 years, because they don't constantly need to be flown to keep pilots current. We could have the T-34 of 1942 among aircraft - a reign of quantity. As far as I can see, Loyal Wingmen cost a third or a quarter as much as a manned fighter over their whole lifetime, so trading 4 manned fighters for 12 unmanned ones comes out roughly budget-neutral and isn't that unreasonable. You might say their capabilities are inferior, but the F-35 somehow has a shorter range than the significantly smaller MQ-28; there are swings and roundabouts.
And why the hell are civilian planes flying over airspace where two air forces are slugging it out? I can see the issue here, but you could make an AI subroutine that assesses 'is this really a civilian aircraft - judging by speed, radar imagery, size, IR and visual evidence? Is it dumping flares? Is it shrouding a smaller drone?' You could customize the AI's defcon level depending on the mission, so to speak. Anyway, civilian aircraft get shot down by air defences all the time; accidents happen. I don't know what was going on in Azerbaijan, or in Yemen where the US shot at its own plane. Perhaps the software was to blame, perhaps it was human error. I don't see how there's a significant edge there for human systems; they're plenty fallible.
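To make concrete what I mean by that kind of subroutine, here's a rough sketch - every feature name, weight, and threshold below is invented purely for illustration, not a claim about how real IFF or rules-of-engagement logic works:

```python
from dataclasses import dataclass

@dataclass
class TrackFeatures:
    # Hypothetical sensor-derived features for one radar track.
    speed_kts: float            # ground speed from radar
    radar_cross_section: float  # rough size estimate in m^2
    ir_signature: float         # normalized IR intensity, 0-1
    squawking_civilian: bool    # broadcasting a civilian transponder code
    dropping_flares: bool       # countermeasure behaviour observed
    on_filed_airway: bool       # matches a published civilian route

def civilian_likelihood(t: TrackFeatures) -> float:
    """Crude weighted score: higher means 'more likely civilian'."""
    score = 0.5
    score += 0.2 if t.squawking_civilian else -0.2
    score += 0.1 if t.on_filed_airway else -0.1
    score += 0.1 if t.speed_kts < 500 else -0.1
    score += 0.1 if t.radar_cross_section > 50.0 else -0.1  # airliner-sized return
    if t.dropping_flares:
        score -= 0.3   # airliners don't dump flares
    if t.ir_signature > 0.8:
        score -= 0.1   # hot afterburner plume
    return max(0.0, min(1.0, score))

# The mission-configurable "defcon level": how confident must the system be
# that a track is hostile before weapons release is even considered?
POSTURE_THRESHOLDS = {
    "peacetime_policing": 0.99,
    "contested_airspace": 0.90,
    "open_war": 0.70,
}

def weapons_release_permitted(t: TrackFeatures, posture: str) -> bool:
    hostile_confidence = 1.0 - civilian_likelihood(t)
    return hostile_confidence >= POSTURE_THRESHOLDS[posture]
```

The hard part isn't writing a function like that; it's deciding who owns the threshold table and what happens when the real fight doesn't look like any of the postures you pre-loaded.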
I agree completely with what you predict is happening: the new Chinese jet looks rather like a rear-line combat aircraft. Maybe that's where NGAD is heading too. Loyal Wingmen are great. But why not move faster?
It's interesting to think about! I've enjoyed this discussion.
Ironically I think a lot of this happens ~automatically now (I'm...not sure this is always good!)
Yep, but from what I understand most fighters have a feature to allow the pilot to depart controlled flight, in case he needs to. Can we program an AI to utilize such a feature responsibly? I dunno! I'm pretty sure we can program them to use it; I'm just not sure we can practically teach them when to break the rules. I'm not sure software programmers are the best people at figuring that stuff out. We might be able to do it with Deep Learning, though. But (as I've nodded to before) that essentially involves training AI to fight the fight you think will happen, which makes me uncomfortable enough when we do it for people. If the AI is capable of flexibly adapting when the fight doesn't match our model of it, great! If not, that's when I start to get deeply nervous. People are pretty good at adapting, because they dislike dying.
Broadly agree with all of your other points about the good features of unmanned aircraft.
Yeah, from what I understand the US next-gen aircraft (NGAD) is also going in this direction – not necessarily (but maybe?) toward low engine life, but toward cheaper, more flexible airframes. Worth noting, though, that the MQ-28 can't go supersonic and, from what I can tell, doesn't carry a radar or any munitions payload. I'm not super surprised it has a range advantage over the F-35.
I definitely think you can – this is what I find a bit concerning. Let's say you program the AI never to take a shot against a transponding civilian aircraft without getting VID. Great; if China finds this out, free BVR shots all day long. Whereas if you have a human in the loop, they can make a judgment call about that sort of thing. (This is particularly concerning, in my mind, given that there is a lot of overlap between civilian and military aircraft designs, particularly with regard to tanker and ISR aircraft.) I definitely agree with you that humans goof stuff like this up; my concern is that the programmers and bureaucrats directing them would be tempted to try to make sure we never shoot anything down by accident again, and then find that we've handcuffed ourselves to an unworkable or exploitable standard. (In my experience with LLMs, they are...not ready to be given access to weapons. I think a combat AI could be programmed to be less error-prone, but that's where my concern about the lack of flexibility comes in.)
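To sketch the exploit in miniature (the names and logic here are purely hypothetical, not real ROE code): once the gate is a fixed predicate the other side can learn, their job reduces to staying on the safe side of it.

```python
from dataclasses import dataclass

@dataclass
class Track:
    squawking_civilian: bool   # broadcasting a civilian transponder code
    visually_identified: bool  # has a friendly aircraft gotten eyes on it?

def may_engage(track: Track) -> bool:
    # Hard-coded gate: never engage a transponding aircraft without VID.
    if track.squawking_civilian and not track.visually_identified:
        return False
    return True

# The exploit: a hostile fighter that simply leaves a civilian transponder
# code switched on is immune to any beyond-visual-range shot, because the
# gate can only open after a visual ID -- which means closing to the merge,
# exactly the fight the defender was trying to avoid.
print(may_engage(Track(squawking_civilian=True, visually_identified=False)))  # False
```

A human in the loop effectively keeps that threshold revisable in real time; a shipped predicate is neither secret nor flexible.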
Well, from my perspective, the question is "do we want to risk 100% of our air dominance on a technology that hasn't been used in air combat before," and my answer is "not really." And I'm not selective about this; I think it's good, for instance, that the United States has finally rolled out a new long-range missile, partly because range is good, but also because divergent designs carry less risk – you have more than a single point of failure. I'm not saying we shouldn't attempt to utilize AI. I am saying that, until we are able to prove it at a very large scale (and ideally in combat), we shouldn't put all of our eggs in one basket, whether that basket is AI, a single type of weapons system, or a single plan to defeat the enemy. This is why I prefer loyal wingman and optionally-manned programs to deciding that the age of humans is over – it hedges against known and unknown risks in a way you can't if you don't procure any manned aircraft.