Culture War Roundup for the week of July 15, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I am a little surprised by the distress over this. The military has been using artificial intelligence for decades. Any self-guiding missile or CIWS is using an artificial intelligence. Not a very bright one, but one programmed to a specific task.

People are talking about weaponizing AI because it's sexy and it sells, but fundamentally it's stuff the military was going to do anyway. Let's talk a bit about what people mean when they say they're going to use AI for the military, starting with the Navy's latest stopgap anti-ship missile.

...the LRASM is equipped with a BAE Systems-designed seeker and guidance system, integrating jam-resistant GPS/INS, an imaging infrared (IIR) seeker with automatic scene/target matching recognition, a data-link, and passive electronic support measures (ESM) and radar warning receiver sensors. Artificial intelligence software combines these features to locate enemy ships and avoid neutral shipping in crowded areas...Unlike previous radar-only seeker-equipped missiles that went on to hit other vessels if diverted or decoyed, the multi-mode seeker ensures the correct target is hit in a specific area of the ship. An LRASM can find its own target autonomously by using its passive radar homing to locate ships in an area, then using passive measures once on terminal approach. (Wiki source.)

In other words, "artificial intelligence" here roughly means "we are using software to feed a lot of data from a lot of different sensors into a microprocessor with some very elaborate decision trees/weighting." This is not different in kind from the software in any modern radar-homing self-guiding missile, it's just more sophisticated. It also isn't doing any independent reasoning! It's a very "smart" guidance system, and that's it. That's the first thing to note: when you hear "artificial intelligence" you might be thinking C-3PO, but arms manufacturers are happy to slap the label on something with the very limited reasoning of a missile guidance system.
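To make "elaborate decision trees/weighting" concrete, here's a deliberately toy Python sketch. The sensor names, weights, and threshold are all invented for illustration - this is not how the LRASM or any real seeker works - but it shows the basic idea: score each contact across several sensors and refuse to engage anything below a confidence bar.

```python
# Toy sensor-fusion targeting logic. All names and numbers are hypothetical.
SENSOR_WEIGHTS = {"iir_scene_match": 0.5, "esm_emitter_match": 0.3, "rwr_signature": 0.2}

def combined_score(contact):
    """Weighted sum of per-sensor confidences (each 0.0-1.0)."""
    return sum(SENSOR_WEIGHTS[s] * contact.get(s, 0.0) for s in SENSOR_WEIGHTS)

def pick_target(contacts, threshold=0.7):
    """Return the highest-scoring contact, or None if nothing clears the bar."""
    best = max(contacts, key=combined_score, default=None)
    if best is not None and combined_score(best) >= threshold:
        return best
    return None  # decline to engage rather than risk hitting a decoy or neutral

contacts = [
    {"id": "neutral tanker", "iir_scene_match": 0.1, "esm_emitter_match": 0.0, "rwr_signature": 0.2},
    {"id": "matches target profile", "iir_scene_match": 0.9, "esm_emitter_match": 0.8, "rwr_signature": 0.7},
]
print(pick_target(contacts))  # picks the high-confidence contact, ignores the tanker
```

That's "AI" in the marketing-brochure sense: weighted scoring and a threshold, not reasoning.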

What else would we use AI for? Drones are the big one on everyone's mind, but drones will be using the same sort of guidance software described above, coupled with mission programming. One concern people have, of course, is that the AI's IFF software will goof and give it bad ideas, leading to friendly fire - a valid concern, but it will likely be using the same IFF systems as the humans, and historically, IFF failures on the part of humans are pretty common and catastrophic. There are cases where humans performed better than AI would have - but there are almost certainly cases where the AI would have performed better than the humans, too.

Neither drones nor terminal guidance systems are likely to use anything like GPT-style LLMs/general artificial intelligence, to my mind, because that would be a waste of space and power. Particularly on a missile, the name of the game will be getting the guidance system as small as reasonably possible, not stuffing terabytes of world literature into its shell for no reason.

The final use of AI that comes to mind (and I think the one that comes closest to Skynet etc.) is using it to sift through mountains of data and generate target sets. I think that's where LLMs/GAI might be used, and I think it's the "scariest" in the sense that it's the closest to allowing a real-life panopticon. I think what people are worried about is this targeting center being hooked up to the kill chain: essentially being allowed to choose targets and carry out the attack. And I agree that this is a concern, although I've never been super worried about the AI going rogue - humans are unaligned enough as it is. But I think part of the problem is that it lulls people into a false sense of security, because AI cannot replace the supremacy of politics in war.

And as it turns out, we've seen exactly that in Gaza. The Israelis used an AI to work up a very, very long target list, probably saving them thousands of man-hours. (It turns out that you don't need to worry about giving AI the trigger; if you just give it the data input humans will rubber-stamp its conclusions and carry out the strikes themselves.) And the result, of course, has been that Israel has completely achieved all of its goals in Gaza through overwhelming military force.

Or no, it hasn't, despite Gaza being thousands of times more data-transparent to Israel than (say) the Pacific will be to the United States in a war with China. AI simply won't take the friction out of warfare.

I think this is instructive as to the risks of AI in warfare, which I do think are real - but also not new, because if there is one thing almost as old as war, it is people deluding themselves into mistaking military strength for the capability to achieve political ends.

TL;DR: 1) AI isn't new to warfare, and 2) you don't need to give Skynet the launch codes to have AI running your war.

And that's my $0.02. I'm sure I missed something.

My point was that the recent flourishing in LLMs and imagegen/image recognition (downstream applications of the GPU/accelerated computing trend) has immediate military applications. There are going to be inherent synergies between 'let's build a really large language model' and 'let's mass-translate all these intercepted communications quickly enough to matter' and so on. It's a general-purpose technology.

As for your point about AI not taking the friction out of warfare, I say sure. Maybe absolute simulation is too hard. But what about improved simulation? What about improved tactics? We already use limited human brains to practice wargames and think up attack scenarios. Why not bring machine intelligence in as well?

If we’re talking about a limited set of information, with a limited prediction, there’s a much smaller chance of critical errors. But the same is true if I just looked at that information myself. You don’t need an AI to do that.

Similarly to how a meteorologist can’t tell you where a hurricane will be in two weeks, an AI is not going to simulate the actions that will be taken during a conflict.

A meteorologist can't tell you where a hurricane will be in two weeks; it's the AI model that tells you. Predicting weather is one of the more obvious use cases for the new techniques - DeepMind's GraphCast, for instance. We can improve current predictions with this method. We can reduce friction and increase strength.

I think all of this is more or less correct. (I don't think I saw you, specifically, as being particularly distressed about this; I was just reacting to a vibe.) I suppose to me AI is already in the military and there's no closing the barn door now. And I don't think it's dumb to bring AI into the mix.

I do think that an underrated danger is that AI is so good at seeing patterns that it could loop around to being easier to spoof than humans. There is of course the joke about spoofing Terminator with the grocery barcode, but if I wanted to mess up hostile AI image detection software, I would use very specific, distinctive (to AI, not necessarily to humans) camouflage patterns on all of my vehicles for years, ensuring that hostile imagery models were trained to inseparably associate that with my forces - and then repaint every vehicle in wartime. That trick would never work on a human (although there are lots of tricks that do) but it might fool an AI.
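A synthetic toy example of the failure mode I mean (scikit-learn on invented features, nothing to do with real imagery models): a classifier that learns to lean on a near-perfect but spurious "paint" cue collapses the moment that cue is deliberately flipped, even though the genuine "shape" cue never changed.

```python
# Toy illustration of the "repaint your vehicles" trick. Entirely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# 1 = "my forces". In peacetime imagery the distinctive paint scheme tracks
# the label almost perfectly; the genuine shape cue is much noisier.
labels = rng.integers(0, 2, n)
shape = labels + rng.normal(0, 1.5, n)   # weak, honest signal
paint = labels + rng.normal(0, 0.1, n)   # strong, spurious signal
model = LogisticRegression().fit(np.column_stack([shape, paint]), labels)

# Wartime: every vehicle is repainted, so the spurious cue flips.
shape_war = labels + rng.normal(0, 1.5, n)
paint_war = (1 - labels) + rng.normal(0, 0.1, n)

print("peacetime accuracy:", model.score(np.column_stack([shape, paint]), labels))
print("wartime accuracy:  ", model.score(np.column_stack([shape_war, paint_war]), labels))
```

The second number comes out far below chance, because the model bet everything on a cue the adversary controlled.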

My point here isn't that AI is dumb, but merely that it's just as easy to imagine ways it introduces more friction into warfare as ways it removes friction. Moreover, if intelligence apparatuses default to filtering all intelligence and data through a few AI models instead of many human minds, a single blind spot or failing is likely to be systemwide, instead of many, many small blind spots scattered across different commands. And if there are hostile AI (or even just smart people) on both sides, they will figure out the patterns in hostile artificial intelligence programs and figure out how to exploit them. (I think the conclusion here is that intel agencies should take a belt-and-suspenders humans-and-AI approach, and developing multiple AI programs to assess intelligence and data might be a good idea - a sketch of what I mean is below.)
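By belt-and-suspenders I mean nothing fancier than a dispatch rule like this hypothetical sketch (the function, the models, and the True/False convention are all mine, not any real system's): independently built assessors only auto-accept a conclusion when they agree, and disagreement gets escalated rather than rubber-stamped.

```python
# Hypothetical cross-checking rule: several independent models vote on a report.
def assess(report, models):
    votes = [model(report) for model in models]   # each model returns True/False
    if all(votes) or not any(votes):              # unanimous either way
        return "auto-accept consensus", votes
    return "escalate to human analyst", votes     # one model's blind spot shouldn't decide alone
```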

One of the things we've seen in Ukraine is that when countermeasures for a high-tech weapons system are developed, the weapons system loses a lot of value very quickly. (This isn't new - World War Two saw a rapid proliferation of new technologies that edged out older warfighting gear - but our development cycles seem longer than they were in the 1940s, which does pose a problem.) I suspect that in a future AI-reliant war we will see similar patterns: when a model becomes obsolete, it will fail abruptly and operate at dramatically reduced capacity until it is patched. (Since a lot of the relevant stuff in Ukraine revolves around signal processing and electronic warfare, this future is more or less now.)

In conclusion, I am cautiously optimistic that "AI" can reduce friction and increase strength, but I think the "AI" most certain to do that is, really, "targeting computers" and "signal processing software," not necessarily the stuff OpenAI is working on (although I don't count that out). Since I think that multiple powers will be using AI, I think that hostile AI will be adding friction about as fast as friendly AI can reduce it (depending on their parity). What concerns me about AI use in warfare is the danger of over-relying on it, both in terms of outsourcing too much brainpower to it and in terms of believing that "reducing friction" will save us the need to sharpen the pointy meatspace end of things. At the end of the day, being able to predict what someone is going to do next doesn't matter if you've got an empty gat.

And the result, of course, has been that Israel has completely achieved all of its goals in Gaza through overwhelming military force.

In the sense that they’ve burned most of their international credibility, failed to contain an insurgency in a sealed area the size of Las Vegas, taken over two thousand unrecoverable military casualties, failed to rescue most of the hostages, and run through most of their preexisting munitions stockpiles, because HAL-9000 keeps telling them to bomb random apartment complexes instead of anywhere Hamas actually is.

Yes, as you can see from my next paragraph, I am deeply skeptical that Lavender (even if it works well, and I suspect it doesn't!) is winning Israel the war.