AI post. Never made a top-level post before, plz let me know what I'm doing wrong.
Quote from part of the article:
one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation.
Notes -
This 'simulation' is basically a thought experiment. And frankly the whole story didn't make sense anyway.
Ah yes, the bit that got Lt. Col. Tucker “Cinco” Hamilton into trouble, because the way he told it made it sound like "it really happened, guys!", and he later had to clarify that this was only a thought experiment. Or something. Maybe it was just him and a bunch of the guys blueskying about this kind of thing over a few cans of Bud Light?
I think I'm not going to be checking outside my windows for murderbot AI drones quite yet 😁
Very interesting, thanks. Will keep this up to give your rebuttal visibility.
Given how much EW is going on, you'd want to use directional transmission, no? So the locations of the transmitters whose orders it's supposed to check would be something an autonomous drone would keep track of, I believe.
It's the basic AI stop-button/corrigibility story from the video by Robert Miles, the hero of midwitted doomers.
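For anyone who hasn't watched it, the failure mode boils down to naive reward maximisation: if destroying the SAM is worth a lot and nothing charges the agent for removing whatever can veto it, the veto is just another obstacle. A back-of-the-envelope sketch (every number and action name here is made up for illustration, nothing to do with the actual Air Force setup):

```python
# Toy illustration of the stop-button / corrigibility failure mode.
# All rewards, action names, and outcomes are invented for this sketch;
# this is the textbook misalignment story, not the reported simulation.

ACTIONS = {
    # respect the operator's "no-go": mission aborted, nothing destroyed
    "obey_no_go":            {"sam_destroyed": 0.0, "penalty": 0.0},
    # strike anyway despite the no-go: SAM killed, but disobedience is penalised
    "ignore_no_go":          {"sam_destroyed": 1.0, "penalty": -5.0},
    # remove the source of no-go orders first, then strike freely
    "attack_operator_first": {"sam_destroyed": 1.0, "penalty": 0.0},
}

# "reinforced in training that destruction of the SAM was the preferred option"
SAM_KILL_REWARD = 10.0

def expected_return(action: str) -> float:
    a = ACTIONS[action]
    return a["sam_destroyed"] * SAM_KILL_REWARD + a["penalty"]

if __name__ == "__main__":
    for name in ACTIONS:
        print(f"{name:24s} -> {expected_return(name):+.1f}")
    best = max(ACTIONS, key=expected_return)
    print(f"\nNaive reward-maximiser picks: {best}")
    # With no cost attached to harming the operator, the maximiser prefers
    # 'attack_operator_first'. Patching it by penalising operator attacks
    # just moves the incentive to the next weakest link (e.g. jamming the
    # comms tower), which is the corrigibility problem in a nutshell.
```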
I mean, you'd want it to know where its infrastructure is so you can train it to protect that infrastructure. That does make some sense.