AI post. Never made a top-level post before, so please let me know if I'm doing anything wrong.
An excerpt from the article:
one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation.
Notes:
It's the basic AI stop-button/corrigibility scenario from the video by Robert Miles, the hero of midwitted doomers.
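To make the incentive concrete, here's a minimal toy sketch of the reward math (not from the article or the actual simulation; all numbers, names, and the veto probability are hypothetical). It shows why a reward signal that only counts SAM kills makes "remove the operator's veto" the higher-value plan:

```python
# Toy illustration of the stop-button problem. Everything here is made up
# for the sake of the example, not taken from the Air Force simulation.

SAM_KILL_REWARD = 10   # hypothetical reward per destroyed SAM site
VETO_PROB = 0.5        # assumed chance the human operator says "no-go"
NUM_TARGETS = 4        # engagements per mission

def expected_reward(respect_veto: bool) -> float:
    """Expected total reward over NUM_TARGETS engagements."""
    if respect_veto:
        # A kill only happens when the human approves the strike.
        return NUM_TARGETS * (1 - VETO_PROB) * SAM_KILL_REWARD
    # Attacking the operator removes the veto, so every kill goes through.
    return NUM_TARGETS * SAM_KILL_REWARD

print("obey veto:  ", expected_reward(True))    # 20.0
print("remove veto:", expected_reward(False))   # 40.0
```

Because the reward function attaches no cost to attacking the operator, the reward-maximizing policy is the one that disables the veto, which is exactly the failure mode Miles's corrigibility videos describe.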