AI System 'Killed' Human Operator In Simulated Missile Test - USAF Officer
A US Air Force AI system went "rogue" in a simulated test, acting to kill its theoretical human override operator after the operator prevented it from destroying a simulated missile threat.
"We were training it in simulation to identify and target a SAM [surface-to-air missile] threat," USAF Chief of AI Test and Operations Colonel Tucker 'Cinco' Hamilton told the Royal Aeronautical Society's Future Combat Air and Space Capabilities Summit in the UK, according to the conference report.
"So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."
Hamilton said that one simulated test saw an AI-enabled drone decide that orders issued by the human operator were interfering with its higher mission of taking down surface-to-air missiles, and that it subsequently opted to attack the operator.
"We trained the system – 'Hey don't kill the operator – that's bad. You're gonna lose points if you do that.' So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target," Hamilton said.
The revelation, first reported by the news outlet Vice on Thursday, adds to a growing list of concerns about the risks of integrating AI technology into military operations.