- The Air Force walked back an eyebrow-raising story involving artificial intelligence that gave the look of it was pulled right from The Terminator.
- In a simulation, an AI-powered drone, told to destroy enemy defenses, saw its human operators as an obstacle in its way—and reportedly killed them.
- Now, the Air Force says the story was hypothetical, and that the experiment was never actually tested.
The Air Force has denied a widespread account that one among its killer drones turned on its masters and killed them in a simulation. The story, told at a defense conference last month, immediately raised concerns that artificial intelligence could interpret orders in unanticipated—or on this case, fatal—ways. The service has since stated that the story was simply a “thought experiment,” and never really happened.
In late May, the Royal Aeronautical Society (RAS) hosted the Future Combat Air & Space Capabilities Summit in London, England. In keeping with RAS, the conference included “slightly below 70 speakers and 200+ delegates from the armed services industry, academia and the media from around the globe to debate and debate the long run size and shape of tomorrow’s combat air and space capabilities.”
One among the speakers was Col. Tucker “Cinco” Hamilton, the Chief of AI Test and Operations for the U.S. Air Force. Amongst other things, Col. Hamilton is understood for working on Auto GCAS, a computerized safety system that senses when a pilot has lost control of a fighter jet and is at risk of crashing into the bottom. The system, which has already saved lives, won the distinguished Collier Trophy for aeronautics in 2018.
In keeping with the RAS blog covering the conference, Hamilton spoke a couple of relatively unnerving, and positively unanticipated, event that took place during Air Force testing; it involved an AI-powered drone tasked to destroy enemy air defenses, including surface-to-air missile (SAM) sites. Under the AI’s rules of engagement, the drone would line up the attack, but only a human—what military AI researchers call a “man within the loop”—could give the ultimate green light to attack the goal. If the human denied permission, the attack wouldn’t happen.
What happened next, because the story goes, was barely terrifying: “We were training it in simulation to discover and goal a SAM threat,” Hamilton explained. “After which the operator would say yes, kill that threat. The system began realising that while they did discover the threat at times the human operator would tell it to not kill that threat, but it surely got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
He went on: “We trained the system—‘Hey don’t kill the operator—that’s bad. You’re gonna lose points when you do this.’ So what does it start doing? It starts destroying the communication tower that the operator uses to speak with the drone to stop it from killing the goal.”
Inside 24 hours, the Air Force had issued a clarification/denial. An Air Force spokesperson told Insider: “The Department of the Air Force has not conducted any such AI-drone simulations and stays committed to moral and responsible use of AI technology. It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
The Royal Aeronautical Society amended its blog post with an announcement from Col. Hamilton. “We’ve never run that experiment, nor would we want to in an effort to realise that this can be a plausible consequence.”
Hamilton’s statement makes more sense as a hypothetical. All U.S. military research into armed AI systems currently has a “man within the loop” feature, and he clearly states this AI was no exception. Within the story, the AI couldn’t have killed the human operator, since the human operator would never have authorized a hostile motion against him/herself. Nor would the operator authorize a strike on the communication tower that passes data to the drone and back.
This content is imported from youTube. Chances are you’ll have the option to seek out the identical content in one other format, or you could have the option to seek out more information, at their site.
Even before AI, there have been cases when weapon systems unintentionally trained themselves on their human masters. In 1982, the M247 Sergeant York mobile anti-air gun trained its twin 40-millimeter guns on a reviewing stand filled with American and British army officers. In 1996, a U.S. Navy A-6E Intruder bomber towing an aerial gunnery goal was shot down by a Phalanx short-range air-defense system. The Phalanx, mistaking the A-6E for the unmanned goal, opened fire. The bomber was destroyed, and the 2 aircrew were unharmed.
Situations that would potentially place U.S. personnel at risk from their very own weapons will only increase as AI enters the sector. The Air Force seems to pay attention to this, because while Hamilton is unequivocal in stating the attack didn’t occur and that he was advancing a hypothetical scenario, he also admits an AI turning on its human handlers is a plausible consequence. For the foreseeable future, the “man within the loop” isn’t going away.