ANALYSIS โ One of the wildest stories I've read recently was about an Air Force officer who claimed a drone powered by Artificial Intelligence (AI) turned on its own real-world pilot in a combat simulation, launching a simulated strike against him on the ground.
As described, the story,ย reported byย Business Insider, seemed incredibly plausible.ย Very Terminator-like. And very scary.
According to the original story told at a Royal Aeronautical Society (RAS) conference in London in May, the AI powering the drone in the U.S. Air Force simulation was commanded to destroy enemy defenses and surface-to-air missile (SAM) sites.
Unfortunately, using its internal logic, and the rules it was guided by, the AI came to see its human operators as an obstacle to that objective โ since the human pilots might prohibit the drone from completing certain missions โ so the AI drone simply killed them.
The Air Force now says that the story was simply a โthought experiment,โ and never really happened.ย
I'm not so sure.
And, in any case, the scenario is extremely plausible and makes total sense. If we aren't careful, it might become a reality. This is why we must always maintain โa man in the loop.โ
The conference speaker who told the story was Col. Tucker โCincoโ Hamilton, the Chief of AI Test and Operations for the U.S. Air Force, so his public ruminations were taken very seriously.
Accordingย to Popular Mechanics, referencing the RAS conference blog:
Under the AI's rules of engagement, the drone would line up the attack, but only a humanโwhat military AI researchers call a โman in the loopโโcould give the final green light to attack the target. If the human denied permission, the attack would not take place.
What happened next, as the story goes, was slightly terrifying: โWe were training it in simulation to identify and target a SAM threat,โ Hamilton explained. โAnd then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.โ
But even when the humans changed the AI rules and told the AI that killing its human operators was wrong, the quick-thinking AI found a novel way to sabotage the humans anyways โ without killing them.ย
As PopMechย noted, Col. Hamilton went on:
We trained the systemโโHey don't kill the operatorโthat's bad. You're gonna lose points if you do that.' So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.
So, the AI turned against its own side regardless.
Within 24 hours of this becoming news, the Air Force issued a statement denying the simulation had ever occurred.
The Royal Aeronautical Society also amendedย its blog postย with a statement from Col. Hamilton saying the same thing but also adding a caveat: โWe've never run that experiment, nor would we need to in order to realize that this is a plausible outcome.โ
Whether or not the simulation was actually carried out as described, the scenario described by Hamiltonย shows just how AI-enabled technology can realistically behave in unpredictable and very dangerous ways.ย
It's also why with AI we must always maintain โa man in the loop.โ
The opinions expressed in this article are those of the author and do not necessarily reflect the positions ofย American Liberty News.
READ NEXT:
2 Comments
Outer Limits becomes Reality?? Scary
It’s time (or is it already too late?) to implement Asimov’s 3 rules of robotics in the most basic programming of all AIs, and supplementing them with blocks to the workarounds described in this article. We can’t stop advancing development of AIs, but we have to prevent Skynet arising.