The effectiveness of drone warfare by the U.S. military in the post-9/11 era is indisputable. Reaper unmanned aerial vehicles (UAVs) armed with Hellfire missiles have taken out thousands of terrorist targets with minimal risk to our military personnel. The advent of drone warfare came with problems, however. By 2012, the United States had realized that drone strikes that killed innocent bystanders often served to radicalize new terrorists and undermine our moral high ground.
The United States began using armed drones largely as part of the counterterrorism fight in Iraq and Afghanistan, and their use spread to Pakistan, Yemen, Somalia, and Libya. Today, Reapers and Hellfires are joined by other UAVs as well as undersea, surface, and ground autonomous vehicles with applications beyond counterterrorism. The technology is advancing quickly, as Commander Jeremy Vaughan pointed out in his article “Foreign Drones Complicate Maritime Air Defense” in the April Proceedings. Unfortunately, we are not the only nation developing these technologies, and some of our adversaries do not share the moral constraints and values of the United States and our allies. We must be prepared to deal with the moral and legal dilemmas associated with unmanned weapon systems as new technologies emerge. And we must be ready to respond to the unmanned threats posed by our adversaries.
For now, the U.S. military keeps a person in control of every drone—a “man in the loop.” As technology advances, though, we must ask how we will incorporate artificial intelligence (AI) to make split-second decisions about the use of lethal force. Should an autonomous patrol boat be able to engage a target it determines hostile? Or should that decision be left to a human? Is it ethical to delegate that responsibility to AI so we can shorten reaction time and neutralize a threat closing at high speed? What will we do if our adversaries develop autonomous vehicles with AI that do not care about collateral damage? In the age of AI, drones, and hybrid warfare, we must maintain a decision-speed advantage, but what if keeping a person in the loop negates that advantage? How will we deal with adversary hunter-killer drones, or swarms of them, if our adversaries do not have the same regard for the law of armed conflict and collateral damage as we do? In the past year, Russia has bombed hospitals in Syria with an apparent disregard for collateral damage. If our adversaries develop AI that can find, fix, and finish a target within seconds, while we insist on keeping a person in our kill-chain decision-making process, we may find ourselves at a competitive disadvantage. By the time we observe, orient, decide, and act, we risk an incident like the terrorist attack on the USS Cole (DDG-67).
In a counterterrorism policy speech at the National Defense University in May 2013, then-President Barack Obama issued new guidance on when and how drone strikes would be used: only if the target posed a continuing threat, the target was almost certainly present, capture was not feasible, and collateral damage was minimized. The President was right to impose those restrictions, and the U.S. military must be held to a high standard and strive to minimize collateral damage. Russian irregular warfare in Ukraine and Chinese hybrid tactics in the South China Sea, however, demonstrate that terrorist organizations are not the only threat to our national interests. As drone technology with AI capabilities proliferates into the hands of state and non-state adversaries, we will be challenged to uphold our high moral standard and maintain a decision-speed advantage. This reality was recognized in the “Cooperative Strategy for 21st Century Sea Power” ten years ago: “Conflicts are increasingly characterized by a hybrid blend of traditional and irregular tactics, decentralized planning and execution, and non-state actors, using both simple and sophisticated technologies in innovative ways.”
The blurring of conventional, irregular, and hybrid warfare, together with cyber threats, AI, and unmanned vehicles, demands innovation in our tactics, strategies, technology, and weapons. How do we balance collateral damage, decision speed, and the need for man-in-the-loop control? As I finish my plebe year at the Naval Academy, I am excited by these challenges but also concerned by the immense complexities we must sort out.