Advances in artificial intelligence, coupled with popular movies such as Ex Machina where robots turn on their humans, have exacerbated concerns the military might lose control of armed autonomous systems. Augmented intelligence may be the answer.
While unmanned systems increasingly impact all aspects of life, it is their use as military assets that has garnered the most attention, and with that attention, growing concern.
The Department of Defense’s (DoD’s) vision for unmanned systems (UxS) is to integrate them into the joint force for a number of reasons, but especially to reduce the risk to human life, to deliver persistent surveillance over areas of interest, and to provide options to warfighters that derive from the technologies’ ability to operate autonomously. The most recent DoD “Unmanned Systems Integrated Roadmap” noted, “DoD envisions unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure.”1
Enhanced autonomy is an important attribute, as warfighters increasingly recognize that the current concept of operations that often involves many operators, many joysticks, and one unmanned system is not sustainable. Thus, there is growing recognition that the only way to achieve the degree of autonomy necessary to leverage the full potential of unmanned systems to support U.S. military operators is to harness artificial intelligence (AI) and machine learning.
With the prospect of military unmanned systems becoming more autonomous, concerns have surfaced regarding a potential “dark side” of having armed unmanned systems—rather than military operators—make life-or-death decisions. While DoD has issued strong guidance regarding operator control of autonomous vehicles, rapid advances in artificial intelligence and machine learning have exacerbated concerns that the military might lose control of armed autonomous systems.2 This has raised the bar regarding what DoD must do to assure the American public that the U.S. military will maintain positive control at all times.
The challenge for designers, then, is to provide the military with unmanned systems that take maximum advantage of artificial intelligence and machine learning, while providing operators with sufficient oversight and control.
Using augmented intelligence, an MQ-4C conducting a surveillance mission from San Francisco to Tokyo could be trained to send only the video of ships it encounters rather than 15 hours of mostly empty ocean, thereby greatly compressing the workload of its human operator. Northrop Grumman photo.
How Much Autonomy Is Enough?
Much has been written on the need for human oversight of U.S. military unmanned systems.3 A DoD directive issued earlier this decade put it this way:
Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force. Humans who authorize the use of, or operate these systems, must do so in accordance with the law of war, applicable treaties, weapon system safety rules and applicable rules of engagement.4
But while the U.S. commitment to not cede lethal authority to completely autonomous weapons is clear, this must be juxtaposed against capabilities potential adversaries bring to the table.5 As former Deputy Secretary of Defense Robert Work noted, “We believe, strongly, that humans should be the only ones to decide when to use lethal force. But when you’re under attack, especially at machine speeds, we want to have a machine that can protect us.”6
Other voices have questioned whether the United States can prevail in wars of the future if the requirement to have a human in the loop puts a brake on fully exploiting AI. A recent U.S. Air Force report explained: “Although humans today remain more capable than machines for many tasks, natural human capacities are becoming increasingly mismatched to the enormous data volumes, processing capabilities, and decision speeds that technologies offer or demand.”7
The imperative to fully exploit U.S. military UxS capabilities leads naturally to the desire for unmanned systems to achieve enhanced speed in decision making and allow friendly forces to act within an adversary’s OODA (observe, orient, decide, and act) loop. This means allowing unmanned systems to find the optimal solution for achieving their mission without the need to rely on constant human operator oversight, input, and decision making. But while we need unmanned systems to operate inside the enemy’s OODA loop, are we ready for them to operate inside our own, without our decision making?
In an article entitled “Morals and the Machine,” The Economist addressed the issue of autonomy and humans-in-the-loop:
As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming—or at least appearing to assume—moral agency. Weapons systems currently have human operators “in the loop,” but as they grow more sophisticated, it will be possible to shift to “on the loop” operation, with machines carrying out orders autonomously.
As that happens, they will be presented with ethical dilemmas. Should a drone fire on a house where a target is known to be hiding, which may also be sheltering civilians? Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing a panic? More collaboration is required between engineers, ethicists, lawyers and policymakers, all of whom would draw up very different types of rules if they were left to their own devices.8
Bill Keller addressed the issue of autonomy for military unmanned systems in a New York Times op-ed:
If you find the use of remotely piloted warrior drones troubling, imagine that the decision to kill a suspected enemy is not made by an operator in a distant control room, but by the machine itself. Imagine that an aerial robot studies the landscape below, recognizes hostile activity, calculates that there is minimal risk of collateral damage, and then, with no human in the loop, pulls the trigger.
Welcome to the future of warfare. While Americans are debating the president’s power to order assassination by drone, powerful momentum—scientific, military and commercial—is propelling us toward the day when we cede the same lethal authority to software.9
More recently, concerns about AI have come from the very industry that is most prominent in developing these technological capabilities. In a New York Times article entitled “Robot Overlords? Maybe Not,” Alex Garland, writer and director of the movie Ex Machina, talked about artificial intelligence and quoted several tech industry leaders:
The theoretical physicist Stephen Hawking told us that “the development of full artificial intelligence could spell the end of the human race.” Elon Musk, the chief executive of Tesla, told us that A.I. was “potentially more dangerous than nukes.” Steve Wozniak, a co-founder of Apple, told us that “computers are going to take over from humans” and that “the future is scary and very bad for people.”10
These growing concerns regarding maintaining control of unmanned systems empowered by AI pose a quandary for the U.S. military. How can it operate unmanned systems with the appropriate level of human control and oversight while still maximizing all the advantages AI brings to its unmanned military platforms and systems?
Designing in the Right Degree of Autonomy
In a recent address at a military-industry symposium, Peter Singer, author of Wired for War, suggested one way to understand how the U.S. military might cope with the conundrum of fielding unmanned military systems that maximize the advantages of AI while still maintaining sufficient operator control. He suggested, “What is playing out in driverless cars is also playing out in military UxS. You will never be able to ‘engineer out’ all of the ethical dilemmas surrounding the use of military UxS.”11
As Dr. Singer suggests, those responsible for building and fielding unmanned systems with artificial intelligence would be well served to look to the automobile industry for best practices. It is there they may find the vital customer feedback that reveals what drivers really want. And while not a perfect match, the industry’s experience can suggest the best way to marry AI with unmanned military systems.
Automobiles are being conceived, designed, built, and delivered with increasing degrees of artificial intelligence. It is worth examining where these trend lines are going. Automobiles can be broken into three basic categories:
• A completely manual car—something your parents drove
• A driverless car that takes you where you want to go using artificial intelligence
• A car with augmented intelligence
The initial enthusiasm for driverless cars has given way to second thoughts regarding how much a driver may be willing to be taken out of the loop. An article in the New York Times, “Whose Life Should Your Car Save?” captures the public’s concern with driverless cars and, by extension, with other fully autonomous systems:
We presented people with hypothetical situations that forced them to choose between “self-protective” autonomous cars that protected their passengers at all costs, and “utilitarian” autonomous cars that impartially minimized overall casualties, even if it meant harming their passengers. (Our vignettes featured stark, either-or choices between saving one group of people and killing another, but the same basic trade-offs hold in more realistic situations involving gradations of risk.)
A large majority of our respondents agreed that cars that impartially minimized overall casualties were more ethical, and were the type they would like to see on the road. But most people also indicated that they would refuse to purchase such a car, expressing a strong preference for buying the self-protective one. In other words, people refused to buy the car they found to be more ethical.15
As an increasing number of studies and reports indicate, there is growing consensus that drivers want to be “in the loop” and prefer semiautonomous rather than fully autonomous cars. This trend should inform how we think about military autonomous and semiautonomous systems.16
Extrapolating this example to military unmanned systems, the available evidence suggests that warfighters want augmented intelligence in their unmanned systems. That will make these machines more useful and allow warfighters to control them in a manner that will go a long way toward resolving many of the moral and ethical concerns related to their use.
But this raises the question: What would augmented intelligence look like to the military operator? What tasks does the warfighter want unmanned systems to perform as they leverage artificial intelligence to provide augmented intelligence? How can we enable the Soldier, Sailor, Airman, or Marine in the fight to make the right decision quickly in stressful situations where mission accomplishment must be balanced against unintended consequences?
Augmented Intelligence
Consider an unmanned system conducting a surveillance mission. Today, an operator receives streaming video of what the unmanned system sees, and in the case of unmanned aerial systems, often in real time. But this requires the operator to stare at this video for hours on end (the endurance of the U.S. Navy’s MQ-4C Triton, for example, is 30 hours). This concept of operations is an enormous drain on human resources, often with little to show for the effort.17
Using basic augmented intelligence techniques, a Triton can be trained to deliver only what is useful to its human partner. For example, an MQ-4C operating at cruise speed flying between San Francisco and Tokyo would cover the 5,000-plus miles in approximately 15 hours. Rather than sending 15 hours of generally uninteresting video of mostly empty ocean, the Triton could be trained to send only the video of each ship it encounters, thereby greatly compressing human workload.
Taken to the next level, the Triton could leverage AI to do its own analysis of each contact to flag it for possible interest. For example, if a vessel is operating in a known shipping lane, has filed a journey plan with the proper maritime authorities, and is providing an Automatic Identification System signal, it is likely worthy of only passing attention by the operator, and the Triton will flag it accordingly. If, however, it does not meet these criteria (say, for example, the vessel makes an abrupt course change that takes it well outside normal shipping channels), the operator would be alerted immediately.
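The triage logic described above can be sketched in code. This is purely an illustrative sketch with invented field names, not any actual Triton mission software; the rules simply mirror the criteria in the paragraph (shipping lane, journey plan, AIS signal, abrupt course change):

```python
# Illustrative sketch of the contact-triage rule described in the text.
# All field names are hypothetical; real mission systems are far more complex.
from dataclasses import dataclass

@dataclass
class Contact:
    in_shipping_lane: bool      # vessel inside a known shipping lane?
    journey_plan_filed: bool    # plan on file with maritime authorities?
    ais_transmitting: bool      # Automatic Identification System active?
    abrupt_course_change: bool  # e.g., sudden turn out of normal channels

def triage(contact: Contact) -> str:
    """Return 'routine' for contacts worth only passing attention,
    or 'alert' for contacts the operator should see immediately."""
    if contact.abrupt_course_change:
        return "alert"
    if (contact.in_shipping_lane
            and contact.journey_plan_filed
            and contact.ais_transmitting):
        return "routine"
    return "alert"
```

The point of such a filter is not to remove the operator but to compress the operator’s workload: only contacts that fail the routine criteria demand immediate human attention.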
For lethal military unmanned systems, the bar is higher for what the operator must know before authorizing the unmanned warfighting partner to fire a weapon or, as is often the case, recommending that higher authority authorize lethal action. Take the case of military operators managing an ongoing series of unmanned aerial system flights that have been watching a terrorist, waiting for higher authority to give the authorization to take out the threat with an air-to-surface missile fired from that aircraft. Using augmented intelligence, the operator can train the unmanned system to anticipate the questions higher authority will ask before giving the authorization to fire and to provide, if not a point solution, at least a percentage probability or confidence level for questions such as:
• What is the level of confidence this person is the intended target?
• What is this confidence based on? Facial recognition, voice recognition, pattern of behavior, association with certain individuals, proximity of family members, proximity of cohorts?
• What is the potential for collateral damage to family members, known cohorts, unknown persons?
• What is the potential impact of waiting versus striking now?
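One way such a system might present its answers is as a confidence summary rather than a yes/no verdict. The sketch below is hypothetical, with an invented function and a simple average standing in for whatever fusion method a real system would use; it only illustrates the idea of reporting an overall confidence together with the evidence basis, as the questions above require:

```python
# Illustrative sketch only: combine hypothetical per-source confidence
# scores into a summary an operator could forward to higher authority.
def summarize_target_confidence(scores: dict) -> dict:
    """scores maps an evidence source (e.g. 'facial_recognition',
    'voice_recognition', 'pattern_of_behavior') to a 0.0-1.0 confidence.
    Returns an overall confidence plus the basis for it, strongest first."""
    if not scores:
        return {"overall": 0.0, "basis": []}
    # Simple average for the sketch; a real system would weight sources.
    overall = sum(scores.values()) / len(scores)
    basis = sorted(scores, key=scores.get, reverse=True)
    return {"overall": round(overall, 2), "basis": basis}
```

Presented this way, the machine answers the questions higher authority will ask, while the decision to fire remains with humans.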
These considerations represent only a subset of the kinds of issues operators must train their armed unmanned systems to address. Far from ceding lethal authority to unmanned systems, having these assets provide augmented intelligence frees the human operator from having to make real-time, often on-the-fly decisions in the stress of combat. Designing this kind of augmented intelligence into unmanned systems from the outset ultimately will enable them to be more effective partners for their military operators.
Into the Future with AI
The United States must harness emerging technologies such as artificial intelligence and machine learning to maintain an edge over potential adversaries. At the same time, the bedrock moral and ethical principles that undergird U.S. national identity are unlikely to lead to the military operating “Terminator-like” autonomous weapons against an enemy.
Harnessing rapid advances in AI and machine learning to provide warfighters operating unmanned systems with augmented intelligence will give them the ability to make better decisions faster with fewer people and fewer mistakes under conditions of stress and uncertainty. Leveraging AI and machine learning in this way will give our forces the decisive advantage in combat.
1. “FY 2013-2038 Unmanned Systems Integrated Roadmap” (Washington, DC: Department of Defense, 2013).
2. Some of these concerns emerge from popular culture, especially books and movies where “our” robots turn on us. While there are numerous movies where “bad” robots try to destroy mankind, public sentiment and concerns stem primarily from movies such as 2001: A Space Odyssey (1968) and Ex Machina (2015), where seemingly “good” robots turn on their human masters.
3. See, for example, George Galdorisi and Rachel Volner, “Keeping Humans in the Loop,” U.S. Naval Institute Proceedings 141, no. 2 (February 2015); George Galdorisi, “Designing Autonomous Systems for Warfighters,” Small Wars Journal, August 2016; Phillip Pournelle, “Trust Autonomous Machines,” U.S. Naval Institute Proceedings 143, no. 6 (June 2017); and Jeffrey Stiles, “Drone Wars are Coming,” U.S. Naval Institute Proceedings 143, no. 7 (July 2017).
4. Deputy Secretary of Defense Ashton Carter, “Autonomy in Weapon Systems,” memorandum dated 21 November 2012. See also, “Carter: Human Input Required for Autonomous Weapon Systems,” Inside the Pentagon, 29 November 2012, for a detailed analysis of the import of this memo.
5. U.S. military planners are acutely aware of the rapid strides potential adversaries are making in artificial intelligence and how AI might give them a technological edge over U.S. forces. See, for example, Elsa Kania, “Battlefield Singularity: Artificial Intelligence, Military Revolution, and China’s Future Military Power” (Washington, DC: Center for a New American Security, November 2017). See also, Vincent Boulanin and Maaike Verbruggen, “Mapping the Development of Autonomy in Weapons Systems” (Stockholm, Sweden, Stockholm International Peace Research Institute, November 2017) for an international perspective regarding the rapid rise in autonomy in lethal weapons systems.
6. Remarks by Deputy Secretary of Defense Robert Work at the Center for New American Security Defense Forum, 14 December 2015.
7. “Technology Horizons: A Vision for Air Force Science and Technology 2010-2030” (Washington, DC: U.S. Air Force, 2010).
8. “Flight of the Drones: Why the Future of Air Power Belongs to Unmanned Systems,” The Economist, 8 October 2011.
9. Bill Keller, “Smart Drones,” The New York Times, 10 March 2013.
10. Alex Garland, “Alex Garland of ‘Ex Machina’ Talks About Artificial Intelligence,” The New York Times, 22 April 2015.
11. Dr. Peter Singer, address to the AFCEA C4ISR Symposium, San Diego, CA, 27 April 2017.
12. Quoted in Lolita Baldor, “Military Wants to Fly More Sophisticated Drones,” Associated Press, 4 November 2010. General Breedlove’s statement has been echoed repeatedly, with U.S. Air Force officials noting they do not have enough operators to field all the UASs in the Air Force inventory.
13. “Why ‘Unmanned Systems’ Don’t Shrink Manpower Needs,” Armed Forces Journal, 1 October 2011.
14. Michael Fowler, “The Future of Unmanned Aerial Systems,” in Global Security and Intelligence Studies, vol. 1, no. 1, digitalcommons.apus.edu/gsis/vol1/iss1/3.
15. Azim Shariff, Iyad Rahwan, and Jean-Francois Bonnefon, “Whose Life Should Your Car Save?” The New York Times, 6 November 2016. See also Aaron Kessler, “Riding Down the Highway, with Tesla’s Code at the Wheel,” The New York Times, 15 October 2015.
16. The 12 November 2017 New York Times Magazine was devoted to the issue of driverless cars, sporting the cover title, “Life After Driving.” That optimistic lead-in was followed by a spate of articles raising a wide-range of concerns regarding how a future with driverless cars would play out.
17. A former vice chairman of the Joint Chiefs of Staff complained that a single Air Force Predator can collect enough video in one day to occupy 19 analysts, noting, “Today an analyst sits there and stares at Death TV for hours on end, trying to find the single target or see something move. It’s just a waste of manpower.” Ellen Nakashima and Craig Whitlock, “Air Force’s New Tool: ‘We Can See Everything,’” Washington Post, 2 January 2011.
Captain Galdorisi is a career naval aviator who began his writing career in 1978 with an article in Proceedings. He has written 13 books, including the best-seller Tom Clancy Presents: Act of Valor, the novelization of the Bandito Brothers/Relativity Media film, as well as the Naval Institute Press book, The Kissing Sailor, which proved the identity of the two principals in Alfred Eisenstaedt’s famous photograph. His latest projects include a reboot of the Tom Clancy Op-Center series, as well as a series of Rick Holden military thrillers, starting with The Coronado Conspiracy.