In the Western Pacific, China is building up its anti-access/area-denial (A2/AD) capabilities—including communications jamming, cyber-warfare, and antisatellite weapons. In the event of a conflict, Chinese forces are likely to attack those vital communication links that enable U.S. forces to operate cohesively. In those communications-degraded/-denied environments, unless a system is manned, autonomy might be the only way to go.
For the U.S. Navy there is an added dimension, one postulated by Jan van Tol, Mark Gunzinger, Andrew Krepinevich, and Jim Thomas at the Center for Strategic and Budgetary Assessments: the service’s aircraft carriers no longer have a safe haven operating in coastal waters 200 nautical miles offshore.1 With the rising threat to the carrier from antiship cruise and ballistic missiles, those ships may be forced to stand off a significant distance, more than 1,000 nautical miles, from an enemy shoreline.
In addition, with the proliferation of advanced integrated air-defense networks and low-frequency radars that can detect and track low-observable targets, existing stealth aircraft may not have the range or the survivability to operate in those theaters.2
‘The Best Option’
In that case, the best option for the Navy might be to develop a long-range unmanned strike aircraft with wide-band, all-aspect stealth technology that could persist inside even the densest enemy air defenses. Because such an advanced adversary could significantly degrade or deny communications, the aircraft would by necessity have to be fully autonomous. In other words, it would have to operate independently of prolonged communications with its human masters, and it would need to be able to make the decision to release weapons without phoning home for a human operator’s consent.
The U.S. Air Force, too, faces basing challenges in the Western Pacific, as existing air bases such as Kadena and Misawa in Japan and Andersen Air Force Base in Guam are vulnerable to concerted air and missile attacks.3 A very stealthy long-range autonomous unmanned strike aircraft could complement the service’s prospective long-range strike bomber, penetrating airspace far too dangerous for manned aircraft or performing missions such as stand-in jamming from inside hostile territory.
While the initial cost of developing such an autonomous unmanned aircraft might be high, there could be significant long-term savings. During peacetime, an autonomous unmanned aircraft would need to be flown only occasionally, to keep up the proficiency of maintainers. It would have no need to fly training sorties or to practice; a computer can simply be programmed to do what needs to be done.
Such an autonomous unmanned aircraft also would not need downtime between deployments—just the occasional depot-level maintenance overhaul. That means that the Navy—or the Air Force if it bought some—would need only as many aircraft as required to fill the number of deployed carriers and account for attrition reserves and planes laid up in depot maintenance. There could also be significant personnel cost savings because a fully autonomous aircraft would not require pilots, and the smaller fleet would require fewer maintainers.
We Have the Technology
The technology to develop and build such an aircraft mostly already exists. Most current unmanned aircraft, such as the General Atomics Aeronautical Systems MQ-1 Predator and MQ-9 Reaper, are remotely controlled by a human operator. Others, such as the Northrop Grumman MQ-4C Triton and RQ-4B Global Hawk, have far more autonomy but are not armed. Nonetheless, a number of autonomous weapon systems that can engage hostile targets without human intervention are already in service or have reached the prototype stage.
Perhaps the two most obvious examples are cruise missiles and intercontinental ballistic missiles. Once those weapons are launched, they proceed autonomously to their preprogrammed targets—without human intervention.
If one were to imagine a U.S. Navy destroyer launching a Tomahawk cruise missile at a fixed target somewhere in the Western Pacific, a sequence of events would follow. The crew of the destroyer would receive orders to attack a particular target and would program that information into the missile. Once launched, the Tomahawk would navigate its way to the target much as a manned aircraft would, but completely without human intervention.
Against a fixed target, a bunker or factory, for example, a fully autonomous unmanned aerial vehicle (UAV) would be very similar to a cruise missile. Like a Tomahawk, the UAV would receive a target location and instructions for how to engage that target with the correct weapons. Also like the Tomahawk, the UAV would navigate to that target completely autonomously. If the UAV were then to engage that fixed target with a Joint Direct Attack Munition (JDAM) or some other weapon, there would be no real difference in practical terms between the unmanned aircraft and a cruise missile. The effect would be identical. The only difference is that the UAV could make a second pass, fly on to another target, or fly home to be rearmed. Indeed, it could be argued that with its jet engine and wings, a Tomahawk is really just a small UAV on a one-way trip.
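To make the comparison concrete, the following is a minimal conceptual sketch of the preprogrammed sequence just described. It is not drawn from any actual weapon software; the class names, placeholder functions, waypoints, and coordinates are purely illustrative assumptions.

```python
# Conceptual sketch only: a preprogrammed strike on a fixed target is procedurally
# the same for a cruise missile and a reusable UAV. All names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class Tasking:
    target_coords: tuple[float, float]   # latitude, longitude of the fixed target
    weapon: str                          # e.g., "JDAM" (illustrative)
    route: list[tuple[float, float]]     # preplanned waypoints to the target

def fly_preprogrammed_strike(tasking: Tasking, reusable: bool) -> str:
    # 1. Navigate the preplanned route autonomously (GPS/INS in both cases).
    for waypoint in tasking.route:
        navigate_to(waypoint)
    # 2. Engage the fixed target with the preloaded weapon, no operator in the loop.
    release_weapon(tasking.weapon, tasking.target_coords)
    # 3. The only practical difference: a cruise missile ends here,
    #    while a UAV can re-attack, take new tasking, or recover for rearming.
    if not reusable:
        return "mission complete (one-way)"
    return "returning to base for rearming"

# Placeholder flight and weapon functions so the sketch runs end to end.
def navigate_to(waypoint): print(f"navigating to {waypoint}")
def release_weapon(weapon, coords): print(f"releasing {weapon} on {coords}")

if __name__ == "__main__":
    tasking = Tasking((15.0, 145.0), "JDAM", [(18.0, 140.0), (16.0, 143.0)])
    print(fly_preprogrammed_strike(tasking, reusable=False))  # cruise-missile case
    print(fly_preprogrammed_strike(tasking, reusable=True))   # reusable-UAV case
```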
The more challenging scenario comes when there is an unexpected “pop-up” threat such as an S-400 surface-to-air missile battery that might be encountered by an autonomous unmanned combat air vehicle (UCAV) during a wartime sortie. Human pilots are assumed to inherently have the judgment to decide whether or not to engage such a threat. But those human pilots are making their decisions based on sensor information being processed by the aircraft’s computer. In fact, the pilot is often entirely dependent on the aircraft’s sensors and the avionics to perform a combat identification of a contact.
The Lockheed Martin F-22 Raptor and F-35 Joint Strike Fighter epitomize this dependence, especially in beyond-visual-range air-to-air combat. Both aircraft fuse correlated data from the radar, electronic support measures, and other sensors into a track file that the computer identifies as hostile, friendly, or unknown. The pilot is entirely reliant on the computer for a proper combat identification. It would be a very small technological step for the system to engage those targets autonomously, without human intervention.
The air-to-ground arena is somewhat more challenging because of the target-location errors inherent in sensors and navigation systems (as well as environmental effects and enemy camouflage). But with a combination of electro-optical/infrared cameras, synthetic aperture radar (SAR), ground moving-target indication (GMTI) radar, or even hyperspectral sensors, a computer can make a positive combat identification of ground targets, provided the data being gathered are geo-registered. Once the computer determines a positive identification, either a manned or an unmanned aircraft can engage the target. But ultimately, the computer is still making the determination that a contact is hostile.
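As a rough illustration of the fused track-file logic described above, the sketch below combines notional sensor reports into a single classification and declares a contact hostile only when the fused evidence clears a threshold. The sensor names, scores, and the 0.9 threshold are invented for illustration and do not reflect any fielded system.

```python
# Illustrative only: fuse reports from several sensors into one track classification.

from dataclasses import dataclass

@dataclass
class SensorReport:
    sensor: str            # e.g., "radar", "ESM", "EO/IR"
    hostile_score: float   # 0.0-1.0 likelihood the contact is hostile
    friendly_score: float  # 0.0-1.0 likelihood the contact is friendly

def fuse_track(reports: list[SensorReport], threshold: float = 0.9) -> str:
    """Return 'hostile', 'friendly', or 'unknown' for a single track."""
    if not reports:
        return "unknown"
    hostile = sum(r.hostile_score for r in reports) / len(reports)
    friendly = sum(r.friendly_score for r in reports) / len(reports)
    if hostile >= threshold and friendly < (1 - threshold):
        return "hostile"
    if friendly >= threshold and hostile < (1 - threshold):
        return "friendly"
    return "unknown"   # ambiguous evidence: do not engage

reports = [
    SensorReport("radar", 0.95, 0.02),
    SensorReport("ESM", 0.97, 0.01),
    SensorReport("EO/IR", 0.92, 0.03),
]
print(fuse_track(reports))  # -> "hostile" under these illustrative numbers
```

Whether the final "hostile" label is read off a cockpit display by a pilot or acted on directly by the aircraft, the classification itself is produced the same way.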
‘Zero in for the Kill’
In fact, autonomous systems capable of identifying and attacking targets at their own discretion have existed in the past. One example is the Northrop AGM-136 Tacit Rainbow antiradiation cruise missile that was canceled in 1991. The weapon was designed to be preprogrammed for a designated target area over which it would loiter. It would remain in that designated box until it detected emissions from a hostile radar. Once the Tacit Rainbow detected and identified an enemy emitter, the missile would zero in for the kill—all without human intervention.
A later example is the Lockheed Martin Low Cost Autonomous Attack System (LOCAAS). The now-defunct miniature loitering cruise-missile demonstrator was guided by GPS/INS to a target box, where it would use a laser radar to illuminate targets and match them against preloaded signatures. The weapon would then go after the highest-priority target while selecting the appropriate warhead mode to best engage it, again autonomously and without human intervention.
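The loiter-and-engage behavior that Tacit Rainbow and LOCAAS pursued reduces to fairly simple logic: stay in the assigned box, match detections against a preloaded library, and attack the highest-priority match. The sketch below is a loose illustration of that pattern; the signature names, priorities, and detection model are all invented.

```python
# Hypothetical sketch of a loiter-and-engage loop; not based on any real weapon's logic.

PRELOADED_SIGNATURES = {          # signature -> priority (lower number = more important)
    "long-range SAM radar": 1,
    "early-warning radar": 2,
    "mobile command post": 3,
}

def loiter_and_engage(detections_per_orbit, max_orbits: int = 20):
    """detections_per_orbit: one list of detected signatures per orbit of the box."""
    for orbit, detections in enumerate(detections_per_orbit, start=1):
        if orbit > max_orbits:          # fuel exhausted: no valid target found
            break
        matches = [d for d in detections if d in PRELOADED_SIGNATURES]
        if matches:
            # Attack the highest-priority recognized target, without human intervention.
            target = min(matches, key=PRELOADED_SIGNATURES.get)
            return f"engaging {target} on orbit {orbit}"
    return "no valid target found; mission ends"

orbits = [[], ["fishing-boat radar"], ["early-warning radar", "long-range SAM radar"]]
print(loiter_and_engage(orbits))  # -> engages the long-range SAM radar on orbit 3
```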
Other prominent examples include the Aegis combat system, which in its full automatic mode can engage multiple aircraft or missiles simultaneously—without human intervention. Similarly, the shipboard close-in weapon system (CIWS, or Phalanx) has an autonomous-engagement capability.
What all of that means is that fully autonomous combat identification and engagement are technically feasible for unmanned aircraft—given sophisticated avionics and smart precision-guided weapons. But while feasible, what of the moral and legal implications?
Moral and Legal
The Pentagon preemptively issued policy guidance on the development and operational use of autonomous and semi-autonomous weapons in November 2012. DOD Directive 3000.09 states: “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”4 But the policy does not expressly forbid the development of fully autonomous lethal weapon systems; it merely states that senior DOD leadership would closely supervise any such development.5
To prevent what the DOD calls an “unintended engagement,” those who authorize or direct the operation of autonomous and semi-autonomous weapon systems are required to do so with “appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement (ROE).”6
Thus it would seem that the U.S. government views the use of autonomous weapon systems as legal under the laws of war, provided certain conditions are met. A number of lawyers specializing in national-security law have suggested that fully autonomous weapons are lawful. The responsibility for the use of such a weapon would ultimately fall to the person who authorized its employment, just as it would with any other weapon.
But some organizations, such as Human Rights Watch (HRW), are adamantly opposed to any fully autonomous weapon system. In a November 2012 report titled “Losing Humanity: The Case against Killer Robots,” HRW called for an international treaty that would preemptively ban all fully autonomous weapons.7 In fact, it seems likely that the DOD policy guidance on the development of autonomous weapons stems from the conclusions of the HRW report.
The report makes three recommendations.8 The first is to “Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.” The second is to “Adopt national laws and policies to prohibit the development, production, and use of fully autonomous weapons.” And the third is to “Commence reviews of technologies and components that could lead to fully autonomous weapons. These reviews should take place at the very beginning of the development process and continue throughout the development and testing phases.”9
HRW asserts that autonomous systems are unable to meet the standards set forth under international humanitarian law. “The rules of distinction, proportionality, and military necessity are especially important tools for protecting civilians from the effects of war, and fully autonomous weapons would not be able to abide by those rules,” the report states.10
Machines Are Better than Humans
But critics such as legal scholar Benjamin Wittes of the Brookings Institution have challenged such assertions. Wittes has written that there are situations in which machines can “distinguish military targets far better and more accurately than humans can.” Those familiar with unmanned technology, sensor hardware, and software can attest that this is indeed the case.11
If a computer is given a certain set of parameters, for example a series of rules of engagement, it will follow those instructions precisely. If an autonomous weapon is designed and built to operate within the laws of war, then there should be no objection to its use.12 Under Article 36 of the 1977 Additional Protocol I to the Geneva Conventions, new weapons must be reviewed to ensure they are not inherently indiscriminate and do not cause unnecessary suffering or superfluous injury. “The fact that an autonomous weapon system selects the target or undertakes the attack does not violate the rule,” write legal scholars Kenneth Anderson and Matthew Waxman in a Hoover Institution paper titled “Law and Ethics for Autonomous Weapon Systems.”13
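To illustrate the point, a set of rules of engagement can be expressed as explicit conditions that must all be satisfied before weapon release. The sketch below is purely notional; the specific rules, field names, and data are assumptions rather than anything drawn from an actual fire-control system.

```python
# Notional sketch: ROE as explicit, machine-checkable conditions gating weapon release.

def roe_permits_engagement(track: dict) -> bool:
    rules = [
        track["identification"] == "hostile",        # positive combat ID required
        track["inside_engagement_zone"],             # only within the authorized area
        not track["protected_site_nearby"],          # e.g., hospital or cultural site
        track["estimated_civilian_risk"] == "low",   # distinction/collateral check
    ]
    return all(rules)   # every rule must pass, exactly as programmed

track = {
    "identification": "hostile",
    "inside_engagement_zone": True,
    "protected_site_nearby": False,
    "estimated_civilian_risk": "low",
}
print(roe_permits_engagement(track))  # -> True only if every condition holds
```

A machine would apply such rules exactly as written, every time; the hard part, as critics note, is writing rules and sensing well enough to capture distinction and proportionality in messy real-world conditions.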
Technology continues to move forward, and while autonomous systems may not yet be able to operate under all circumstances, it may be only a matter of time before engineers find technical solutions. In many situations, given the right sensors and algorithms, an autonomous system could distinguish lawful targets from unlawful ones; it cannot yet do so in every case. For the time being, then, autonomous weapon systems have real limitations.
A Matter of Proportion
Those limitations will not remain forever, however, as technology continues to advance and engineers make progress. As Wittes correctly points out, “To call for a per se ban on autonomous weapons is to insist as a matter of IHL [international humanitarian law] on preserving a minimum level of human error in targeting.”14 Machines are generally far more precise than human beings.
Beyond distinction, the law requires that combatants weigh the proportionality of their actions.15 “Any use of a weapon must also involve evaluation that sets the anticipated military advantage to be gained against the anticipated civilian harm,” Anderson and Waxman write. “The harm to civilians must not be excessive relative to the expected military gain.”
Though it would be technically challenging, a completely autonomous weapon system would have to address proportionality as well as distinction. The difficulty depends entirely on the specific operational scenario. An unmanned aircraft could identify and attack a hostile surface-to-air missile system deep behind enemy lines or an enemy warship at sea, where there is little chance of encountering civilians; targets inside a highly populated area are far more difficult to prosecute.
Some of the most problematic scenarios, which would not necessarily be factors in a high-end campaign against an A2/AD threat, would be challenging for a human pilot, let alone a machine. During a counterinsurgency campaign, for example, if two school buses were driving side by side in a built-up area, one carrying nuns and the other carrying heavily armed terrorists, it would be very difficult for a human pilot to determine which bus was the proper target until one of them committed a hostile act. The same would be true for an autonomous system, though in the near term the problem would also pose a technological challenge.
The human pilot would also have to judge proportionality in deciding what kind of weapon to use. Does he or she select a 2,000-pound JDAM, a smaller 250-pound small-diameter bomb, or a 20-mm cannon, or do nothing because the risk of civilian casualties is too high? Likewise, once a target has been positively identified, an autonomous weapon system would need to be programmed to select an appropriate low-collateral-damage munition, or to disengage if the danger of civilian casualties were too great. But it would take time and investment before such an autonomous system could become a reality.
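A simple sketch suggests how such a weaponeering decision might be parameterized: given a required effect and a ceiling on acceptable collateral risk, choose the smallest adequate weapon or disengage. The weapon list, effect values, and risk numbers below are invented purely for illustration.

```python
# Illustrative weaponeering sketch; the numbers are invented, not real weapon data.

WEAPONS = [
    # (name, effect against the target 0-1, expected collateral risk 0-1)
    ("2,000-lb JDAM", 0.95, 0.60),
    ("250-lb Small Diameter Bomb", 0.70, 0.20),
    ("20-mm cannon", 0.30, 0.05),
]

def select_weapon(required_effect: float, max_collateral_risk: float) -> str:
    """Return the lowest-collateral weapon that still meets the required effect."""
    candidates = [(n, e, r) for n, e, r in WEAPONS
                  if e >= required_effect and r <= max_collateral_risk]
    if not candidates:
        return "disengage: civilian risk too high for any adequate weapon"
    name, _, _ = min(candidates, key=lambda w: w[2])  # least collateral damage
    return name

print(select_weapon(required_effect=0.6, max_collateral_risk=0.3))  # -> Small Diameter Bomb
print(select_weapon(required_effect=0.9, max_collateral_risk=0.3))  # -> disengage
```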
Playing Fair?
Thus, for the near future, autonomous weapons would have to be developed incrementally, starting with systems that could engage fixed targets and obviously military targets, such as surface-to-air missile sites or tank columns on the open battlefield, during a conventional war. Likewise, in the maritime environment, where there are few civilians to speak of, autonomous systems could offer huge advantages with few drawbacks.
For the time being, autonomous weapons should not be used in complex scenarios such as counterinsurgency, where there is a significant possibility that they could cause inadvertent civilian casualties or unintended collateral damage. It may also be unwise to use a fully autonomous UCAV for missions like close-air support, particularly in “danger close” situations where friendly troops are in contact with the enemy, until the technology has been proven operationally in other roles. Human pilots have a hard enough time with those missions. Depending on the scenario, even human pilots often cannot tell friend from foe, as the U.S. Central Command is discovering over Iraq and Syria in the ongoing fight against the Islamic State terrorist group.
While some technological limitations do exist at present, they are not likely to remain roadblocks forever. Autonomous technology is advancing rapidly and could one day be precise and reliable enough not only to distinguish correct targets but also to make proportionality judgments in complex scenarios based on parameters programmed into the machine. Those parameters would not be unlike the rules of engagement given to human operators. Already, cameras and software exist that can identify individual human faces. Once a target is precisely identified, it would not be a huge leap for an autonomous system to use a low-collateral-damage weapon to eliminate hostile targets while minimizing harm to civilians.
Much of the objection to fully autonomous weapons seems to stem from a sense of “fair play” rather than any real legal issues—most of which are likely to be overcome. But any time new weapon technology emerges, there is opposition from those who believe that the technology fundamentally unbalances war.16 Objections have been raised throughout history to new technologies ranging from crossbows and longbows to machine guns and submarines because the use of such weapons was considered to be “unfair” or “unsporting.” But ultimately, the use of such weapons became a fact of life. War is not a game, and as Anderson and Waxman write: “The law, to be sure, makes no requirement that sides limit themselves to the weapons available to the other side; weapons superiority is perfectly lawful and indeed assumed as part of military necessity.”17
Indeed, there is no legal requirement for war to be fair; in fact, throughout history war has been anything but.
1. Jan van Tol, Mark Gunzinger, Andrew Krepinevich, and Jim Thomas, “AirSea Battle: A Point-of-Departure Operational Concept,” Center for Strategic and Budgetary Assessments, www.csbaonline.org/publications/2010/05/airsea-battle-concept/.
2. Ibid.
3. Ibid.
4. DOD Directive 3000.09, “Autonomy in Weapon Systems,” 21 November 2012, www.dtic.mil/whs/directives/corres/pdf/300009p.pdf.
5. Ibid.
6. Ibid.
7. Human Rights Watch, “Losing Humanity: The Case against Killer Robots,” www.hrw.org/reports/2012/11/19/losing-humanity-0.
8. Ibid.
9. Ibid.
10. Ibid.
11. Benjamin Wittes, “Does Human Rights Watch Prefer Disproportionate and Indiscriminate Humans to Discriminating and Proportionate Robots?” Lawfare, Brookings Institution, www.lawfareblog.com/2012/12/does-human-rights-watch-prefer-disproportionate-and-indiscriminate-humans-to-discriminating-and-proportionate-robots/#.UunS7GRdXB8.
12. Kenneth Anderson and Matthew Waxman, “Law and Ethics for Autonomous Weapon Systems,” Hoover Institution, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2250126.
13. Ibid.
14. Wittes, Brookings Institution.
15. Anderson and Waxman, Hoover Institution.
16. COL Lawrence Spinetta, USAF, commander of the 69th Reconnaissance Group, interview with author.
17. Anderson and Waxman, Hoover Institution.