Frankenstein is not about the monster. Of course, Mary Shelley’s iconic monster is arguably the most compelling character in her novel, and it is most certainly indispensable, but the story is about the doctor, the creator whose moral failure led to him losing control of his creation. Dr. Frankenstein became obsessed with the idea of creation, the scientific possibilities of bringing a new species into being, the goodness that he would accrue as creator. The moral failing of the doctor was not that he created life, but that he did not think through the implications of his actions, moral and physical, and was overwhelmed by the living being that he unleashed. “He abhorred his creature, became terrified of it, and fled his responsibilities.”1
The reason this fictional character from a 200-year-old novel matters to us modern Sea Service professionals is that we are sailing into a world in which we can build and deploy weapons with sensing and decision-making qualities that mimic those of humans. New radar, laser, and infrared-imaging technologies are creating “machine vision” that identifies patterns in images, much as human vision does.2 In addition, neural networks have been developed that use layered networks of artificial neurons and the increasingly muscular data-processing capabilities of computer chips to process images through numerous levels of abstraction—qualities of colors on one level, edges and shadows on another—and eventually teach the machine object recognition with success rates that rival those of humans. Object-recognition software running on neural networks has been developed that can tell the difference between photos of two nearly identical dog breeds, the Cardigan Welsh Corgi and the Pembroke Welsh Corgi.3
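The layered abstraction described above can be made concrete with a toy sketch. This is purely illustrative, not code from any fielded system: it shows only the kind of operation an early convolutional layer performs, sliding a small filter across an image to detect low-level features such as edges, which deeper layers then combine into object-level concepts. The kernel and image here are invented for the example.

```python
# Illustrative sketch of an early "edges and shadows" layer in machine
# vision: a 3x3 filter slid across a grid of pixel intensities. Deeper
# network layers would combine such feature maps into object recognition.

def convolve2d(image, kernel):
    """Apply a 3x3 kernel to a 2D image (valid mode, no padding)."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            acc = 0
            for di in range(3):
                for dj in range(3):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A Sobel-style vertical-edge kernel: strong responses mark places where
# brightness changes sharply from left to right.
EDGE_KERNEL = [[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]]

# Toy image: dark on the left, bright on the right -> one vertical edge.
image = [[0, 0, 0, 9, 9, 9] for _ in range(6)]

feature_map = convolve2d(image, EDGE_KERNEL)
# The largest values in feature_map line up along the brightness boundary.
```

In a real network, the kernel weights are not hand-chosen as here; they are learned from labeled examples, which is what allows the same mechanism to scale up to distinguishing one Corgi breed from another.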
It is not much of a stretch to imagine that object-recognition software that can tell the difference between similar breeds of dogs could be used to tell the difference between, say, a passenger ferry and an amphibious transport dock. Weapons that could autonomously sort through the coastal merchant ships, fishing boats, and passenger ferries to find and attack enemy ships would simplify the detect-to-engage process—and would reduce the assets and effort needed to build and maintain a tactical picture of sufficient granularity and accuracy to support targeting enemy forces over the horizon.
Such a weapon system would provide invaluable warfighting efficiency. One of the challenges of being a global Navy with fleet concentrations on either side of a continent is that, while we are a large Navy, we are also everywhere, which makes our numbers limited in any given location. It is one thing to have ten aircraft carriers; it is another to get all ten in one place at the same time. The trade-off we face in having a global presence is that we spread our forces thinner over a larger area than do our potential adversaries. Assuming that a future enemy would be smart enough to initiate and prosecute hostilities before we were able to amass overwhelming force, our forward-deployed forces on scene would likely be outnumbered. A combination of machine vision and the sensing and decision-making of a neural network, maximizing the potency of the finite capacity of outnumbered friendly forces, could tip the balance in favor of the U.S. Navy when pitted against a regional power.
Such an antiship cruise missile is but one example of what we can build—today. Another example, the British “Brimstone” missile, can tell the difference between a bus and a tank, autonomously hunt and attack targets in a designated region, and coordinate attacks with other missiles.4 The evolution of technology shows no signs of slowing, so more numerous and more complex weapons are just over the horizon. While killing individuals with drones controlled by humans thousands of miles away currently causes understandable ethical angst, such concerns are a pittance next to the concept of creating and loosing weapons that make such decisions without human intervention.
Beyond the moral failing of its protagonist, it is worth noting that the novel Frankenstein was an Industrial Revolution-era retelling of the ancient Hellenic creation myth of Prometheus. Why is this worth noting? What we are discussing is the creation of a machine with the ability to travel outside the bounds of human control, to sense in a human way what is in the environment, to distinguish between various objects, and to decide without human intervention what is attacked and what is not attacked, what is killed and what is allowed to live. We are stepping across to the deity’s side of the creation myth. Whether or not one believes that humans have free will, one must at least stop to ponder how much free will we want to create and enable within our weapon systems.
As we work through the answers to that question, we must guard against two trains of thought. The first is a complete disregard for the legal and moral implications of artificial-intelligence-enabled weapon systems, based either on ignorance of the need for any sort of rules governing the lawful use of weapons or on unbridled excitement about the possibilities of such systems. One need only remember the Internet prophets of the 1990s, whose predictions fueled the dot-com boom and subsequent bust, to understand the danger from this line of thinking. Ignorance and intoxication are not foundational to ethical or legal actions in any realm, most certainly not the realm of warfare.
The second equal and opposite train of thought is a fearful reaction that new technologies are evolving faster than ethical thought and that artificial-intelligence integration into weapon systems must be delayed or halted until new rule sets are developed and implemented globally. This school of thought ignores the fact that the foundation for the ethical use of unmanned, robotic weapons is already in place. The decisions we will be asking commanders to make are effectively the same decisions we have trained commanders to make for decades, and the general principles of armed conflict upon which those decisions are based are robust enough to guide us as we cross the coming ethical horizons.
These decisions, which have been thought through for decades, are governed by the four interlocking general principles of the law of armed conflict: distinction, military necessity, proportionality, and unnecessary suffering.5 The first of these requires that military forces both distinguish themselves from civilian entities and distinguish between military targets and civilian non-targets when launching attacks. This matter of distinction between targets located over the horizon has posed challenging questions for decades for any navy that has kept antiship cruise missiles in its arsenal. Does the person deciding to release the weapon understand which ships are enemy, neutral, or friendly, and is that information sufficient to provide the weapon what it needs to attack the correct object minutes later, when it reaches the target area? Even if a particular nation cares not an iota about the law of armed conflict, these questions must be answered for commanders to have a grasp of the effectiveness and efficiency of their forces as they prepare for potential wars.
The technologies being developed that are sophisticated enough to distinguish between types of Corgis will almost certainly improve weapon-system distinction between enemy combatants and noncombatant shipping. In fact, such a single-minded system, unconcerned with survival, dispassionately surveying the battlespace, will arguably do better at telling military targets from civilian non-targets than will humans in combat conditions. Thus, adherence to the law of distinction, at least in war at sea between opposing navies, will improve by default through the evolution of “Frankenweapons.”
‘Taking Man Out of the Loop’
Proportionality and military necessity, on the other hand, are subjective principles: they require commanders to use judgment in deciding whether the military value of an intended target outweighs the possible collateral damage from the attack.
If you have been a combat information center watchstander on a U.S. Navy ship, you have been exposed to the idea of the ethical use of autonomous weapons, even if you did not think all the way through the implications of the weapon-system capabilities. The Aegis weapon system, for example, has several modes of doctrine (and has had since the 1980s), which are algorithms that help watchstanders make decisions in a number of ways. These modes automate many routine logic functions to reduce the mundane cognitive load on humans and free them to think about the constantly developing reality of the environment in which they are operating. At the most automated end of the spectrum of capabilities is “auto-special” doctrine, which is “used to reduce reaction time and human errors when a fast-moving, anti-ship cruise missile contact is detected in very close proximity to the ship and poses an imminent danger. With Auto-Special Doctrine, once a detected contact meets the human-provided specifications, the ship’s combat systems will automatically engage the hostile missile with surface-to-air missiles.”6 This is what is meant by the term “taking the man out of the loop.”
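The logical shape of such doctrine can be sketched in a few lines. This is emphatically not the Aegis implementation; the field names and thresholds are invented for illustration. The point it makes concrete is the one in the quotation above: the human provides the specifications in advance, and once a contact meets them, the machine acts without further intervention.

```python
# Hypothetical sketch of "auto-special"-style doctrine logic. Not Aegis;
# all names and thresholds are invented. A human sets the engagement
# specifications before activation; the machine then applies them
# automatically to each detected contact.

from dataclasses import dataclass

@dataclass
class Contact:
    speed_knots: float
    range_nm: float
    closing: bool  # True if the contact's range to the ship is decreasing

@dataclass
class Doctrine:
    """Human-provided engagement specifications, set before activation."""
    min_speed_knots: float
    max_range_nm: float

def auto_engage(contact: Contact, doctrine: Doctrine) -> bool:
    """Return True if doctrine directs an automatic engagement."""
    return (contact.closing
            and contact.speed_knots >= doctrine.min_speed_knots
            and contact.range_nm <= doctrine.max_range_nm)

# The commanding officer's judgment is encoded here, once, in advance...
doctrine = Doctrine(min_speed_knots=500.0, max_range_nm=10.0)

# ...and then applied by the machine to whatever appears on the scope.
inbound_missile = Contact(speed_knots=600.0, range_nm=8.0, closing=True)
outbound_airliner = Contact(speed_knots=450.0, range_nm=8.0, closing=False)
```

Even in this toy form, the ethical structure is visible: the subjective judgment lives in the values chosen for `Doctrine`, and responsibility for every subsequent engagement traces back to the human who set them and activated the mode.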
In the most basic terms, the decision to activate auto-special doctrine is the decision to defer to a computer and its algorithm the decision of what targets are attacked. That is not a decision that anyone takes lightly, because those who make that decision, commanding officers of ships, have been trained on and have experience with the weapon system to which control is being ceded. Thus, they understand how the system works and the potential for collateral damage or fratricide; however, under such a situation the military necessity of taking the man out of the loop—the density and proximity of enemy forces (which could be manned or unmanned) that are actively trying to kill you and sink your ship—outweighs the risk of collateral damage and fratricide.
Regardless of how much control we retain over them, we humans must be held responsible and accountable for the decisions that weapons with artificial intelligence make. Weapon systems with the best technology available today, and certainly those available in the future, might make better decisions than humans, but the question is not a matter of percentages. Do we want machines to make decisions as to the relative value of potential military targets versus the potential collateral damage and death that might be caused by the attacks? We do not.
In discussing proportionality in The Contemporary Law of Armed Conflict, Leslie C. Green writes, “Although the decision as to proportionality tends to be subjective, it must be made in good faith, and may in fact come to be measured and held excessive in a subsequent war crimes trial.”7 A judgment call made “in good faith” suggests a decision that must be made by a human being, and the fact that the judgment could come to be held excessive in a war crimes trial demands that it be made by a human being, specifically, a human being who is a lawful combatant. Lawful combatants include “members of the regular armed forces of a State party to the conflict; militia, volunteer corps, and organized resistance movements belonging to a State party to the conflict, which are under responsible command, wear a fixed distinctive sign recognizable at a distance, carry their arms openly, and abide by the laws of war; and members of regular armed forces who profess allegiance to a government or an authority not recognized by the detaining power.”8 The key point in distinguishing between combatants and noncombatants is not a uniform or membership in regular armed forces; it is whether or not the individual in question is subject to military discipline that enforces compliance with the laws of armed conflict.9
Can a machine be subject to military discipline and held accountable for violations of the laws of armed conflict? No. Only humans can truly be held accountable for war crimes. “I was following orders” is not considered an acceptable defense, because we expect lawful combatants to exercise human judgment and not behave as mere automatons; therefore, we cannot absolve human accountability by claiming that “The machine decided.” For that to be an acceptable defense for an alleged war crime committed using a weapon with artificial intelligence, the human who decided to employ the weapon must have thought through the possible outcomes and made the best possible subjective judgment based on the general principles of the law of armed conflict. A human must be responsible and be held accountable for what the weapons do. Thus, Frankenstein’s monster would have been a lawful weapon, but it would not have been a lawful combatant.
The Perils of Prometheus
Under the fourth of the interlocking principles, the law of armed conflict prohibits weapons that are calculated to cause unnecessary suffering of combatants. In practice this principle is addressed through treaties, conventions, and Department of Defense reviews before the weapon is ever purchased.10 The question here is not whether we can build it, but whether we should. Dr. Frankenstein, for example, thought little of the moral implications of bringing a being into existence, of endowing it with strength greater than he could control. In fact, the size and proportion of the monster were determined purely as a matter of expediency: “As the minuteness of the parts formed a great hindrance to my speed, I resolved, contrary to my first intention, to make the being of a gigantic stature. . . .”11 As a result of the size and power of his monster, Dr. Frankenstein was impotent when his creation began causing human suffering. Our own decisions as to what weapons we field and the treaties we will sign need to hinge on two factors: control (do lawful combatants have the ability to shut down a weapon system at any time deemed necessary?) and accountability (are those in control of the weapons lawful combatants who can be tried for war crimes resulting from failures of judgment?). Human control over and accountability for the actions taken by weapons must be final and absolute.
Regardless of where one stands on the legal, moral, or even spiritual dimensions of introducing armaments with ever more sophisticated artificial decision-making capabilities, the brave new world of increasingly intelligent, human-like weapons is rising inexorably. That does not, however, mean that we are helpless before the flood. To assert as much is intellectually and morally lazy. Our choice is either to set the standards now, establishing customary law and formalizing rules in treaties, or to wait and react to behaviors that we find objectionable later. The doctors are in their laboratories, furiously creating. Those of us charged with the care and feeding of the creatures coming to life must think through our responsibilities before the beasts of burden on the horizon become, like Frankenstein’s beast, the monsters of our own destruction.
How, then, should we enter this brave new world? I offer the following:
• Start with the rules that are already in place. The existing principles of the law of armed conflict and the definition of a lawful combatant do, in fact, provide a framework for the ethical creation and employment of unmanned weapons.
• As is now the case with Aegis weapons doctrine or over-the-horizon antiship cruise missiles, those who decide to release a weapon must understand the weapon well enough to know how the weapon will decide what to attack.
• A human who is a lawful combatant must make an informed proportionality and military-necessity judgment prior to releasing the armament. To release the weapon without such an understanding is, at best, irresponsible, and, depending on the outcome, could well be unlawful.
As we move into a world where we create beings that we endow with characteristics of our choosing, we would do well to remember the creation myth upon which Shelley based her story. Prometheus was faced with giving qualities to humans after all of the good qualities—such as speed, strength, fur for warmth—had been given to other animals. And so, he gave to man the gift of fire.12 With it, humans were able to warm themselves and cook their food. In time, unleashed and egged on by the gods, humans used the gift to burn Troy to the ground.
It is one thing to use science to create machines that compensate for the frailties of humans or expand the reach and capacity of our powers; it is quite another for us to flee our responsibilities as creators based on the argument that our moral characters are too frail to control the powers created by our science. The commitment to create must be complemented by the commitment to control.
2. John Markoff, “Fearing Bombs That Can Pick Whom to Kill,” The New York Times, 11 November 2014.
3. “Rise of the Machines,” The Economist, 9 May 2015, 18–21.
4. Markoff, “Fearing Bombs.”
5. Department of the Navy, Department of Homeland Security, The Commander’s Handbook on The Law of Naval Operations, July 2007, 5-2, www.jag.navy.mil/documents/NWP_1-14M_Commanders_Handbook.pdf.
6. LT Sharif H. Calfee, USN, and Neil C. Rowe, “Multi-Agent Simulation of Human Behavior in Naval Air Defense,” Naval Engineers’ Journal, vol. 116, no. 4 (Fall 2004), 53–64.
7. Leslie C. Green, The Contemporary Law of Armed Conflict, 2nd ed., (Manchester, UK: Manchester University Press, 2000), 351.
8. Commander’s Handbook, 5-4.
9. Green, Contemporary Law, 113–14.
10. Commander’s Handbook, 5-3.
11. Mary Shelley, Frankenstein, classic edition (New York: Bantam, 1981), 38.
12. Edith Hamilton, Mythology (Boston: Little, Brown, 1942), 85–86.