This article is a reprise of a talk given at the Naval Postgraduate School on 16 August 2021.
Logistics has been a backdrop for the entire duration of modern warfare. Advanced computing capabilities, including Artificial Intelligence (AI), have recently emerged to assist decision making at compute speed. Much consideration has been given to the ethics of, and the circumstances under which, the United States would use AI systems to conduct attacks on an adversary. Currently, U.S. policy does not allow weapons to be employed without human involvement.1 This raises a second issue: under what circumstances would AI be allowed to make decisions that cause losses to U.S. forces? This is a key tension that must be considered by technologists as well as legal and ethical thinkers.
Recently, there has been a confluence of at least three “once in a generation” events. First is the unusually severe pandemic. The second is a return to great power competition. The third is the accelerating pace of AI development and the changes it will impose on society, including warfare. These three threads have woven together into a single theme with the potential to disrupt the tapestry of military theory and strategy.
Why these three? First, COVID has prompted new considerations about the length and risk of supply chains. Second, great power competition has made the United States and its allies reconsider the definition of “security”; more specifically, what the late Naval Postgraduate School Professor Wayne Hughes called “the sanctuary of the seas” is proving increasingly fallible against the proliferation of sensing technology. Finally, AI is causing people to make decisions in a different, and not necessarily better, manner than before.
Of these three, perhaps the most disconcerting is the return to great power competition. Leaders need to begin thinking about the intentional sacrifice of units across all domains of warfare, particularly at sea. Notably, the last true fleet engagement involving the U.S. Navy, the Battle of Leyte Gulf, is almost beyond living memory. We no longer think in terms of losing ships or entire formations, much less intentionally choosing to lose them. The advent of AI feeds this lack of imagination by offering the temptation to believe that all these problems would be solved if only the systems were smart enough.
Traditional logistics systems, both civil and military, operate on a “push-pull” principle. When units need supplies, they requisition them. A central authority creates a schedule for logistics assets and executes the plan. The mechanisms have changed over time: first by hand, then crude spreadsheets, then more advanced tools such as the Replenishment at Sea Planner (RASP), and most recently the Defense Advanced Research Projects Agency’s (DARPA) Complex Adaptive System Composition and Design Environment (CASCADE) effort. Most advances have been on the allocation end of the problem space, to wit: given a set of requirements and the means to fill them, what is the most efficient and effective allocation of resources? While substantial progress has been made in solving this class of optimization problem, the true innovation has yet to be realized. It has already been noted that the U.S. logistics fleet needs substantial recapitalization. Some of this can be solved with AI, but much of it will require bending metal and adding new classes of ships, some of which will be “attrition tolerant,” the 21st-century equivalent of the Liberty ships of World War II.
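To make the allocation framing concrete, the sketch below poses a toy version of the problem as a linear program: two notional oilers with fixed stocks, three notional task groups with fixed demands, and transit times to minimize. The quantities, distances, and the choice of SciPy’s linprog solver are illustrative assumptions, not a description of RASP or CASCADE.

```python
# A minimal sketch of the allocation problem described above: given supplies held by
# replenishment ships and demands from combatant formations, find the assignment
# that minimizes total transit time. All quantities are illustrative only.
import numpy as np
from scipy.optimize import linprog

supply = np.array([400, 300])          # barrels of fuel aboard each of 2 oilers
demand = np.array([250, 200, 150])     # barrels needed by each of 3 task groups
transit = np.array([[1.0, 2.5, 3.0],   # transit days from oiler i to task group j
                    [2.0, 1.5, 1.0]])

# Decision variables x[i, j] = barrels sent from oiler i to task group j,
# flattened row-major for linprog. Objective: minimize total barrel-days in transit.
c = transit.flatten()

# Each oiler can give no more than it carries (inequality constraints).
A_ub = np.zeros((2, 6))
for i in range(2):
    A_ub[i, i * 3:(i + 1) * 3] = 1
b_ub = supply

# Each task group must receive exactly what it requested (equality constraints).
A_eq = np.zeros((3, 6))
for j in range(3):
    A_eq[j, j::3] = 1
b_eq = demand

result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(result.x.reshape(2, 3))  # optimal allocation matrix
```

Real planners must also account for time windows, sea state, threat, and priority, but the underlying structure of costs, capacities, and demands is the same.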
The AI revolution is nigh, but it is not a panacea, especially in the military realm. In the first MOSAIC wargame series, despite the introduction of AI and novel platforms, the most important factor determining the blue side’s success or failure was accurate estimation of the disposition of the adversary force. The same driver governs non-AI command systems. Some things truly are fundamental.
True innovation will take place when individual supply assets, independent of any information other than what they can observe, use inference techniques to estimate what downrange units will need. For example, logistics units will estimate expenditures of various types of ammunition and infer where the assets that need resupply will be in the future, so that resupply assets, whose lead times are measured in days rather than hours, can be routed correctly.
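The sketch below, built on invented numbers, shows the flavor of that local inference: a resupply ship smooths a combatant’s reported daily expenditures to forecast demand, then dead-reckons where that combatant will be when the resupply can arrive. The smoothing constant, the flat two-dimensional position model, and all quantities are assumptions for illustration.

```python
# A minimal sketch of forecasting a unit's ordnance demand and future position
# from nothing more than its recent expenditure reports and last observed track.
def forecast_demand(observed_daily_expenditure, alpha=0.3):
    """Exponentially smoothed estimate of next-day expenditure for one munition type."""
    estimate = observed_daily_expenditure[0]
    for x in observed_daily_expenditure[1:]:
        estimate = alpha * x + (1 - alpha) * estimate
    return estimate

def dead_reckon(position_nm, velocity_kts, lead_time_hours):
    """Project a unit's position forward along its last observed course and speed."""
    x, y = position_nm
    vx, vy = velocity_kts
    return (x + vx * lead_time_hours, y + vy * lead_time_hours)

# Observed vertical-launch missile expenditures over the last five days (rounds/day).
vls_expended = [4, 7, 6, 9, 8]
lead_time_hours = 36  # the ammunition ship needs a day and a half to close the gap

expected_daily_use = forecast_demand(vls_expended)
rounds_needed = expected_daily_use * (lead_time_hours / 24)
rendezvous = dead_reckon(position_nm=(120.0, 45.0),
                         velocity_kts=(10.0, -4.0),
                         lead_time_hours=lead_time_hours)

print(f"Stage roughly {rounds_needed:.0f} rounds toward {rendezvous}")
```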
Should the contest be sufficiently fierce, the logistics systems may themselves decide which battles to reinforce, which to resupply, and, most critically for these purposes, which battles to forsake, as well as what circumstances warrant the intentional sacrifice of a combat formation or logistics asset.
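A deliberately simplified sketch of how such triage falls out of a constrained optimization follows: when total lift cannot cover every request, maximizing assessed mission value quietly leaves some formation unsupported. The formations, tonnages, and values are invented, and real systems would weigh far more than a single score.

```python
# A minimal sketch, under assumed numbers, of how a constrained allocator implicitly
# "chooses which to forsake": a small knapsack-style search over which requests to fill.
from itertools import combinations

requests = {              # formation: (tons of supplies requested, assessed mission value)
    "SAG Alpha":    (300, 9),
    "SAG Bravo":    (250, 6),
    "ARG Charlie":  (400, 8),
    "DesRon Delta": (150, 4),
}
capacity_tons = 700       # total lift available before the next convoy cycle

best_value, best_set = 0, ()
for r in range(len(requests) + 1):
    for subset in combinations(requests, r):
        tons = sum(requests[u][0] for u in subset)
        value = sum(requests[u][1] for u in subset)
        if tons <= capacity_tons and value > best_value:
            best_value, best_set = value, subset

forsaken = set(requests) - set(best_set)
print("Resupply:", best_set, "| Left unsupported:", sorted(forsaken))
```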
If this proposition is jarring, good. It is far better to be jarred on paper, or in a simulation, than to be jarred in real life. It is better to evaluate underdeveloped assumptions about AI and logistics now rather than once again experience the meaning of the maxim, “mistakes are written in blood.”
If the problems being worked in exercises and wargames are not sufficiently difficult to force decisionmakers to accept that certain ships will need to be sacrificed, either because of adversary action or the inability to resupply, then the scenario being fought is not tough enough. This is not a novel concept. Truly insightful thinking will originate not from assuming the best outcome possible, but from considering less-than-satisfactory courses of action with which the United States will likely be confronted in a future conflict against a peer competitor.
In the coming months and years, technologists, legal thinkers, and ethical scholars will be forced to reckon with future scenarios in which the hardest AI problems stem not from technological limitations but from ethical and policy dilemmas. Specifically, the growing AI field has spent so much time thinking about the ethics of taking adversary lives via automated decision making that it has not fully considered the prospect of sacrificing friendly lives with those same decisions. The ethical questions of this potential policy cannot be solved today, but they must be considered moving forward. Without serious thought and analysis now, these decisions will invariably fall to commanders at various echelons, under stress and fire, wishing that policymakers had thought through these issues before bullets and missiles began to fly.
1. To include both “human on the loop” and “human in the loop.”