If supply chains are disrupted during a conflict in the Pacific, commanders will have to figure out, on the fly and often separately from one another, how to get logistical support through other means. However, they may not have the information they need to get that support in the fastest and most secure ways possible. And if they tap supplies from alternative sources, they may not know how that will affect the missions originally designated for those supplies.
Deep reinforcement learning, an emerging form of AI, may soon make it possible for the joint forces to create what might be thought of as a self-healing supply web. For missions in the Pacific, for example, this web would provide commanders with a constantly updated, near-real-time operational view of supply chains, platforms, and recommended routes.
If, for example, a port in the Pacific were lost, commanders would see a revised operational view showing new locations from which replenishment could come, and the best ways to get it to the various points of need. The supply web would also map out the cascading implications of the lost port, showing how OPLANs and missions across the Pacific would likely be affected. Dashboards would allow commanders to work together to choose among the best alternatives, based on mission priorities, rapidly changing conditions, and other factors.
Transforming From Supply Chain to Supply Web
The supply web would be created by securely bringing together a wide range of siloed supply-chain data from across the DoD and consolidating it in a data mesh. Advanced analytics would then use that data to map out the entire supply chain for each OPLAN, operation, and exercise. The data would include all the relevant ports, airfields, and other supply sources, as well as the current and expected stores of fuel, munitions, and other supplies at those locations. Other aspects of the supply chains would also be integrated into the supply web, such as capacity, routes, and expected demand.
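To make the consolidation step concrete, the sketch below merges two hypothetical siloed record sets into a single location-keyed view. This is only a minimal illustration of the idea; all field names, locations, and quantities are invented for the example and do not reflect any actual DoD data or system.

```python
# Minimal sketch: consolidating notional, siloed supply records into one
# location-keyed view, standing in for the data-mesh consolidation step.
# All names and values below are hypothetical.
from collections import defaultdict

fuel_silo = [
    {"location": "port_a", "commodity": "fuel", "on_hand": 5000, "expected": 2000},
]
munitions_silo = [
    {"location": "port_a", "commodity": "munitions", "on_hand": 300, "expected": 100},
    {"location": "airfield_b", "commodity": "munitions", "on_hand": 120, "expected": 0},
]

def consolidate(*silos):
    """Merge per-commodity records from separate silos into one view,
    keyed by location, then by commodity."""
    view = defaultdict(dict)
    for silo in silos:
        for rec in silo:
            view[rec["location"]][rec["commodity"]] = {
                "on_hand": rec["on_hand"],
                "expected": rec["expected"],
            }
    return dict(view)

supply_view = consolidate(fuel_silo, munitions_silo)
```

A real data mesh would add schema governance, access control, and streaming updates; the point here is simply that once siloed records share a common key, one query can answer "what is at this location?" across all commodities.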
The next step in creating the supply web is to bring in deep reinforcement learning, a goal-based form of AI. Deep reinforcement learning uses numerous possible scenarios, created through modeling and simulation, to learn what works and what doesn't to achieve a particular goal.
A key feature of deep reinforcement learning is that it essentially adopts the point of view of a particular entity, whether a person, a country, or, in this case, the joint forces and their allies and partners in the Pacific, and then tries to achieve that entity's goals. With the supply web, the deep reinforcement learning might work through hundreds of thousands of scenarios of how a conflict in the Pacific could unfold, learning not just what would help a particular mission, but what would be needed by the allied forces as a whole.
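The goal-based learning loop can be shown in miniature. The sketch below trains a tabular Q-learning agent, a simple form of reinforcement learning standing in for the deep variant the article describes, on a tiny notional supply network. The agent repeatedly simulates routes and learns which choices best achieve the goal of reaching the point of need. The network, rewards, and parameters are all hypothetical.

```python
# Minimal sketch: tabular Q-learning on a toy supply network. A deep RL
# system would replace the table with a neural network and the toy graph
# with high-fidelity simulations; the learning loop is the same in spirit.
import random

# Notional network: each node maps to the nodes reachable from it.
NETWORK = {
    "depot":    ["port_a", "port_b"],
    "port_a":   ["ship"],
    "port_b":   ["airfield"],
    "airfield": ["ship"],
    "ship":     [],  # point of need (terminal state)
}

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Learn action values by simulating many delivery scenarios."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s, acts in NETWORK.items() for a in acts}
    for _ in range(episodes):
        state = "depot"
        while NETWORK[state]:
            acts = NETWORK[state]
            # Epsilon-greedy: mostly exploit, sometimes explore.
            if rng.random() < epsilon:
                action = rng.choice(acts)
            else:
                action = max(acts, key=lambda a: q[(state, a)])
            # Reward: +10 for reaching the point of need, -1 per hop.
            reward = 10.0 if action == "ship" else -1.0
            future = max((q[(action, a)] for a in NETWORK[action]), default=0.0)
            q[(state, action)] += alpha * (reward + gamma * future - q[(state, action)])
            state = action
    return q

def best_route(q):
    """Follow the highest-valued action from the depot to the goal."""
    state, route = "depot", ["depot"]
    while NETWORK[state]:
        state = max(NETWORK[state], key=lambda a: q[(state, a)])
        route.append(state)
    return route
```

After training, the learned values favor the shorter route through port_a over the two-hop detour through port_b, because the per-hop penalty teaches the agent to prefer faster delivery.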
Even before logistics become contested, the supply web would serve several important functions. Its AI would look at the current supply chain operational view and evaluate numerous scenarios, far more than human planners could, to identify for commanders the optimal ways to support OPLANs and missions.
When there are changes to supply chains, such as shortages or delays in moving shipments, that information would be fed automatically into the data mesh. The supply web would constantly pulse the mesh, staying fully updated.
At the same time, the AI would use intelligence and other information to predict where supply chains are most vulnerable to disruption by adversaries, including through kinetic warfare and cyberattack. Such vulnerabilities might be hidden, revealed only by working through many thousands of possible scenarios. This gives commanders the chance to address those vulnerabilities before a conflict, for example by moving supplies afloat or to locations that adversaries may be less likely to attack.
A Self-Healing Web
The supply web becomes particularly valuable if supply chains are disrupted. If a replenishment ship is lost, for example, the deep reinforcement learning reconfigures, in near-real time, how supplies can get to the points of need. In this sense, the supply web is self-healing. One reason this healing happens so quickly is that the AI doesn't start from scratch; it uses what it has already learned through the hundreds of thousands of course-of-action scenarios.
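The replanning step itself can be illustrated with a toy example. The sketch below reroutes around a lost node on a notional network using breadth-first search, a deliberately simple stand-in for the learned policy described above; the node names are hypothetical.

```python
# Minimal sketch: "self-healing" rerouting on a toy network. When a node
# is lost, replanning excludes it and finds the shortest remaining route.
# A simple BFS stands in for the learned deep RL policy.
from collections import deque

NETWORK = {
    "depot":    ["port_a", "port_b"],
    "port_a":   ["ship"],
    "port_b":   ["airfield"],
    "airfield": ["ship"],
    "ship":     [],
}

def reroute(network, start, goal, lost=frozenset()):
    """Return the shortest route from start to goal that avoids lost nodes,
    or None if no route survives the disruption."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in network.get(node, []):
            if nxt not in seen and nxt not in lost:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

On this network the primary route runs through port_a; marking port_a as lost makes the same call fall back to the longer route through port_b and the airfield, which is the essence of the self-healing behavior, reduced to a few lines.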
In a conflict, the supply web would continuously reconfigure logistics in any number of ways. For example, a multi-day battle in the Pacific might take a carrier strike group so far afield that many of the ships would run out of missiles before the expected replenishment could reach them. The AI would run new scenarios, leveraging its prior learning, to work out how supply chains could be rearranged so that the group could get the replenishment.
Currently, it is difficult for planners to get a full understanding of how multiple supply chain disruptions in a conflict might cascade across an AOR. Supplies would have to be rerouted in numerous ways, all at the same time. If a carrier gets fuel from a new location, what does that mean for the missions that were originally meant to get that fuel?
The supply web’s deep reinforcement learning would work out these implications in near-real time as the disruptions occur. And instead of reconfiguring supply chains mission by mission, it would work toward the ultimate goal—winning the war.
The supply web would not take away decision-making from commanders. Rather, it would give them more hard data to work with, and help them make faster decisions. For example, the supply web might present one alternative that will get supplies to the point of need faster, and another that will be slower but more secure. Commanders still need to use their experience and judgment to decide on the best path.
Commanders across an AOR could work together on the alternatives through common, interactive dashboards. They would be able to ask questions of the data to gain more insight for decisions. And they could add new information for the supply web to consider, such as changing conditions, mission priorities, and OPLANs.
Col. Boyd Miller ([email protected]) is a contested logistics subject-matter expert in Booz Allen's artificial intelligence division. He has more than 30 years of experience in defense, joint, and maritime logistics operations, including as J4 Director, United States Southern Command.
Ki Lee ([email protected]) is Booz Allen’s Global Defense Technology Officer. He drives technology innovation, application, and adoption for Booz Allen’s global defense business, with a focus on supporting mission needs and gaps.
Scott McCain ([email protected]) is a veteran logistics and sustainment expert at Booz Allen who designs, develops, and implements cognitive-enhancing software solutions that enable a wide range of DoD clients to address the complex challenges of contested logistics.