How AI Can Help The Joint Forces With Persistent Targeting
One of the thorniest challenges in the Indo-Pacific is persistent targeting—how can the joint forces keep track of a constantly changing array of often fast-moving targets, over vast open spaces, against adversaries adept at hiding what they’re doing? How can you make sure you’re always matching up the right sensors with the right targets, and at exactly the right times, so you can maintain custody on critical targets with the needed handoff from
one sensor to the next?
These are complicated problems that require rapidly bringing together and analyzing, in real time, a growing ocean of information on both targets and sensors—something that is becoming increasingly difficult using conventional manual approaches. However, those are just the kinds of problems that artificial intelligence solutions are well suited to handle. With advances in machine learning
and other forms of AI, the joint force now has the tools and opportunity to make an exponential leap in persistent targeting in the Indo-
Pacific and elsewhere.
Gaining Situational Awareness
Establishing and improving situational awareness through the use of AI starts with a robust capability to gather, store and process large amounts of data. Fortunately, today there are data platforms that can securely bring together the full range of data that the joint forces collect on targets and sensors. These platforms can seamlessly accept data from any source, and in any format, and make it
fully available to AI and other data fusion and analytic applications.
Applying trained AI models to these large data sets can then yield rapid target identification, factoring in current or last known locations, as well as other target characteristics. These models can also correlate other sensor information about a target, such as patterns in its electromagnetic, acoustic and IR signatures.
Predicting Target Paths
Properly trained AI models also can predict where targets are likely to go, so operators can optimize potential sensor-to-sensor handoffs to maintain persistent targeting and help commanders maneuver their forces in advance of adversary action. The AI
models do this by analyzing historical data on the adversary targets and actions, looking for behaviors and patterns, such as where those targets have gone in the past in particular circumstances. For example, when there’s a certain combination of adversary aircraft flying in a “package”—such as two tankers, four bombers and six fighters—what kinds of missions did such a group execute in the past and what flight paths did they tend to take? How have such patterns been changed in the past by our responses, and by other factors, such as the weather?
The power of AI comes from its ability to combine vast amounts of historical data with the current context from any number of sources, such as intelligence, political developments, and weather. This can then provide commanders with likely paths for targets of interest and assign confidence and probability values to the different potential target movements.
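As a minimal illustration of the kind of confidence scoring described above, consider a Markov-style transition model built from historical tracks. Everything here is invented for the sketch: the zone names, the track data, and the idea of reducing a target's movement to discrete zones. An operational model would fold in far richer context, such as intelligence, political developments, and weather.

```python
from collections import Counter, defaultdict

def build_transition_model(historical_tracks):
    """Count zone-to-zone transitions observed in historical target tracks."""
    counts = defaultdict(Counter)
    for track in historical_tracks:
        for current_zone, next_zone in zip(track, track[1:]):
            counts[current_zone][next_zone] += 1
    return counts

def predict_next(counts, current_zone):
    """Return candidate next zones with confidence (probability) values."""
    observed = counts[current_zone]
    total = sum(observed.values())
    if not total:
        return {}          # no historical data for this zone
    return {zone: n / total for zone, n in observed.items()}

# Hypothetical historical tracks: sequences of patrol zones a target transited.
tracks = [
    ["A", "B", "C"],
    ["A", "B", "D"],
    ["A", "B", "C"],
]
model = build_transition_model(tracks)
print(predict_next(model, "B"))   # roughly {'C': 0.67, 'D': 0.33}
```

From zone "B" the model assigns a two-thirds confidence to zone "C" and one-third to zone "D", which is exactly the "confidence and probability values for different potential target movements" the text describes, just at toy scale.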
Predicting Sensor Accuracy
AI solutions can also identify which available sensors are best suited to maintain target custody, and can continuously perform sensor-target pairings, at machine speed, with automated handoffs—across large geographies with multiple targets and multiple sensors. For example, based on the historical data, which types of sensors have been most successful in tracking targets with certain characteristics? Which sensors are most accurate in a particular combination of environmental factors? AI models, for example, can account for water depth, sound-velocity profiles and arrival path in tracking a submarine, and also factor in the sensor’s position relative to the target. Such AI solutions can then help optimize the sensor-target pairing, ensuring the right sensor is on the right target at the right time.
AI also can look many moves ahead, to identify the best sensors—not just for the upcoming handoff, but for the next handoff and the next ones after that. As the targets move, AI models can continually update “best-sensor-to-use” calculations, in the same way that a smartphone map application continually reconfigures for the fastest route. The ability to project a complex target-tracking scenario five, ten or twenty moves ahead at machine speed can provide commanders with a huge information edge in a rapidly unfolding scenario.
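At its core, the pairing logic above is an assignment problem, re-solved each time the targets move. The sketch below searches every possible pairing by brute force, maximizing priority-weighted track quality; the sensor qualities and priorities are invented numbers, and a real system would use an efficient solver (such as the Hungarian algorithm) and far richer cost models.

```python
from itertools import permutations

def best_pairing(quality, priority):
    """Exhaustively search sensor-to-target pairings, maximizing the sum of
    priority-weighted track quality.  quality[s][t] is an estimated probability
    that sensor s maintains custody of target t; all values are illustrative.
    Assumes at least as many sensors as targets."""
    n_sensors = len(quality)
    n_targets = len(quality[0])
    best_score, best = -1.0, None
    for perm in permutations(range(n_sensors), n_targets):
        score = sum(priority[t] * quality[perm[t]][t] for t in range(n_targets))
        if score > best_score:
            best_score, best = score, perm
    return best, best_score

# Three sensors, two targets; target 0 is the commander's higher priority.
quality = [
    [0.9, 0.4],   # sensor 0
    [0.7, 0.8],   # sensor 1
    [0.3, 0.6],   # sensor 2
]
priority = [2.0, 1.0]
pairing, score = best_pairing(quality, priority)
print(pairing)   # (0, 1): sensor 0 on target 0, sensor 1 on target 1
```

Re-running this optimization at every timestep, against predicted rather than current target positions, is the map-app-style recalculation described above.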
Prioritizing And Orchestrating The Sensors
It’s not uncommon that a particular sensor is needed for two different targets at the same time. How does the commander decide? Here again AI can help. It starts by evaluating the targets themselves, ingesting the commander’s target prioritization along with the likelihood of losing target custody. For example, a commander may prioritize a highly accurate sensor for a high-priority target. But if the custody of that high-priority target can be assured with a different sensor for a short period of time, then the highly accurate sensor could potentially be re-tasked and then returned to the high-priority target without any mission degradation. That would free
up the more accurate sensor to provide information on a target that might otherwise be difficult to acquire. The promise of AI is that it can sort out much of this complexity in real time to maintain persistent targeting and custody on multiple targets in an ever-changing environment. AI solutions can also deal with changing commander priorities, changing environmental factors, sensor degradation, and adversary counteractions all at machine speed—delivering the commander a synchronized battlespace-awareness plan optimized for both sensor and targets.
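In its simplest form, that re-tasking judgment reduces to a custody-threshold check over the borrow window. A deliberately tiny sketch, with an assumed 0.85 custody threshold and invented per-interval probabilities:

```python
def retask_ok(backup_custody_probs, threshold=0.85):
    """Release the high-accuracy sensor for another target only if the backup
    sensor's predicted custody probability stays above the threshold for every
    interval of the borrow window.  Threshold and probabilities are illustrative."""
    return min(backup_custody_probs) >= threshold

print(retask_ok([0.95, 0.91, 0.88]))   # True: custody assured, sensor can be borrowed
print(retask_ok([0.95, 0.70, 0.88]))   # False: custody at risk, keep the sensor
```

A fielded system would weigh many more factors (sensor degradation, environment, adversary counteractions), but the threshold test captures the core trade the text describes.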
These AI solutions also learn over time. As they get “smarter,” they can better sort out which combinations of sensors are most effective at tracking which targets and under which conditions. As models incorporate more data and the results of human decision-making across many different scenarios, they will also improve anomaly detection, target path prediction, and synchronized sensor-target pairing.
Staying Ahead Of Adversaries
As the battlespace in the Indo-Pacific and other areas of interest becomes increasingly complex and crowded, and as adversaries get more skillful at hiding their intentions, persistent targeting will only get more difficult. Integrating AI solutions into today’s operations can give the joint forces a strategic edge.
LT. GEN. CHRIS BOGDAN ([email protected]) is a Booz Allen senior vice president who leads the firm’s aerospace business, delivering solutions to DoD, NASA, and commercial clients. As a 34-year U.S. Air Force officer and test pilot, he flew more than 30 different aircraft types and was the Program Executive Officer for the F-35 Joint Strike Fighter Program for the Air Force, U.S. Navy, U.S. Marine Corps, and 11 allied nations.
PATRICK BILTGEN, PH.D. ([email protected]) is the director of AI mission engineering at Booz Allen, leading data analytics and AI development for space and intelligence programs. He is the author of Activity-Based Intelligence: Principles and Applications, and recipient of the 2018 Intelligence and National Security Alliance (INSA) Edwin Land Industry Award.
How AI Can Help Integrate Allies And Partners In The Indo-Pacific
One of the challenges in integrating the U.S. and its allies and partners in the Indo-Pacific is that there is a great deal of complexity in how a potential adversary might engage each of the different countries in different ways leading up to a conflict: tactically, strategically, economically, and politically. And there is just as much complexity in how each country might respond in its own way.
It is difficult for wargaming and exercises to fully capture this complexity, with its clues to effective mission-partner integration. However, an emerging form of AI known as reinforcement learning can play an important role. Essentially, this technology makes it possible for each country in a virtual wargame—whether an adversary, the U.S., an ally, or a partner—to be represented by its own AI “agent.”
Each agent—a sophisticated algorithm—brings together and analyzes vast amounts of data about that country, including its military capabilities, its political and economic environment, and its posture toward the other nations. A unique feature of reinforcement learning is that it allows the AI agent to pursue its own best interest, so that in a wargame representing a country, the AI behaves much like that country would.
This can provide valuable insight into the often-difficult challenges of mission-partner integration. For example, an AI agent representing a critical partner in the Indo-Pacific might discover, over multiple scenarios, that certain security cooperation activities would likely elicit economic or diplomatic pressures from an adversary, and that the best course of action would be to disengage and remain neutral.
Or, the AI agent might find that if allies or partners have certain defensive weapons or other protections in place before a conflict, that would deter—or at least defer—adversary aggression. Such AI-informed scenarios can help map out the steps needed to make sure our allies and partners get the capabilities they need to maximize deterrence.
Defense organizations are already beginning to use reinforcement learning in operational planning, by wargaming how opposing forces might engage tactically in battle. But reinforcement learning can go even further, by helping to integrate the U.S. and its allies and partners in the Indo-Pacific through all phases of competition, crisis, and conflict, to help create a force of forces.
How Reinforcement Learning Works
With reinforcement learning, algorithms try to achieve specific goals, and get rewarded when they do. Using trial and error, the algorithms test out random possible actions. The closer those actions get the algorithms to their goals, the higher their score. If the actions move the algorithms away from their goals, the score drops.
In this way, the algorithms can rapidly work through thousands or even hundreds of thousands of scenarios, in a game-like setting, to determine the best course of action. With each iteration, they learn more about what works and what doesn’t, and get closer and closer to the optimal solution.
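The trial-and-error reward loop just described is, in essence, Q-learning. Below is a self-contained toy version: instead of a wargame, the "world" is an invented five-state corridor, and the agent learns, through repeated episodes and rewards, to move toward a goal state. All parameters and the environment are illustrative.

```python
import random

random.seed(0)

N_STATES, GOAL = 5, 4          # a toy corridor: states 0..4, goal at 4
ACTIONS = (-1, +1)             # move left, move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """One move in the corridor; the only reward is 1.0 for reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

def greedy(state):
    """Best-known action in this state, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(200):           # 200 trial-and-error episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        nxt, r = step(s, a)
        # score the action by its reward plus the discounted future value
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(nxt, b)] for b in ACTIONS) - Q[(s, a)])
        s = nxt

policy = [greedy(s) for s in range(N_STATES - 1)]
print(policy)                  # the learned policy heads toward the goal: [1, 1, 1, 1]
```

In a wargaming context, the states, actions, and rewards would instead encode a country's situation, options, and interests, but the learning loop—act, score, update, repeat—is the same.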
Because the algorithms can perceive their environment in a virtual wargame, and participate autonomously, they are considered to be AI agents. And reinforcement learning is well suited for wargaming. An AI agent
can take a side and play a role, trying to achieve its own specific goals and learning as it goes along. Just as important, multiple agents in a wargame—for example, representing various allies and partners in the Indo-Pacific—can learn how to best work together to achieve common goals in the face of an adversary.
Virtual wargaming is just one example of how reinforcement learning can assist defense organizations. It can also help optimize weapons pairing, the kill chain process, cybersecurity, and other challenges.
How Reinforcement Learning Is Trained
The process of integrating allies and partners with reinforcement learning begins by bringing together a wide range of data about a particular country. In addition to information on the country’s military and other resources, it can include its recent history—for example, how an ally’s economy and politics were affected by outside pressures in the past, and how the country responded when faced with certain pressures from an adversary. All this information teaches the AI agent what kinds of actions it might see from agents representing other countries, and what kinds of actions it can take on its own.
At the same time, the AI agent is provided with that country’s goals, based on the knowledge of experts on its culture, politics, economy, military, and other areas. The agent is then programmed to use the actions at its disposal to achieve those goals. While it may be impossible to capture the full picture of a country—or the complete international environment—even limited AI agents, interacting with one another, can provide important insights. And as new information about countries is added into the mix, AI agents continually learn.
Reinforcement Learning In Action
In a virtual wargame, AI agents for the adversary, the U.S., and various allies and partners enter a scenario and begin interacting with each other autonomously—each balancing its own strengths and weaknesses to achieve its goals the best way possible. In one scenario, for example, an adversary might try to use economic or diplomatic coercion against a number of different allies and partners at the same time, or launch sophisticated disinformation campaigns designed to pit countries against one another and break apart the coalition.
With each country pursuing its own best interest, the AI agents can reveal how they might work together against the adversary, or splinter from the others. A partner in the Pacific might decide to provide some assets to the coalition, but not others. An ally might be particularly susceptible to an adversary’s disinformation campaign, and refuse to cooperate with other allies or partners. These kinds of scenarios can suggest actions the U.S. and its allies and partners might take, which they can then try out as the virtual wargame continues.
A wargame can play out with hundreds of thousands of iterations, giving the AI agents the chance to try out any number of possibilities, and find the best solutions. Throughout the process, domain experts continually verify the AI agents’ goals and actions, making sure they accurately reflect the real world.
Reinforcement learning doesn’t replace current approaches to wargaming, planning and other activities. Rather, it is a powerful tool to aid decision-making, as leaders seek to integrate the U.S. and its mission partners into a potent force of forces in the Indo-Pacific.
Lt. Col. Michael Collat ([email protected]) is a Booz Allen principal leading the delivery of data analytics, counter-malign foreign influence, and digital training solutions across USINDOPACOM. A former Air Force intelligence and communications officer, he has also led projects delivering cyber fusion processes, information operations assessments, and regional maritime and aerospace strategies.
Vincent Goldsmith ([email protected]) is a Booz Allen solutions architect providing transformational technical delivery across USINDOPACOM. He focuses on wargaming, modeling and simulation, immersive, cloud, and AI solutions, and he partners with warfighters in region to integrate the latest innovative technology into their baselines, to advance the mission.