Creating A Digital OPLAN Environment To Integrate Allies And Partners In The Indo-Pacific
Comprehensive operation plans (OPLANs) can help integrate the U.S. and its allies and partners across the Indo-Pacific—but to stay ahead of fast-moving changes in the region, it is increasingly important that the plans be frequently and rapidly updated. The challenge is that OPLANs tend to be static documents that often must be updated manually, a process that can be cumbersome, time-consuming, and incomplete.
However, by bringing their OPLANs into an interactive digital planning environment, the joint forces can use what’s known as “rapid modeling and simulation,” aided by AI, to test and refine their OPLANs—often as fast as conditions change. And they can use that same modeling and simulation to help put the plans into action in a confrontation.
A digital planning environment can be particularly valuable in integrating the coalition in the Indo-Pacific as a combined force of forces. The digital environment brings together vast amounts of data from across the coalition, making it possible to run tens of thousands of simulations to help planners determine how the U.S. and its allies and partners can work together in optimal ways.
And because the digital environment is interactive, planners can experiment hands-on with scenarios of their own—moving red or blue force assets in a particular area of the South China Sea, for example, and then watching as the AI-aided modeling and simulation predicts how a confrontation is likely to play out.
Planners can collaborate at the same time from multiple locations across the Indo-Pacific, including from allied and partner nations.
Nothing about this approach takes away decision making from planners or commanders. Rather, it gives them more hard data to work with, often in near-real time. They still need to use their experience, knowledge, and judgment to evaluate the data and update the OPLANs as they see fit.
BUILDING THE DIGITAL OPLAN ENVIRONMENT
Advances in data science are now making it possible to bring together and integrate an almost unlimited amount of OPLAN data from any number of sources. This includes all of the relevant time-phased force-deployment data now in spreadsheets, PowerPoint presentations, and other formats, which can be digitized through natural language processing and other techniques. Current OPLAN data can be combined with a wide range of unstructured data, from sources such as real-time intelligence reports, satellite imagery, acoustic signatures, and infrared thermography.
In addition, defense organizations can bring in large amounts of information about our potential adversaries, including detailed historical data—for example, how they have responded to certain activities by the joint forces in the past.
With this approach, all of the available data is ingested into a common, cloud-based repository, such as a data lake, and tagged with metadata. This breaks down stove-piped databases and makes it possible to analyze the entire repository of information all at once.
Although the data is consolidated, it is actually more secure than it would be in scattered, traditional databases. By tagging the data at the cell level, defense organizations can tightly control who has access to each piece of data and under what circumstances.
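The cell-level tagging idea can be sketched in a few lines of code. The record fields, classification levels, and releasability rules below are illustrative assumptions, not any real schema or policy:

```python
# Sketch of cell-level metadata tagging and access control in a
# consolidated repository. All names and policy rules are notional.

def tag_record(record, classification, releasable_to):
    """Attach access-control metadata to each field ("cell") of a record."""
    return {
        field: {"value": value,
                "classification": classification,
                "releasable_to": set(releasable_to)}
        for field, value in record.items()
    }

def query(repository, nation, clearance):
    """Return only the cells a given user is allowed to see."""
    levels = ["UNCLASSIFIED", "SECRET", "TOP SECRET"]
    visible = []
    for rec in repository:
        allowed = {f: c["value"] for f, c in rec.items()
                   if nation in c["releasable_to"]
                   and levels.index(c["classification"]) <= levels.index(clearance)}
        if allowed:
            visible.append(allowed)
    return visible

repo = [
    tag_record({"unit": "7th Fleet", "fuel_days": 12}, "SECRET", ["USA", "AUS", "JPN"]),
    tag_record({"unit": "SOF detachment", "location": "redacted"}, "TOP SECRET", ["USA"]),
]

print(query(repo, "AUS", "SECRET"))      # the coalition partner sees only the releasable record
print(query(repo, "USA", "TOP SECRET"))  # a fully cleared U.S. user sees both
```

Because the policy travels with each cell rather than with a whole database, the same query interface can serve users with very different accesses.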
TESTING AND REFINING OPLANS WITH RAPID MODELING AND SIMULATION
Once defense organizations have created a digital planning environment, they can test and refine their OPLANs with modeling and simulation, taking advantage of the combined information in the data lake to factor in tens of thousands of variables. With the help of AI, new rapid modeling and simulation tools can play out OPLANs’ courses of action, along with the branches and sequels, to determine the probability of coalition success every step of the way.
Planners might find, for example, that some bases would be at risk of running out of fuel or munitions during a conflict, or that certain U.S. aircraft would likely be more successful than others in particular missions. The AI might recommend courses of action, or specific branches and sequels, that planners may not have considered.
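As a toy illustration of the kind of narrow question such simulation answers, here is a minimal Monte Carlo sketch of the fuel-sustainment example. The `prob_fuel_shortfall` helper and every figure in it are invented for the example, not drawn from any real plan:

```python
import random

# Monte Carlo sketch: given uncertain daily fuel consumption, what is the
# probability a base runs dry before resupply arrives? Figures are notional.

def prob_fuel_shortfall(stock_tons, mean_burn, burn_sd, days_to_resupply,
                        trials=10_000, seed=0):
    rng = random.Random(seed)
    shortfalls = 0
    for _ in range(trials):
        remaining = stock_tons
        for _day in range(days_to_resupply):
            remaining -= max(0.0, rng.gauss(mean_burn, burn_sd))
            if remaining <= 0:
                shortfalls += 1
                break
    return shortfalls / trials

risk = prob_fuel_shortfall(stock_tons=900, mean_burn=100, burn_sd=25,
                           days_to_resupply=10)
print(f"P(shortfall before resupply) = {risk:.2f}")
```

A real OPLAN simulation would of course couple thousands of such variables; the point is that each run is cheap, so tens of thousands of trials can be replayed whenever the inputs change.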
At the same time, advanced visualization tools, including interactive maps showing coalition and adversary forces, would allow planners to test out possible new scenarios. They might plug in different types of aircraft, for example, to see which are likely to be most effective, or pair manned and unmanned systems. Interactive visualization tools can also allow them to pose critical questions, such as whether a particular action would have a higher likelihood of success than others, but would cost more lives.
A digital environment also enables planners to take advantage of an emerging form of AI, known as reinforcement learning, to help predict adversaries’ first moves and subsequent actions. By analyzing vast amounts of data about a country—including its military capabilities, its doctrine, and its past actions—reinforcement learning can create an “AI agent” to represent that country in modeling and simulation. A unique feature of reinforcement learning is that it allows the AI agent to pursue its own best interest, so that in modeling and simulation it behaves much like that country would.
RAPIDLY UPDATING OPLANS
Just as important, a digital environment makes it possible for planners to update OPLANs almost as fast as conditions change. New information—such as changes in coalition or adversary logistics and capabilities—is constantly fed into the digital environment. Ongoing AI-aided modeling and simulation quickly recalculates how current OPLANs are likely to play out and makes new recommendations.
Planners can see, often in near-real time, how they might need to modify their OPLANs. If they do decide to make changes, they can run their updated OPLANs through another round of modeling and simulation and see the new predicted outcomes. They can then continue to refine the plans as needed.
The same approach can help the joint forces make a seamless transition from operation plans to execution plans. As conditions rapidly cascade in a crisis or conflict, for example, decision-makers can quickly see the actions they might take that have the highest probability of success. Because the AI has already worked out tens of thousands of scenarios with the OPLANs, it can take advantage of what it has already learned to stitch together—in near-real time— new recommended courses of action.
The joint forces have a wealth of data available for operation planning. An interactive digital planning environment, along with AI-aided modeling and simulation, would allow them to take full advantage of that data to keep OPLANs updated and help integrate the allies and partners into a joint force of forces.
Maj. Gen. David E. Clary ([email protected]) is a principal at Booz Allen, where he leads the firm’s support to coalition warfighters in the Republic of Korea.
Kevin Contreras ([email protected]) leads Booz Allen’s delivery of digital solutions for the rapid modeling, simulation, and experimentation of multi-domain concepts for DoD and global defense clients.
Doug Hamrick ([email protected]) leads Booz Allen’s development of AI-enabled predictive maintenance and supply-chain capabilities for clients throughout the DoD and other federal agencies.
Check out more sponsored articles.
Protecting Classified Algorithms In Unmanned Systems In The Pacific
In the coming years, the joint forces will increasingly use artificial intelligence in unmanned systems in the Pacific. Many of the algorithms will be mission-specific and classified, making them potential targets of adversaries who may try to steal or disrupt them.
Protecting classified algorithms in unmanned systems in the Pacific presents a unique set of challenges. Unmanned systems may operate closer to adversaries than manned systems. And with unmanned systems, humans may not be available to detect attacks on the AI and take corrective measures.
However, by adopting a series of rigorous protections across the entire lifecycle of the algorithms—through all stages of development and deployment—and by building in resiliency, the joint forces can help keep classified algorithms in unmanned systems secure.
Protecting The Algorithms During Development
Often, many of the essential elements of a machine learning algorithm will be built in an unclassified environment, to take advantage of the expertise and innovations of the wider organization. The algorithm is then moved into a classified environment, where mission-specific and other classified elements are added.
It’s critical that algorithms be protected while still in the unclassified environment. If an algorithm is stolen, an adversary may figure out its purpose and methods—even if it hasn’t yet been configured for a specific mission—and potentially develop countermeasures.
The joint forces can help protect the algorithms for unmanned systems in their early, unclassified stages through government-run AI/ML factories. Instead of relying on the industrial sector—which may not apply cybersecurity consistently—these factories can impose rigorous security controls through all phases of algorithm development, both unclassified and classified. Many defense organizations are already moving toward this level of security with other types of software factories, and they can achieve the same goals with factories that specifically develop AI and ML.
At the same time, the joint forces can require that vendors adopt a comprehensive set of cybersecurity techniques when developing algorithms. Such measures include real-time threat-sharing, so that companies can take advantage of their collective knowledge, and cyber-as-a-service, so that there is active monitoring of systems and networks rather than just snapshot audits.
Protecting The Algorithms During Transfer And Testing
Extra protection is also needed when transferring algorithms from unclassified to classified environments, and when moving algorithms between the labs doing the development and testing. The longtime practice of moving electronic information from one system to another by people—known as the “sneakernet”—carries a risk that malware could be placed on the laptops, disks, and other items used in the transfers. With advances in technology, there is now more security in an infrastructure that allows direct connections between systems with different security classifications, especially on research and engineering networks.
The joint forces can also take steps to protect classified algorithms for unmanned systems during testing itself. When algorithms are being tested in real-world conditions, adversaries may be able to determine how they’re being used, or even steal them. One solution is to use digital engineering to test the algorithms with modeling and simulation. This not only keeps the algorithms from being exposed to adversaries during testing—it also makes it possible to simulate cyberattacks and model different defenses.
Protecting The Algorithms During Deployment
Classified algorithms require particularly rigorous protections once they’re deployed in unmanned systems. If a cyberattack corrupts the data being analyzed by the algorithms—or compromises the AI/ML systems themselves—humans may not be immediately aware that something is wrong.
One way of reducing the risk is to develop automated responses to data drift or model drift. If the data coming in from sensors is significantly different from what might be expected—potentially indicating a cyberattack—the AI/ML system might automatically shut down, or switch to data from other types of sensors. There is both an art and a science to identifying patterns in the data that might suggest a cyberattack, and establishing the thresholds that will trigger the automated responses.
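A minimal sketch of such a drift tripwire follows. The threshold, sensor names, and fallback behavior are illustrative assumptions, and a fielded system would use far more sophisticated statistics:

```python
from statistics import mean, stdev

# Automated data-drift response, sketched: compare incoming sensor readings
# against a baseline window and fall back to an alternate feed when the
# shift exceeds a (notional) threshold that might indicate a cyberattack.

def drift_score(baseline, window):
    """How many baseline standard deviations the recent mean has shifted."""
    return abs(mean(window) - mean(baseline)) / stdev(baseline)

def select_feed(baseline, primary_window, threshold=3.0):
    if drift_score(baseline, primary_window) > threshold:
        return "FALLBACK_SENSOR"   # possible spoofing or data corruption
    return "PRIMARY_SENSOR"

baseline = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 9.7, 10.0]
print(select_feed(baseline, [10.0, 9.9, 10.1]))   # normal readings
print(select_feed(baseline, [14.8, 15.2, 15.0]))  # large shift, switch feeds
```

Setting the threshold is exactly the "art and science" the text describes: too tight and benign variation triggers shutdowns, too loose and an attack slips through.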
Another step is to make it more difficult for a cyberattack on one AI/ML system on an unmanned vehicle to spread to other components of the vehicle—for example, from algorithms analyzing radar data to ones analyzing video feeds or signals intelligence. Here, the solution is to create a separate security boundary for each AI/ML system on the unmanned platform. This makes it possible to more tightly control the flow of data from one system to another, and to cut the connections between systems, if necessary, to keep a cyberattack from spreading.
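One way to picture the per-system boundary is as a data-flow gate: traffic moves only over explicitly allowed links, and any link can be severed during an incident. The class and link names below are invented for illustration:

```python
# Illustrative data-flow gate between per-system security boundaries on an
# unmanned platform. Subsystem names are notional.

class DataFlowGate:
    def __init__(self, allowed_links):
        self.links = set(allowed_links)          # (source, destination) pairs

    def send(self, source, dest, payload):
        if (source, dest) not in self.links:
            return None                          # blocked by boundary policy
        return payload

    def sever(self, source, dest):
        """Cut a link to stop a suspected compromise from spreading."""
        self.links.discard((source, dest))

gate = DataFlowGate([("radar_ai", "fusion"), ("video_ai", "fusion")])
assert gate.send("radar_ai", "fusion", {"track": 42}) is not None
assert gate.send("radar_ai", "video_ai", {"track": 42}) is None  # never allowed

gate.sever("radar_ai", "fusion")       # radar AI suspected of compromise
assert gate.send("radar_ai", "fusion", {"track": 42}) is None
```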
Additional steps can help protect classified algorithms in the event an unmanned vehicle is captured by an adversary. Along with anti-tamper measures—which can make it difficult for an adversary to access and possibly reverse engineer a captured AI/ML system—the joint forces can apply an approach known as disaggregation.
An AI/ML system—one that analyzes radar data, for example—typically has a complex collection of mission algorithms. With disaggregation, no single unmanned vehicle (UV) in a mission carries all the algorithms. Each does just a portion of the analysis and sends its piece of the puzzle to a central processing location. The goal is that even if adversaries can overcome the anti-tamper measures on a captured AI/ML system, they won’t be able to glean enough information to unlock the secrets of the system and its algorithms.
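The shape of a disaggregated pipeline can be sketched as below. The stage names, thresholds, and data are invented; the point is only that each vehicle holds one stage and the assembled product exists only at the central node:

```python
# Illustrative disaggregation: each unmanned vehicle (UV) carries only one
# stage of the analysis and forwards its partial product. Stages are notional.

def uv_detect(raw_returns):
    """UV 1: flag returns above a (notional) detection threshold."""
    return [r for r in raw_returns if r["power"] > 0.5]

def uv_classify(detections):
    """UV 2: coarse classification of each detection."""
    return [{**d, "class": "fast_mover" if d["speed"] > 300 else "slow_mover"}
            for d in detections]

def central_fuse(classified):
    """Central node: only here are the pieces assembled into tracks."""
    return {d["id"]: d["class"] for d in classified}

raw = [{"id": 1, "power": 0.9, "speed": 450},
       {"id": 2, "power": 0.2, "speed": 100},
       {"id": 3, "power": 0.7, "speed": 120}]

tracks = central_fuse(uv_classify(uv_detect(raw)))
print(tracks)  # {1: 'fast_mover', 3: 'slow_mover'}
```

Capturing the vehicle that runs `uv_detect` alone reveals a thresholding step, but not the classifier or the fused picture.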
Protecting The Algorithms With Resiliency
If cyber protections do fail, the classified algorithms on an unmanned vehicle need to be replaced as quickly as possible with new and better algorithms to maintain the mission. However, with conventional approaches, algorithms can’t easily be switched in and out—often the entire AI/ML system has to be rearchitected, which can take months. In addition, algorithms and other components in a system are often so interdependent that fixing one problem—such as switching out an algorithm—can create other, unexpected problems in the system, leading to rework and more delays.
Once again, the modular approach provides an advantage. Using open architectures and other open techniques, the joint forces can build AI/ML systems that make it possible to quickly plug-and-play new algorithms and other components. In addition to helping maintain the mission, this has other benefits. AI/ML developers can regularly tweak the classified algorithms and replace them proactively—before any cyberattack—to make it difficult for adversaries to build up information on them. Plug-and-play also makes repurposing classified algorithms from one mission to the next easier and more secure.
Protecting classified algorithms on unmanned systems in the Pacific presents its own set of challenges. But by constructing strong cyber defenses throughout the algorithms’ entire lifecycle, and by emphasizing resiliency, the joint forces can take steps to meet those challenges.
Jandria Alexander ([email protected]) is a nationally recognized cybersecurity expert and a vice president at Booz Allen who leads the firm’s business for NAVSEA and S&T, including unmanned systems, resilient platform and weapon systems, data science, and enterprise digital transformation strategy and solutions for Navy clients.
Mike Morgan ([email protected]) is a principal at Booz Allen who leads the firm’s NAVAIR line of business. He has over 20 years of experience supporting NAVAIR programs with a focus on systems development and cybersecurity for unmanned systems and C4ISR solutions.
boozallen.com/defense
Protecting Missions From Cyber Attack With Real-Time Risk Maps
Perhaps the greatest challenge in protecting mission-critical systems from cyberattack is that there are so many possible ways an adversary could strike. A shipboard missile system in the Pacific, for example, might be disabled by an adversary that jams satellites or spoofs sensors, or disrupts command-and-control communications, or perhaps shuts off power to the cooling system of a building, a thousand miles away, that houses DoD computer servers. A single component of a mission-critical system might have dozens of such vulnerabilities, some well-known to cyber defenders—but potentially many others that are commonly overlooked.
The task of charting a system’s complex web of cyber dependencies, when done manually, can take months, even years. And even then, defense organizations often can’t capture the full range of downstream vulnerabilities that can endanger a mission.
However, new approaches, which take advantage of advances in machine learning and modeling and simulation, are now making it possible for the joint forces to create comprehensive maps of cyber risk to mission. With these maps, defense organizations can get a clear view of where their mission systems are most vulnerable to cyberattack, often in real time. Organizations can then prioritize their resources to best protect their most important missions.
Building A Risk Map Of “Probable” Dependencies
Defense organizations usually have a good understanding of their information technology (IT)—their computer-connected systems—and so can protect those components with traditional cyber defenses. However, organizations don’t always know all the ways their computer networks rely on operational technology (OT), which can range from HVAC systems on a base to radar sensors on a ship.
Organizations theoretically could connect much of their operational technology to their computer networks. However, they’re reluctant to do so, because it would greatly expand the attack surface, providing many more ways a cyber attacker could gain access to the system. Unfortunately, that leaves defense organizations with limited visibility into their OT vulnerabilities. For example, an organization’s high-priority communications network might be using only one of 25 antennas at an airbase, but the organization doesn’t know exactly which one it is. Tracking down the right antenna would take time, and it isn’t feasible to manually go into that level of detail for every possible piece of OT. A single Navy base might have thousands of complex system dependencies.
However, defense organizations can take a different approach, by creating a map of probable dependencies with the help of machine learning. For example, an organization might not have the resources to fully protect all 25 antennas at the airbase, just to make sure the one being used by the high-priority network is covered. But if it could narrow down the number to four or so—based on the types of antennas commonly used with such networks—it might be feasible to put protections in place.
Machine learning can play a key role here. The first step is to provide machine learning models with the known IT and OT dependencies of various mission systems across the DoD, based on knowledge gathered manually over the years. The models would then look for patterns in the data, and predict a given system’s most likely dependencies—for example, certain types of antennas used by certain types of mission systems. To make sure the machine learning models are accurate, cyber analysts would do regular spot checks, and work with AI experts to tweak the models as necessary.
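Stripped to its essentials, the "probable dependencies" idea looks like the sketch below: historical pairings of mission types and antenna types rank which of a site's antennas a mission most likely uses. The training pairs and inventory are invented, and a real model would use far richer features than a frequency count:

```python
from collections import Counter

# Narrow a site's 25 antennas to the few a mission probably depends on,
# using (mission type, antenna type) pairings gathered over the years.
# All data here is notional.

known_dependencies = [
    ("satcom_relay", "parabolic"), ("satcom_relay", "parabolic"),
    ("satcom_relay", "phased_array"), ("hf_broadcast", "whip"),
    ("hf_broadcast", "log_periodic"), ("satcom_relay", "parabolic"),
]

def likely_antennas(mission_type, inventory, top_types=2):
    """Return the antennas whose type ranks highest for this mission."""
    counts = Counter(t for m, t in known_dependencies if m == mission_type)
    ranked = [t for t, _ in counts.most_common(top_types)]
    return [a for a in inventory if a["type"] in ranked]

inventory = [{"id": f"ANT-{i:02d}", "type": t}
             for i, t in enumerate(["parabolic"] * 3 + ["whip"] * 10 +
                                   ["phased_array"] * 2 + ["log_periodic"] * 10)]

candidates = likely_antennas("satcom_relay", inventory)
print(len(inventory), "->", len(candidates), "candidates")  # 25 -> 5 candidates
```

This is the narrowing the article describes: protecting five probable antennas is feasible where protecting all twenty-five is not.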
Modeling And Simulation To Play Out Risk Scenarios
Once organizations have created a map of probable mission dependencies, they can use modeling and simulation to gain a deeper understanding of the vulnerabilities. By playing out various scenarios, the modeling and simulation might show, for example, how damage to computer servers on the ground could disable a particular satellite array, which in turn could prevent GPS signals from updating a carrier group’s inertial navigation. With such scenarios, defense organizations can gain insight into which vulnerabilities would have the most impact on a mission, and so know where to focus their efforts.
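At its core, playing out such a scenario is a reachability question over the dependency map: if one node fails, what is lost downstream? The toy graph below mirrors the server-to-satellite-to-GPS chain from the example; node names are illustrative:

```python
# Play out a compromise scenario over a (notional) mission-dependency graph:
# if one node fails, which downstream capabilities are lost?

dependents = {                       # node -> things that rely on it
    "ground_servers": ["satellite_array"],
    "satellite_array": ["gps_signal"],
    "gps_signal": ["inertial_nav_updates"],
    "inertial_nav_updates": [],
    "radar_site": ["air_picture"],
    "air_picture": [],
}

def impact(compromised):
    """All capabilities lost, directly or transitively, from one failure."""
    lost, frontier = set(), [compromised]
    while frontier:
        node = frontier.pop()
        for down in dependents.get(node, []):
            if down not in lost:
                lost.add(down)
                frontier.append(down)
    return lost

print(impact("ground_servers"))   # the whole navigation chain is lost
print(impact("radar_site"))       # only the air picture is lost
```

Ranking nodes by the size and importance of their impact set is one simple way to decide which vulnerabilities matter most to a mission.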
At the same time, defense organizations can use modeling and simulation to identify alternative paths if a mission dependency is compromised. For example, modeling and simulation might find that a high-priority mission system could quickly and successfully switch from one set of sensors to another—or perhaps could use the bulk of another system’s IT and OT dependencies if necessary.
All this information can be presented to cyber analysts and decision-makers with user-friendly dashboards and other visualization tools that show, at a glance, where potential vulnerabilities lie. The dashboard might show, for example, a mission system’s 100 or so probable dependencies, identifying the ones that are not fully protected.
Real-Time Monitoring Of Cyber Risk To Mission
Creating a map of mission dependencies is not a one-and-done job. On any given system, components are constantly being switched in and out as technology and requirements change. And as missions change as well, they might take on new vulnerabilities. Once the map of dependencies is created, however, it becomes easier to keep track of changes. Cyber analysts can log in new IT and OT components as they come online.
Because the modeling and simulation is run continuously, with each change it automatically looks for newly created vulnerabilities, and possible alternate paths if a mission dependency is compromised.
Protecting Missions Under Active Cyber Attack
Real-time monitoring of cyber risk to mission is critical if a system is under attack. Analysts can be alerted if a particular dependency is being attacked or has already been compromised. The alerts would show the likely impact to the mission—which could be minor or major—and present analysts with alternatives.
In some cases, the rerouting of dependencies might be automatic— for example, a missile system might move from one set of sensors to another. Other situations might require cyber analysts and decision-makers to step in to do the rerouting, using the dashboards and other visualization tools as guides.
With the help of machine learning, modeling and simulation, and other advanced approaches, defense organizations can build real-time cyber maps that show the often hidden ways missions could be degraded by adversaries. Organizations can use the maps to plug vulnerabilities as they arise, and move quickly to protect missions under active cyberattack.
Kevin Coggins ([email protected]) is a Booz Allen vice president working across the complex landscape of weapons systems, critical infrastructure, cyber, space, and intelligence—including leading the firm’s PNT business. His journey as a force recon Marine, weapons system engineer, tech startup founder, Army SES, and industry executive has given him a unique perspective on solving the myriad technology challenges facing the warfighter.
Dale Savoy ([email protected]) leads Booz Allen’s cyber warfare domain efforts in vulnerability and mission risk analysis. His focus is on defending DoD weapon systems and critical infrastructure from cyberattack, through mission-dependency mapping and vulnerability management.
Capt. Alan Macquoid ([email protected]) is a leader in weapon systems and critical infrastructure cyber risk assessment and mitigation efforts. He has over 35 years of experience integrating kinetic and non-kinetic effects with emphasis on cyber across all domains of warfare.
SPY-6: The Future of Navy Integrated Missile Defense
The following is an excerpt from an interview between Bill Hamblet, Editor-In-Chief of Proceedings, and Scott Spence, the Executive Director of Naval Integrated Solutions at Raytheon Missiles and Defense.
HAMBLET: What is SPY-6 designed to do? What are its threat targets and what advantages does it offer over other radars?
SPENCE: SPY-6 is an integrated air and missile defense radar. It can cover both missions simultaneously. It was designed to be modular and scalable, for all the different threats as well as the different ships it will go on. The first SPY-6, V1, is the 37 RMA [radar module assembly] radar, the largest in the family. It will go on the Flight III destroyers, starting with the USS Jack Lucas (DDG-125).
HAMBLET: It isn’t just for destroyers, correct?
SPENCE: No, SPY-6 V2 and V3 will go on amphibious ships and carriers. Overall, the radar will go on seven classes of Navy ships: Flight-III destroyers, Flight IIA backfit destroyers, the Ford-class carriers, and it will be backfitted onto the older aircraft carriers and amphibious assault ships.
HAMBLET: Earlier this year, Raytheon Missile and Defense was awarded a $651 million contract with options totaling up to $2.5 billion for full-rate production for up to 31 Navy ships. What’s the significance of that award?
SPENCE: It shows the Navy’s commitment to the radar as their signature program. It is being delivered across all the different variants, driving down acquisition and O&M costs for years to come.
HAMBLET: How is SPY-6 easier to maintain than earlier versions?
SPENCE: The radar only needs two tools to be maintained. It uses a common software baseline across all platforms, allowing the Navy to make a fix or add a capability into the software baseline and deliver it to all ships that need that capability. Modularity allows common training across all platforms. The largest cost of any system is O&M. Driving down those costs is critical to ensuring affordability for years to come.
HAMBLET: How does SPY-6 enable distributed maritime operations?
SPENCE: This radar is going to see farther and see smaller objects at longer distances, providing a better picture of the battlespace. Second, there are advanced capabilities being developed, including network cooperative radar, that allow the radars to communicate among themselves to provide a better picture of the battlespace. Gallium nitride technology in the transmitters allows it to create more power and see farther. Increased receiver sensitivity allows it to better process that information.
HAMBLET: Can SPY-6 integrate with other systems the Navy has fielded?
SPENCE: Yes. It is combat-management-system agnostic, so it can provide data to whatever combat management system needs it.
HAMBLET: Other countries are buying and building Aegis-class ships. Is there foreign interest in SPY-6?
SPENCE: International partners want to work with the U.S. Navy, and the best way is to use the same technology. Because SPY-6 is combat-management-system agnostic, it can integrate with many different systems in multiple navies across the world.
HAMBLET: How does SPY-6 address the missile threats the Chinese military is fielding?
SPENCE: We’ve participated in flight testing with the Missile Defense Agency and Navy on hypersonic threat profiles. Because it can see smaller targets at greater range, SPY-6 creates additional battlespace to handle those threats. The more time we can give sailors to react to incoming threats, the better they’ll be able to defeat them.
More here: Proceedings Podcast Episode 294: Raytheon discusses the U.S. Navy’s SPY-6 radar
The Coming Of The CMV-22B To The Carrier Strike Group
The CMV-22B is no more a replacement for the C-2 Greyhound than the MV-22 was for the CH-46. The MV-22 covered the functions of the CH-46 for the Marine Corps but represented a disruptive change that has transformed the USMC and its operations. The CMV-22 will provide the functionality of the C-2 for the carrier strike group, but it is entering the carrier strike group at a time of profound change, and it will contribute to that change.
When I met with Vice Admiral Miller, the Navy’s Air Boss, in February of this year, we discussed how the carrier strike group was moving from what might be referred to as the integrated air wing to the integratable air wing. In that interview, Vice Admiral Miller highlighted how the Navy was looking at the coming of the Osprey. It is a different aircraft, and the question will be, as it operates effectively in its logistics mission, what other contributions might it make to the fleet?
So how should the Navy operate, modernize, and leverage its Ospreys? For Miller, the initial task is to get the Osprey onboard the carrier and integrated with CVW operations. But while doing so, it is important to focus on how the Osprey working within the CVW can provide a more integrated force.
“Vice Admiral Miller and his team are looking for the first five-year period in operating the CMV-22 for the Navy to think through the role of the Osprey as a transformative force, rather than simply being a new asset onboard a carrier. Such an approach is embedded in the rethink from operating and training an integrated air wing to an integratable air wing.”
A measure of the change from the C-2 to the CMV-22B is that the Naval Aviation Warfighting Development Center at Naval Air Station Fallon is already anticipating the arrival of the CMV-22B in the fleet and, as part of its focus on training the integratable air wing, is preparing for the new aircraft. To be clear, the C-2 has never been part of NAWDC or its predecessors.
I had the chance to see the CMV-22B at the reveal ceremony held in Amarillo, Texas on February 7, 2020 where I first met Capt. Dewon “Chainsaw” Chaney, the Commander of COMVRMWING (or Fleet Logistics Multi-Mission Wing), and most recently I visited his command in North Island, San Diego on July 13, 2020. As Captain Chaney put it in his address to the audience at the reveal ceremony in February 2020: “CMV-22s will operate from all aircraft carriers providing a significant range increase for operations from the Sea Bases enabling Combatant Commanders to exercise increased flexibility and options for warfare dominance. If you’re in a fight, it’s always good to have options! Every month following the first initial deployment, there will be a CMV-22 detachment operating with a US aircraft carrier somewhere in the world.”
During my visit to North Island, I had a chance to discuss the way ahead with “Chainsaw” for his command in terms of putting the Osprey squadrons in place. The first squadron, VRM-30, was stood up prior to the creation of the wing, and its first aircraft arrived in June 2020. Captain Chaney then noted that this October the fleet replacement squadron, VRM-50, will be stood up; it will take that squadron two years before it can train new pilots. The counterpart to VRM-30 will be VRM-40, but all three squadrons will be under COMVRMWING. The third squadron will be based on the East Coast.
Captain Chaney concluded: “I do believe that the Navy is really going to appreciate the capabilities that the CMV-22 is going to bring to the strike group, and they’re going to want it to do more.”
How AI Can Help Integrate Allies And Partners In The Indo-Pacific
One of the challenges in integrating the U.S. and its allies and partners in the Indo-Pacific is that there is a great deal of complexity in how a potential adversary might engage each of the different countries in different ways leading up to a conflict—tactically, strategically, economically, and politically. And there is just as much complexity in how each country might respond in its own way.
It is difficult for wargaming and exercises to fully capture this complexity, with its clues to effective mission-partner integration. However, an emerging form of AI known as reinforcement learning can play an important role. Essentially, this technology makes it possible for each country in a virtual wargame—whether an adversary, the U.S., an ally, or a partner—to be represented by its own AI “agent.”
Each agent—a sophisticated algorithm—brings together and analyzes vast amounts of data about that country, including its military capabilities, its political and economic environment, and its posture toward the other nations. A unique feature of reinforcement learning is that it allows the AI agent to pursue its own best interest, so that in a wargame the agent behaves much like the country it represents would.
This can provide valuable insight into the often-difficult challenges of mission-partner integration. For example, an AI agent representing a critical partner in the Indo-Pacific might discover, over multiple scenarios, that certain security cooperation activities would likely elicit economic or diplomatic pressures from an adversary, and that the best course of action would be to disengage and remain neutral.
Or, the AI agent might find that if allies or partners have certain defensive weapons or other protections in place before a conflict, that would deter—or at least defer—adversary aggression. Such AI-informed scenarios can help map out the steps needed to make sure our allies and partners get the capabilities they need to maximize deterrence.
Defense organizations are already beginning to use reinforcement learning in operational planning, by wargaming how opposing forces might engage tactically in battle. But reinforcement learning can go even further, by helping to integrate the U.S. and its allies and partners in the Indo-Pacific through all phases of competition, crisis, and conflict, to help create a force of forces.
How Reinforcement Learning Works
With reinforcement learning, algorithms try to achieve specific goals, and get rewarded when they do. Using trial and error, the algorithms test out random possible actions. The closer those actions get the algorithms to their goals, the higher their score. If the actions move the algorithms away from their goals, the score drops.
In this way, the algorithms can rapidly work through thousands or even hundreds of thousands of scenarios, in a game-like setting, to determine the best course of action. With each iteration, they learn more about what works and what doesn’t, and get closer and closer to the optimal solution.
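The trial-and-error loop described above can be sketched in a few lines of code. Below is a minimal tabular Q-learning example on a toy one-dimensional environment; the environment, rewards, and parameters are invented for illustration and bear no relation to an actual wargaming system.

```python
import random

# Minimal tabular Q-learning sketch: an agent on a line of positions 0..4
# learns to reach the goal at position 4. Rewards are +1 at the goal and 0
# elsewhere; all values here are illustrative only.

random.seed(0)

N_STATES = 5          # positions 0..4; position 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    if nxt == N_STATES - 1:
        return nxt, 1.0, True     # reached the goal: reward earned
    return nxt, 0.0, False

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action,
        # occasionally explore a random one (trial and error).
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Nudge the estimate toward reward plus discounted future value.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the learned policy steps right from every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

With each pass, the estimates near the goal propagate backward, so the agent needs less and less random exploration—the same iterative improvement the article describes at wargame scale.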
Because the algorithms can perceive their environment in a virtual wargame, and participate autonomously, they are considered to be AI agents. And reinforcement learning is well suited for wargaming. An AI agent can take a side and play a role, trying to achieve its own specific goals and learning as it goes along. Just as important, multiple agents in a wargame—for example, representing various allies and partners in the Indo-Pacific—can learn how to best work together to achieve common goals in the face of an adversary.
Virtual wargaming is just one example of how reinforcement learning can assist defense organizations. It can also help optimize weapons pairing, the kill chain process, cybersecurity, and other challenges.
How Reinforcement Learning Is Trained
The process of integrating allies and partners with reinforcement learning begins by bringing together a wide range of data about a particular country. In addition to information on the country’s military and other resources, it can include its recent history—for example, how an ally’s economy and politics were affected by outside pressures in the past, and how the country responded when faced with certain pressures from an adversary. All this information teaches the AI agent what kinds of actions it might see from agents representing other countries, and what kinds of actions it can take on its own.
At the same time, the AI agent is provided with that country’s goals, based on the knowledge of experts on its culture, politics, economy, military, and other areas. The agent is then programmed to use the actions at its disposal to achieve those goals. While it may be impossible to capture the full picture of a country—or the complete international environment—even limited AI agents, interacting with one another, can provide important insights. And as new information about countries is added into the mix, AI agents continually learn.
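As a rough illustration of how expert-supplied goals might shape an agent's behavior, the sketch below scores candidate wargame outcomes against a weighted set of country goals. The goals, weights, and outcome values are all hypothetical placeholders, not a representation of any real country.

```python
# Hypothetical sketch: a partner country's goals, as judged by experts on
# its culture, politics, economy, and military, encoded as a weighted
# reward function. All names and numbers are invented for illustration.

GOAL_WEIGHTS = {
    "sovereignty": 0.5,
    "economic_stability": 0.3,
    "alliance_cohesion": 0.2,
}

def reward(state):
    """Score a wargame state against the country's weighted goals.

    `state` maps each goal to a 0..1 measure of how well it is being met.
    """
    return sum(w * state.get(goal, 0.0) for goal, w in GOAL_WEIGHTS.items())

# Two candidate outcomes: cooperate with the coalition, or stay neutral.
cooperate = {"sovereignty": 0.9, "economic_stability": 0.4, "alliance_cohesion": 1.0}
neutral   = {"sovereignty": 0.6, "economic_stability": 0.9, "alliance_cohesion": 0.2}

print(reward(cooperate))   # 0.5*0.9 + 0.3*0.4 + 0.2*1.0 = 0.77
print(reward(neutral))     # 0.5*0.6 + 0.3*0.9 + 0.2*0.2 = 0.61
```

An agent trained against this reward would favor cooperation here, but shifting the weights—say, raising the economic-stability weight—could flip the preference, which is exactly the kind of sensitivity planners would want to surface.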
Reinforcement Learning In Action
In a virtual wargame, AI agents for the adversary, the U.S., and various allies and partners enter a scenario and begin interacting with each other autonomously—each balancing its own strengths and weaknesses to achieve its goals the best way possible. In one scenario, for example, an adversary might try to use economic or diplomatic coercion against a number of different allies and partners at the same time, or launch sophisticated disinformation campaigns designed to pit countries against one another and break apart the coalition.
With each country pursuing its own best interest, the AI agents can reveal how they might work together against the adversary, or splinter from the others. A partner in the Pacific might decide to provide some assets to the coalition, but not others. An ally might be particularly susceptible to an adversary’s disinformation campaign, and refuse to cooperate with other allies or partners. These kinds of scenarios can suggest actions the U.S. and its allies and partners might take, which they can then try out as the virtual wargame continues.
A wargame can play out with hundreds of thousands of iterations, giving the AI agents the chance to try out any number of possibilities, and find the best solutions. Throughout the process, domain experts continually verify the AI agents’ goals and actions, making sure they accurately reflect the real world.
Reinforcement learning doesn’t replace current approaches to wargaming, planning, and other activities. Rather, it is a powerful tool to aid decision-making, as leaders seek to integrate the U.S. and its mission partners into a potent force of forces in the Indo-Pacific.
Lt. Col. Michael Collat ([email protected]) is a Booz Allen principal leading the delivery of data analytics, counter-malign foreign influence, and digital training solutions across USINDOPACOM. A former Air Force intelligence and communications officer, he has also led projects delivering cyber fusion processes, information operations assessments, and regional maritime and aerospace strategies.
Vincent Goldsmith ([email protected]) is a Booz Allen solutions architect providing transformational technical delivery across USINDOPACOM. He focuses on wargaming, modeling and simulation, immersive, cloud, and AI solutions, and he partners with warfighters in region to integrate the latest innovative technology into their baselines, to advance the mission.
How AI Can Help The Joint Forces With Persistent Targeting
One of the thorniest challenges in the Indo-Pacific is persistent targeting—how can the joint forces keep track of a constantly changing array of often fast-moving targets, over vast open spaces, against adversaries adept at hiding what they’re doing? How can you make sure you’re always matching up the right sensors with the right targets, and at exactly the right times, so you can maintain custody on critical targets with the needed handoff from one sensor to the next?
These are complicated problems that require rapidly bringing together and analyzing, in real time, a growing ocean of information on both targets and sensors—something that is becoming increasingly difficult using conventional manual approaches. However, those are just the kinds of problems that artificial intelligence solutions are well suited to handle. With advances in machine learning and other forms of AI, the joint force now has the tools and opportunity to make an exponential leap in persistent targeting in the Indo-Pacific and elsewhere.
Gaining Situational Awareness
Establishing and improving situational awareness through the use of AI starts with a robust capability to gather, store, and process large amounts of data. Fortunately, today there are data platforms that can securely bring together the full range of data that the joint forces collect on targets and sensors. These platforms can seamlessly accept data from any source, and in any format, and make it fully available to AI and other data fusion and analytic applications.
The application of trained AI models on these large sets of data can then result in rapid target identification, factoring in current or last known locations, as well as other target characteristics. These models can also correlate other sensor information about a target, such as patterns in the electromagnetic, acoustic, and IR signatures.
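One simple way to picture this kind of signature correlation is a nearest-match lookup against known target classes. The sketch below is purely illustrative; the class names and signature values are invented, and a fielded system would use trained models rather than a plain distance lookup.

```python
import math

# Illustrative sketch: correlate a new sensor contact against known target
# classes by comparing signature features. The classes and values are
# made up for this example.

# Known classes: (electromagnetic, acoustic, infrared) signature features,
# already normalized to the 0..1 range.
KNOWN = {
    "frigate":   (0.8, 0.6, 0.3),
    "submarine": (0.1, 0.9, 0.1),
    "fishing":   (0.2, 0.3, 0.2),
}

def identify(contact):
    """Return the known class whose signature is closest to the contact."""
    return min(KNOWN, key=lambda name: math.dist(contact, KNOWN[name]))

print(identify((0.15, 0.85, 0.12)))
```

The contact above sits close to the hypothetical submarine signature, so the lookup returns that class; a trained model would do the same kind of matching, but over far more features and with calibrated confidence.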
Predicting Target Paths
Properly trained AI models also can predict where targets are likely to go, so operators can optimize potential sensor-to-sensor handoffs to maintain persistent targeting and help commanders maneuver their forces in advance of adversary action. The AI models do this by analyzing historical data on the adversary targets and actions, looking for behaviors and patterns, such as where those targets have gone in the past in particular circumstances. For example, when there’s a certain combination of adversary aircraft flying in a “package”—such as two tankers, four bombers and six fighters—what kinds of missions did such a group execute in the past and what flight paths did they tend to take? How have such patterns been changed in the past by our responses, and by other factors, such as the weather?
The power of AI comes from its ability to combine vast amounts of historical data with the current context from any number of sources, such as intelligence, political developments, and weather. This can then provide commanders with likely paths for targets of interest and assign confidence and probability values to the different potential target movements.
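A bare-bones version of this idea is a transition model built from historical movements, which yields the kind of probability values mentioned above. The zone names and observation history below are invented for illustration; a real model would condition on much more context, such as weather and intelligence.

```python
from collections import Counter

# Sketch: predict a target's next position from historical movement data
# using simple transition frequencies. All data here is invented.

# Historical observations: (current_zone, next_zone) pairs
history = [
    ("zone_a", "zone_b"), ("zone_a", "zone_b"), ("zone_a", "zone_c"),
    ("zone_b", "zone_c"), ("zone_a", "zone_b"),
]

def predict(zone):
    """Return (next_zone, probability) pairs, most likely first."""
    counts = Counter(nxt for cur, nxt in history if cur == zone)
    total = sum(counts.values())
    return [(nxt, n / total) for nxt, n in counts.most_common()]

print(predict("zone_a"))   # zone_b is most likely: 3 of 4 observations
```

Attaching probabilities to each candidate path, rather than predicting a single path, is what lets commanders weigh confidence against risk when positioning sensors in advance.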
Predicting Sensor Accuracy
AI solutions can also identify which available sensors are best suited to maintain target custody, and can continuously perform sensor-target pairings, at machine speed, with automated handoffs—across large geographies with multiple targets and multiple sensors. For example, based on the historical data, which types of sensors have been most successful in tracking targets with certain characteristics? Which sensors are most accurate in a particular combination of environmental factors? AI models, for example, can account for water depth, sound-velocity profiles, and arrival path in tracking a submarine, and also factor in the sensor’s position relative to the target. Such AI solutions can then help optimize the sensor-target pairing, ensuring the right sensor is on the right target at the right time.
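The sensor-target pairing problem described here is a form of assignment optimization. The sketch below brute-forces the highest-scoring one-to-one pairing over a tiny invented score table; a real system would use a dedicated solver and far richer accuracy models instead of enumerating permutations.

```python
from itertools import permutations

# Brute-force sketch of sensor-to-target pairing: choose the assignment
# with the highest total predicted tracking accuracy. All scores are
# invented for illustration.

SENSORS = ["radar", "sonar", "ir"]
TARGETS = ["sub", "ship", "aircraft"]

# SCORE[sensor][target]: predicted tracking accuracy for that pairing
SCORE = {
    "radar": {"sub": 0.1, "ship": 0.7, "aircraft": 0.9},
    "sonar": {"sub": 0.9, "ship": 0.4, "aircraft": 0.0},
    "ir":    {"sub": 0.2, "ship": 0.8, "aircraft": 0.6},
}

def best_pairing():
    """Try every one-to-one assignment; keep the highest-scoring one."""
    best, best_total = None, -1.0
    for perm in permutations(TARGETS):
        total = sum(SCORE[s][t] for s, t in zip(SENSORS, perm))
        if total > best_total:
            best, best_total = dict(zip(SENSORS, perm)), total
    return best, best_total

pairing, total = best_pairing()
print(pairing)
```

Note that the optimizer maximizes the total across all pairings, which is why no single sensor simply grabs its individually best target; that global view is what manual pairing struggles to maintain at scale.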
AI also can look many moves ahead, to identify the best sensors—not just for the upcoming handoff, but for the next handoff and the next ones after that. As the targets move, AI models can continually update “best-sensor-to-use” calculations, in the same way that a smartphone map application continually reconfigures for the fastest route. The ability to project a complex target-tracking scenario five, ten or twenty moves ahead at machine speed can provide commanders with a huge information edge in a rapidly unfolding scenario.
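Looking several handoffs ahead can be framed as a small dynamic program: pick the sensor sequence that maximizes total track quality along the target's predicted path, with a penalty for each handoff. All quality values, sensor names, and the penalty below are invented for illustration.

```python
from functools import lru_cache

# Sketch: plan a sensor sequence over a target's predicted path. Each
# handoff to a different sensor costs a small penalty, so the plan trades
# per-step quality against handoff churn. All numbers are invented.

STEPS = 4                      # predicted target positions over 4 time steps
SENSORS = ["sat", "uav", "ship_radar"]
HANDOFF_PENALTY = 0.3

# QUALITY[t][sensor]: predicted track quality at time step t
QUALITY = [
    {"sat": 0.9, "uav": 0.2, "ship_radar": 0.1},
    {"sat": 0.7, "uav": 0.6, "ship_radar": 0.2},
    {"sat": 0.2, "uav": 0.9, "ship_radar": 0.4},
    {"sat": 0.1, "uav": 0.8, "ship_radar": 0.9},
]

@lru_cache(maxsize=None)
def best(t, current):
    """Best achievable (score, plan) from step t onward, holding `current`."""
    if t == STEPS:
        return 0.0, ()
    options = []
    for s in SENSORS:
        penalty = HANDOFF_PENALTY if (current is not None and s != current) else 0.0
        score, rest = best(t + 1, s)
        options.append((QUALITY[t][s] - penalty + score, (s,) + rest))
    return max(options)

score, plan = best(0, None)
print(plan)
```

Here the plan holds the satellite for two steps and then hands off once to the UAV, even though the ship radar is strongest at the final step—the handoff penalty makes a second switch not worth it, which mirrors the route-recalculation analogy in the text.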
Prioritizing And Orchestrating The Sensors
It’s not uncommon that a particular sensor is needed for two different targets at the same time. How does the commander decide? Here again AI can help. It starts by evaluating the targets themselves and ingesting the commander’s target prioritization and the likelihood of the loss of target custody. For example, a commander may prioritize a highly accurate sensor for a high-priority target. But if the custody of that high-priority target can be assured with a different sensor for a short period of time, then the highly accurate sensor could potentially be re-tasked and then returned to the high-priority target without any mission degradation. That would free up the more accurate sensor to provide information on a target that might otherwise be difficult to acquire. The promise of AI is that it can sort out much of this complexity in real time to maintain persistent targeting and custody on multiple targets in an ever-changing environment. AI solutions can also deal with changing commander priorities, changing environmental factors, sensor degradation, and adversary counteractions all at machine speed—delivering the commander a synchronized battlespace-awareness plan optimized for both sensors and targets.
These AI solutions also learn over time. As they get “smarter,” they can better sort out which combinations of sensors are most effective at tracking which targets and under which conditions. As models incorporate more data and the results of human decision making across many different scenarios, they will also improve anomaly detection, target path prediction, and synchronized sensor target pairing.
Staying Ahead Of Adversaries
As the battlespace in the Indo-Pacific and other areas of interest becomes increasingly complex and crowded, and as adversaries get more skillful at hiding their intentions, persistent targeting will only get more difficult. Integrating AI solutions into today’s operations can give the joint forces a strategic edge.
LT. GEN. CHRIS BOGDAN ([email protected]) is a Booz Allen senior vice president who leads the firm’s aerospace business, delivering solutions to DoD, NASA, and commercial clients. As a 34-year U.S. Air Force officer and test pilot, he flew more than 30 different aircraft types and was the Program Executive Officer for the F-35 Joint Strike Fighter Program for the Air Force, U.S. Navy, U.S. Marine Corps, and 11 allied nations.
PATRICK BILTGEN, PH.D. ([email protected]) is the director of AI mission engineering at Booz Allen, leading data analytics and AI development for space and intelligence programs. He is the author of Activity-Based Intelligence: Principles and Applications, and recipient of the 2018 Intelligence and National Security Alliance (INSA) Edwin Land Industry Award.