On 1 October 2020, the Chief of Naval Operations announced Project Overmatch, an initiative “to develop the networks, infrastructure, data architecture, tools, and analytics that support the operational and developmental environment that will enable our sustained maritime dominance.” This project will form the core of the Navy’s contribution to joint all-domain command and control, or JADC2, the Department of Defense–wide (DoD-wide) effort to unify information sharing across platforms and services. Project Overmatch will join the Air Force’s Advanced Battle Management System (ABMS) and the Army’s Project Convergence, two networked warfare programs already in advanced stages of development.
When DoD program managers promise to “aggressively pursue an Artificial Intelligence and machine learning-enabled battlefield management system,” they have in mind narrow-scope, domain-specific artificial intelligence (AI) tools: a better target tracking program trained from masses of real-world radar returns, or a better ground collision avoidance system trained from masses of flight telemetry. True autonomy is much more complex. An autonomous agent is a mobile self-contained sensor and/or weapon platform capable of analyzing sensor data, pursuing targets, and communicating with other agents. This is closer to what the general public thinks of when they hear “AI.”
It is unquestionable that translating the “Internet of Things” paradigm to an “Internet of Military Things” will offer immediate, tangible benefits. Installing cheap, networked monitoring devices in critical ship and aircraft systems will allow the services to collect the near–real-time data necessary to develop predictive maintenance programs. Integration will be straightforward, as these passive sensors are not critical to mission performance: they require neither continuous connectivity nor embedded AI components.
The JADC2 Concept and Challenges
The JADC2 concept, on the other hand, is far more ambitious. A recent Army exercise involved data sharing between satellites, ground stations, artillery, helicopters, unmanned aerial vehicles (UAVs), and manned and unmanned ground vehicles. Data analyses and targeting recommendations were provided by AI systems at multiple ground stations located thousands of miles from the battlefield. This networked warfare is a natural outgrowth of the near-20-year engagements in Iraq and Afghanistan, where early and total dominance of the airspace and the electromagnetic spectrum enabled the U.S. military to communicate with impunity (and to monitor adversary communications with impunity). An engagement with a peer adversary, on the other hand, almost certainly will take place in contested electromagnetic operational environments, which present unique challenges for networked warfighting.
JADC2 will require heavy use of wireless communications, which are vulnerable to jamming. Most tactical data links (TDLs) use frequency hopping to counter jamming; signals change carrier frequency according to a predetermined schedule that is concealed from the adversary. The NATO standard protocol used by carrier strike groups, for example, uses a 255 MHz-wide frequency band. A portion of this band is divided into approximately 50 channels, and the network switches channels about 100 times per second. To jam all channels simultaneously would require a massively powerful transmitter and would only be possible at short ranges and for short periods (though this might be sufficient for an enemy shore installation fighting a defensive war). A more likely scenario is one wherein an adversary uses reactive jamming to partially degrade communications in real time. An even more sophisticated adversary might use espionage to compromise the equipment used to generate or broadcast the frequency switching schedule, or use deauthentication attacks to prevent devices from connecting or reconnecting to the network. Ultimately, a sophisticated attack may not even be necessary: Any conflict with a peer adversary will involve closely coordinated joint operations with regional U.S. allies, many of whom do not possess the advanced electronic counter-countermeasures (ECCM) used by U.S. forces. An adversary will attack the weakest link, sowing confusion that could propagate through the entire coalition force.
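To make the hopping mechanism concrete, the following is a minimal Python sketch of how transmitter and receiver can independently derive the same pseudo-random channel schedule from a shared secret. The key, channel count, and slot scheme here are illustrative assumptions; real tactical data links use certified cryptographic waveforms, not this construction.

```python
import hmac
import hashlib

def channel_at(shared_key: bytes, slot: int, num_channels: int = 51) -> int:
    """Derive the channel index for a given time slot from a shared secret.

    Every node holding the key computes the identical schedule; an
    adversary without the key cannot predict the next hop and must
    jam all channels at once. (Illustrative sketch only.)
    """
    digest = hmac.new(shared_key, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % num_channels

# Hypothetical pre-mission key distributed to all network participants.
key = b"shared-secret-distributed-before-the-mission"
schedule = [channel_at(key, slot) for slot in range(5)]
print(schedule)  # pseudo-random, but identical on every keyed node
```

Note that this also illustrates the espionage risk described above: an adversary who steals the key (or compromises the keying equipment) recovers the entire schedule.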
Unfortunately, U.S. forces do not routinely train to operate in contested electromagnetic operational environments. There are a number of reasons for this. Radio frequency emissions cannot be contained, so in the interest of public safety, federal law limits the areas in which such exercises can be conducted. Furthermore, because emissions cannot be contained, it is impossible to conceal electronic countermeasure (ECM) and ECCM abilities from hostile observers during such exercises. This leads to legitimate concerns that training with these tools will degrade their future effectiveness on the battlefield. Last, as always, limited budgets and training time force commanders to prioritize other training evolutions with more immediate relevance to current operational commitments.
The Role of AI and Machine Learning
The near-certainty that peer conflicts will take place in challenging electromagnetic operational environments suggests that the U.S. military readjust its research-and-development priorities. Rather than networks with autonomy, the military needs autonomy with networking. The focus should be on developing fully autonomous combat systems with the added capacity of operating in a networked data-sharing environment. This means a central role for AI and machine learning.
AI, for these purposes, is a general process of building software for pattern recognition or control that can operate without human intervention. Machine learning is a subset of AI that uses statistics to build such software, in contrast with other AI approaches that rely on databases and logical inference rules. A machine-learning algorithm takes a set of training data and builds a program—the “agent”—that maps the space of inputs (sensor readings) to outputs (labels or controller settings).
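The training-data-in, agent-out pattern can be sketched in a few lines of Python. A nearest-centroid rule stands in for any real learning algorithm, and the two-feature “readings” and class names are invented for illustration:

```python
# Sketch of the machine-learning pattern described above: labeled
# training data goes in, and an "agent" -- a function mapping sensor
# readings to labels -- comes out. Nearest-centroid classification
# stands in here for any real learning algorithm.

def train(samples):
    """Build an agent from (reading, label) training pairs."""
    sums, counts = {}, {}
    for reading, label in samples:
        acc = sums.setdefault(label, [0.0] * len(reading))
        for i, x in enumerate(reading):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    centroids = {lbl: [x / counts[lbl] for x in acc] for lbl, acc in sums.items()}

    def agent(reading):
        # The trained agent: map a reading to the label of the nearest centroid.
        return min(centroids,
                   key=lambda lbl: sum((a - b) ** 2
                                       for a, b in zip(centroids[lbl], reading)))
    return agent

# Toy (radar cross-section, speed) readings for two hypothetical classes.
training = [([50.0, 3.0], "sailboat"), ([55.0, 4.0], "sailboat"),
            ([900.0, 30.0], "warship"), ([950.0, 28.0], "warship")]
classify = train(training)
print(classify([60.0, 5.0]))    # sailboat
print(classify([880.0, 25.0]))  # warship
```

The returned `classify` function is the “agent”: once trained, it is just a fixed mapping from inputs to outputs, which is why deployed agents need not (and should not) keep learning in the field.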
Autonomous agents are still limited in their behavior. The recent successes of machine learning in computer vision, natural language processing, and robotics are, to some degree, the result of advances in algorithm design. To a much greater degree they are the result of hardware advances that enable the collection and processing of vast amounts of training data. For this reason, autonomous agents will not “learn” in the field. Online learning makes behavior too unpredictable to be deployed in life-and-death situations; it also requires additional hardware (graphical processing units or tensor processing units), which is too expensive and power-hungry to deploy in a swarm of UAVs or unmanned surface vehicles (USVs). Agents will learn in simulated environments, and data collected during engagements or live-fire exercises can be used to retrain virtual agents, which can be downloaded to field units in periodic software upgrades.
AI and Machine Learning Integration
To effectively integrate both autonomy and networking into our national defense strategy, two guiding principles must hold: (1) autonomous agents must be modular, and (2) distributed systems of agents must degrade gracefully.
Autonomous Agents Must Be Modular
Navigation, target recognition, and fire control all require separate, dedicated software modules to work effectively. The difficulty of each task differs widely across battlespaces: a classifier that can distinguish an Afghan civilian from an Afghan mujahid is far more complex than a classifier that can distinguish a warship from a sailboat. Modularity will allow software systems to be developed for different missions; the same drone can be used in an alpine counterinsurgency mission or a maritime force-protection role by swapping out the target-recognition software module. Modularity also allows systems to be fine-tuned by operators to adapt to changing mission priorities.
To bring this down to earth, consider the hypothetical example of a mobile autonomous fire-control system, such as a large USV armed with a Phalanx or SeaRAM weapon system. The “agent” should, at minimum, consist of separate software units to (1) detect objects, (2) classify objects, (3) track objects, (4) control the mount attitude, (5) control the firing system, and (6) control vessel movement. The degree to which each system could be fine-tuned in the field depends on the overall system design. In theory, a single agent could be trained to convert raw radar, camera, and gyroscopic sensor inputs to fire-control, throttle, and steering commands. That is, it would combine modules (1–6) into a single function. Training this agent would require millions of hours of simulated data, and the “logic” of the agent’s decisions would be completely opaque to a human operator. More problematically, it would be impossible to change any of the agent’s parameters in the field. A modular system, on the other hand, might enable the operator to adjust (among other things): the gain of a particular sensor (module 1), the value of a particular target type (module 2), the agent’s willingness to expend ammunition (module 5), or the agent’s willingness to endanger the vessel (module 6).
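The modular design might look like the following Python sketch. Every module name, parameter, and default value here is a hypothetical stand-in, not a real system interface; the point is that each module exposes its own operator-tunable knobs:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the modular fire-control agent described above.
# Module and parameter names are illustrative assumptions.

@dataclass
class Detector:                      # module 1: detect objects
    sensor_gain: float = 1.0

@dataclass
class Classifier:                    # module 2: classify objects
    target_values: dict = field(
        default_factory=lambda: {"small_boat": 0.3, "missile": 1.0})

@dataclass
class FiringController:              # module 5: control the firing system
    ammo_conservation: float = 0.5   # 0 = free-fire, 1 = hold fire

@dataclass
class Helm:                          # module 6: control vessel movement
    risk_tolerance: float = 0.2      # willingness to endanger the vessel

@dataclass
class FireControlAgent:
    detector: Detector = field(default_factory=Detector)
    classifier: Classifier = field(default_factory=Classifier)
    firing: FiringController = field(default_factory=FiringController)
    helm: Helm = field(default_factory=Helm)

# An operator translating battle orders into parameter settings:
agent = FireControlAgent()
agent.classifier.target_values["missile"] = 1.5  # prioritize missile defense
agent.firing.ammo_conservation = 0.8             # resupply is days away
```

Swapping the `Classifier` module for one trained on a different battlespace is then a software change, not a redesign, which is the payoff of modularity the text describes.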
A well-designed system would enable a commander to write arbitrarily complex battle orders and rules of engagement that easily can be translated into parameter settings. A poorly designed system would give the operator unnecessarily fine-grained control over some modules while leaving crucial operational needs unmet.
Distributed Systems Must Degrade Gracefully
One justification for networked warfare is that it reduces the number of high-value assets. This lowers costs at all phases of the acquisition lifecycle, in turn enabling the military to innovate more rapidly. Networked warfare also makes systems more survivable. Indeed, a key goal of ABMS is to pivot from a hierarchical network with dedicated command nodes, such as AWACS, to a distributed network of low-value sensors and shooters. An adversary might quickly disrupt a hierarchical network by attacking high-value command-and-control units; disrupting a large, distributed network requires much greater effort.
Attacks on a distributed network may still degrade the communication ability of a subset of nodes. To respond to such attacks, the military needs systems that can still execute commander’s intent in situations in which some units are under central control, some are functioning semi-autonomously in a local network, and some are operating fully autonomously. Distributed systems can help fight in hostile electromagnetic operational environments, but only if these systems are capable of operating in, and smoothly transitioning between, multiple levels of autonomy. At the tactical level, agents must be able to leave and rejoin the network while still making tactically reasonable decisions when communications are degraded. Agents also must be capable of adaptable sensor fusion: making optimal fire-control and navigational decisions given access to some, all, or none of the networked radar, lidar, infrared, visual, and acoustic data streams. Most important, they must have robust IFF capabilities that include not just radar transponders, but also responses to visual and behavioral cues.
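Adaptable sensor fusion of the kind described above can be sketched in Python. The confidence weights and the naive weighted average are illustrative assumptions (a real tracker must, among other things, handle bearing wraparound at 360 degrees); the point is that the same function degrades gracefully from all streams to one to none:

```python
from typing import Dict, Optional

# Illustrative per-sensor confidence weights (assumed values).
CONFIDENCE = {"radar": 0.9, "lidar": 0.8, "infrared": 0.6,
              "visual": 0.5, "acoustic": 0.3}

def fuse_bearing(streams: Dict[str, Optional[float]]) -> Optional[float]:
    """Confidence-weighted average of bearing estimates (degrees) from
    whichever sensors reported this cycle. Naive sketch: ignores
    bearing wraparound and sensor covariance."""
    available = {k: v for k, v in streams.items() if v is not None}
    if not available:
        # No data at all: fall back to last known track or doctrine.
        return None
    total = sum(CONFIDENCE[k] for k in available)
    return sum(CONFIDENCE[k] * v for k, v in available.items()) / total

full = fuse_bearing({"radar": 45.0, "lidar": 44.0, "infrared": 47.0})
degraded = fuse_bearing({"visual": 50.0})  # network jammed: own camera only
print(full, degraded)
```

The agent produces an answer with all networked streams, with a single organic sensor, or (by falling back to its last track) with none, which is the behavioral core of graceful degradation.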
The Time for Integration Is Now
Although autonomous systems offer a high degree of flexibility, the parts that are flexible are design-dependent. Developers must understand in what areas agents need to be flexible, and the only way for them to do so is by embedding with operational forces. Early integration also will be a chance to start educating warfighters and to grow an AI-savvy cadre of warriors who can effectively wield these new systems.
Contested electromagnetic operational environment exercises should become the norm. If it is impractical to use real ECM, its effects can be simulated; every large-scale exercise should include drills in primary-, alternate-, contingency-, and emergency-style communications planning. Commanders must get used to crafting battle orders and preplanned responses that are specific enough for autonomous agents to execute in a degraded communication environment.
Last, planners must keep in mind that AI does not have to be perfect. It just has to be better than not using AI. An autonomous agent will be better than a human at some tasks in some situations and worse in others; the trick is to figure out when and where it is appropriate to deploy these tools. A drone does not have to be better than a JTAC at calling in airstrikes; it just has to be better than a JTAC who has been shivering in the mud for 48 hours. Success must be measured holistically, to include intangibles such as increased situational awareness, decreased task loading on key personnel, and fewer personnel placed in harm’s way.