The Pacific is aflame with war, and a Navy carrier strike group is about to engage the enemy. The group’s commanders need a lot of good information, fast.
Where are the threats—surface, subsurface, airborne—and which are the most dangerous, the most urgent? Which threats are within our range while we are outside theirs? What are the best weapons to engage them with, and how much ordnance will we have to expend? Where will the logistics ships be when we need them, and how do we mitigate the risk to them in a contested environment?
With conventional AI, the strike group might be using machine learning models for specific tasks such as faster target recognition and speeding the sensor-to-shooter kill chain. But these models, however valuable, would be providing just slices of the big picture. Commanders and intelligence analysts would not have the bandwidth to review more than a limited number of outputs.
However, the strike group now going into battle has access to an emerging form of AI—the “AI agent.” A set of these agents is assembling the big picture, drawing on hundreds of machine learning models and tens of thousands of potential battle scenarios, then stringing together and analyzing the results to answer the commanders’ pressing questions.
This goes far beyond the longstanding goal of using AI to quickly process large amounts of sensor and other data. The AI agents—which can learn, reason and plan—are figuring out the best ways of putting that data together, and even generating new data, to help the commanders achieve their tactical and operational goals.
As the battle unfolds, the AI agents are constantly updating their recommendations as new data comes in. At the same time, the commanders and analysts are interacting with the AI agents, asking new questions and rapidly getting back answers, all in plain English. What is likely to happen if we come around these islands from the east rather than from the west as planned? Given our current situation, how can we now maximize power projection with the lowest risk to the strike group?
To a large extent, the AI agents are doing the same things the commanders and analysts would be doing if they had the time and resources. And the AI agents’ answers may be the same ones the commanders and analysts would come up with. In a sense, the AI agents—collections of complex algorithms—are just doing the math.
HOW AI AGENTS WORK
What distinguishes AI agents from most conventional AI is that instead of just providing information, they work to achieve goals. Long before a conflict breaks out, AI engineers program the agents with specific goals, from the tactical to the strategic. For example, “Find the most efficient way of using the strike group’s available missiles to defeat multiple threats, while preserving overall missile load for likely follow-on threats—taking into consideration the probability of timely resupply.”
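To make the idea concrete, here is a minimal sketch in Python of how such a goal might be encoded for an agent. The class, field names, and example values are illustrative assumptions, not a real Navy or Booz Allen system.

```python
from dataclasses import dataclass, field


@dataclass
class AgentGoal:
    """A machine-readable statement of a commander's objective (illustrative only)."""
    objective: str                                         # what the agent works to achieve
    constraints: list[str] = field(default_factory=list)   # hard limits the agent must respect
    priority: int = 1                                       # relative weight when goals compete


# Hypothetical goal mirroring the example above.
missile_employment_goal = AgentGoal(
    objective="Defeat multiple threats with the most efficient use of available missiles",
    constraints=[
        "Preserve overall missile load for likely follow-on threats",
        "Account for the probability of timely resupply",
    ],
)
```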
AI agents then use generative AI—essentially, highly sophisticated versions of AI chatbots—to figure out the kinds of information needed to work toward these goals. The agents also use generative AI to determine how to get that information. In some cases, this might mean stringing together a series of machine learning models that focus on different aspects of a problem. For example, one machine learning model could predict a threat’s movements, another could predict whether the threat’s missiles could hit the carrier strike group, and still another might predict whether the strike group’s missiles could hit the threat.
The more complex the question, the more machine learning models may be needed. AI engineers build such machine learning models in advance. Then, as an actual conflict unfolds, the generative AI chooses from a vast library of models to answer various types of questions.
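One way to picture that orchestration is the minimal Python sketch below, in which placeholder functions stand in for the machine learning models and a hard-coded chain stands in for the generative AI’s model selection. The registry, model names, and outputs are hypothetical.

```python
from typing import Any, Callable

# Hypothetical registry of pre-built machine learning models; the lambdas are
# placeholders that return canned outputs rather than real predictions.
MODEL_LIBRARY: dict[str, Callable[[dict], Any]] = {
    "predict_threat_track": lambda data: {"predicted_track": "closing from the northwest"},
    "assess_threat_reach":  lambda data: {"threat_can_hit_us": True},
    "assess_our_reach":     lambda data: {"we_can_hit_threat": True},
}


def answer_question(question: str, sensor_data: dict) -> dict:
    """Run a chain of models and pool their outputs.

    In the architecture described above, a generative-AI planner would choose
    the chain based on the question; here one chain is hard-coded for clarity.
    """
    chain = ["predict_threat_track", "assess_threat_reach", "assess_our_reach"]
    results: dict = {}
    for name in chain:
        # Each model sees the raw sensor data plus everything learned so far.
        results.update(MODEL_LIBRARY[name]({**sensor_data, **results}))
    return results


print(answer_question("Can we hit this contact before it can hit us?", {"contact_id": "SAG-01"}))
```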
In the heat of battle, intelligence analysts alone may have time to gain insights from only a small number of machine learning models. For example, two or three models might identify a group of contacts as enemy guided-missile destroyers (DDGs) and predict their likely targets and tactics. By contrast, multiple AI agents, using generative AI, can bring together hundreds of models to paint a more complex picture of how the overall fight is unfolding.
The AI agents, again employing generative AI, might use all of that information to run tens of thousands of battle simulations, working out the best possible courses of action to achieve the various goals, and then presenting options for commanders.
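As a rough illustration of that step, the sketch below scores a few hypothetical courses of action by averaging the results of many simulated runs. The simulator, the course-of-action names, and the scoring are stand-ins; a real system would rely on high-fidelity models rather than a random draw.

```python
import random

# Hypothetical courses of action under consideration.
COURSES_OF_ACTION = ["approach_from_east", "approach_from_west", "hold_and_strike"]


def simulate_once(coa: str) -> float:
    """Stand-in for one battle simulation; returns a mission-success score."""
    base = {"approach_from_east": 0.60, "approach_from_west": 0.50, "hold_and_strike": 0.55}[coa]
    return base + random.uniform(-0.2, 0.2)  # random variation stands in for battle uncertainty


def rank_courses_of_action(runs_per_coa: int = 10_000) -> list[tuple[str, float]]:
    """Run many simulations per course of action and rank by average score."""
    scores = {
        coa: sum(simulate_once(coa) for _ in range(runs_per_coa)) / runs_per_coa
        for coa in COURSES_OF_ACTION
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)


print(rank_courses_of_action())
```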
HOW PEOPLE AND AI AGENTS WORK TOGETHER
One of the key features of AI agents is that—though they are highly complex—people don’t have to be data scientists to interact with them. Shipboard commanders, intelligence analysts and others can ask questions and get back answers that are clear and useful.
AI agents can present information, including possible courses of action, in a variety of formats, such as 3-D maps and interactive graphics. And information from the agents can be integrated with ships’ weapons and other onboard systems.
Just as important, commanders are able to keep the AI agents on track by modifying the goals, and creating new ones, as conditions change. AI agents are smart, but humans are always in charge, guiding the agents’ actions.
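A minimal sketch of what that human-in-the-loop control might look like in code is below, assuming a simple list of goal records; the structure and the example goals are hypothetical.

```python
# Sketch of human-in-the-loop goal updates; the goal records and priorities
# are illustrative, not drawn from any real system.

agent_goals = [
    {"objective": "Minimize missile expenditure against current threats", "priority": 2},
]


def update_goals(goals: list[dict], new_goal: dict) -> list[dict]:
    """Add a commander's new goal and re-sort so the agent replans against it."""
    return sorted(goals + [new_goal], key=lambda goal: goal["priority"])


# As conditions change, a commander issues a new, higher-priority goal.
agent_goals = update_goals(
    agent_goals,
    {"objective": "Maximize power projection at lowest risk to the strike group", "priority": 1},
)
print(agent_goals)
```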
COMMUNICATION AND COLLABORATION
AI agents also have the ability to work with one another, sharing information, learning from each other, and collaborating on common goals. In a war, the Navy might be employing dozens of AI agents throughout the Pacific theater, all working both individually and together. And those agents may be sharing insights with similar kinds of AI agents being used by other U.S. forces.
In a way, this would be similar to commanders from all the joint forces, spread across the theater, collaborating during a wide-ranging conflict. By bringing in AI agents, the various commanders could get the information they need to quickly assess the overall operational picture, and decide collectively on courses of action. For example, the AI agents might combine their insights to prioritize enemy targets across the Pacific—based on their threats to the joint forces and allies—and to recommend how commanders can collaborate.
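The sketch below illustrates one simple way such agents might pool their insights into a single ranked target list. The agent names, threat scores, and merge rule (keeping the highest score any agent assigns) are hypothetical assumptions, not an actual joint-force workflow.

```python
from collections import defaultdict

# Each agent reports the targets it sees and a local threat score (illustrative values).
agent_reports = {
    "carrier_strike_group_agent": {"enemy_ddg_group": 0.8, "coastal_battery": 0.4},
    "air_wing_agent":             {"enemy_ddg_group": 0.7, "airfield_alpha": 0.6},
    "joint_force_agent":          {"airfield_alpha": 0.9},
}


def merge_priorities(reports: dict[str, dict[str, float]]) -> list[tuple[str, float]]:
    """Combine per-agent scores (here, by taking the maximum) into one ranked target list."""
    combined: dict[str, float] = defaultdict(float)
    for scores in reports.values():
        for target, score in scores.items():
            combined[target] = max(combined[target], score)
    return sorted(combined.items(), key=lambda item: item[1], reverse=True)


print(merge_priorities(agent_reports))
```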
It’s no longer only about making sense of large amounts of data. Commanders need to see how that data can help them win the fight. AI agents can lend a hand.
REAR ADMIRAL MATT CARTER, [email protected], is a Vice President at Booz Allen whose last assignment was Deputy Commander, U.S. Pacific Fleet. He is a leader in the firm’s Navy Marine Corps market, delivering emerging technical solutions on various Navy contracts.
LT. GENERAL STEPHEN FOGARTY, [email protected], is a Senior Executive Advisor at Booz Allen, specializing in Cyber and Military Intelligence. He is the former Commander, U.S. Army Cyber Command and J2 USCENTCOM.
AI/ML engineers Karis Courey and Ka’imi Kahihikolo contributed to this article.