Today’s captains still must make the kinds of life-or-death decisions their forebears made. What is vastly different is the speed of decision-making. While the captains of ships such as the USS Constitution and USS Constellation often had hours or even days to make critical choices, captains now must make decisions in minutes or even seconds.
The speed of warfare today often exceeds the ability of the human brain to make the right decision. Indeed, as Dr. Alexander Kott, chief scientist at the U.S. Army Research Laboratory, put it, “The human cognitive bandwidth will emerge as the most severe constraint on the battlefield.”2 For this reason, the Navy needs big data, artificial intelligence (AI), and machine learning to give its warfighters the edge in combat, but the devil is in the details. How will investment in these cutting-edge technologies help the Navy “conduct prompt and sustained combat incident to operations at sea”?3
Curating the Data
Naval warfighters have an enormous—even overwhelming—amount of data to deal with. They need AI and machine learning to curate this data, presenting only the information that helps decision makers and those pulling the trigger make better decisions faster, often under the stress of combat.
The Navy has experienced a number of tragic incidents in which decision makers made the wrong decision and paid dearly in lives lost—from the USS Stark (FFG-31) taking two Iraqi Exocet missiles in May 1987, to the USS Vincennes (CG-49) shooting down Iran Air Flight 655 in July 1988, to the USS Greeneville (SSN-772) surfacing under the Japanese fishing vessel Ehime Maru in February 2001, and, most recently, the fatal collisions involving the USS Fitzgerald (DDG-62) and USS John S. McCain (DDG-56). While there are multiple reasons behind these accidents at sea, in every case there was data available that—properly used—might have broken the “accident chain.”4 In all these accidents, across several decades, tragedy ensued in part because data was not curated, analyzed, and displayed to the decision maker quickly enough.
The Navy has been at the forefront of leveraging technology to help warfighters make better decisions faster, with fewer people, in stressful situations. In the 1980s, the Office of Naval Research initiated a program dubbed TADMUS (Tactical Decision Making Under Stress) that used cognitive science to help understand how decision makers make decisions.5 This led to several prototypes (Multi-Modal Watch Station, Knowledge Wall, and others) that were beta-tested and showed promising results in improving decision making.6
TADMUS was good as far as it went. As Chief of Naval Operations Admiral John Richardson pointed out in remarks at the 2017 Current Strategy Forum, until recently, the technology to take enhanced decision-making to the next level did not exist. It does now, and leveraging big data, AI, and machine learning may lead to the next breakthrough in naval warfare.
AI to Perform Tasks
Technology is emerging so rapidly that it is easy to become confused about terms. As the Navy signals its embrace of AI, it is important to understand that this refers to AI used to perform a specific task, not artificial general intelligence (AGI). The latter also is referred to as “strong AI” or “full AI”—the ability of a machine to perform general intelligent actions. Academic sources reserve the term “strong AI” for machines capable of experiencing consciousness.
The area where AI can help warfighters is best described as artificial narrow intelligence (ANI), that is, AI that helps perform specific, discrete tasks. A large part of the research in this area involves decision-making. As the Defense Science Board noted, “Autonomy delivers significant military value, including opportunities to reduce the number of warfighters in harm’s way, increase the quality and speed of decisions in time-critical operations, and enable new missions that would otherwise be impossible.”7
The Navy is using ANI in a wide variety of applications in much the same way individuals use apps on their smartphones. Applications for predictive maintenance, humanitarian assistance and disaster relief, and air tasking orders already are helping to restructure processes and make various activities more efficient. While streamlining these kinds of tasks is good, the Navy has barely scratched the surface of operationalizing AI.
Tasks Rather Than Widgets
Naval professionals are taught to adapt and overcome, using the technology at hand to do the best job they can. This approach is laudable, but it does little to pull good technology into the fleet. Operators sometimes are in such awe of new technology that they wait for the research-and-development community to push the next shiny new object to them.
Wouldn’t it be better and vastly more effective if the operational Navy defined the kind of ANI technologies needed to help it make decisions faster than the adversary? The Navy’s research-and-development enterprise is primed to take on this task. At the 2018 U.S. Naval Institute/AFCEA WEST Conference, Assistant Secretary of the Navy for Research, Development, and Acquisition James Geurts put it this way: “If a force can harness AI to allow decision makers to make decisions faster than the adversary, it will win every time.”8
But this general aspiration for leveraging AI and machine learning raises the question of what specific tasks the Navy wants these technologies to help naval warfighters perform. In the words of Albert Einstein, one way to “figure out how to think about the problem” is to consider what information a commander at sea needs.9 Whether it is Captain Isaac Hull seeking to take the USS Constitution into action in August 1812, or a carrier strike group commander today taking ships into a potentially contested area such as the South China Sea, a commander needs three primary things: to know what is ahead of the force; to have that information communicated back to the flagship; and to be able to make an informed decision.
While today’s naval commanders have a wealth of assets to help achieve these goals, there are gaps that AI and machine learning can help close.
To look ahead of the force to assess the tactical situation, a strike group commander can use an MQ-4C Triton unmanned aerial vehicle (UAV). Today, a Triton operator receives streaming video of what the MQ-4C sees—but he must stare at this video for hours on end, seeing mainly empty ocean.10
Using ANI, the MQ-4C can be programmed to send video only when it encounters a ship, thereby greatly reducing the operator’s workload. Taken to the next level, the Triton could do on-board analysis of each contact to flag it for possible interest. For example, if a ship is operating in a shipping lane, has filed a journey plan with maritime authorities, and is providing an Automatic Identification System (AIS) signal, it likely is worthy of only passing attention by the operator, and the Triton would flag it accordingly. If, however, the vessel makes an abrupt course change that takes it outside shipping channels or has no AIS signal, the operator would be alerted. As this technology continues to evolve, a Triton—or other UAV—ultimately could be equipped with classification algorithms to support automatic target recognition.
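As a minimal sketch, the triage rule described above can be expressed in a few lines of code. The contact attributes and alert criteria below are illustrative assumptions for discussion, not a description of any fielded Triton software:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """Illustrative attributes a UAV might derive for a surface contact."""
    in_shipping_lane: bool      # position falls within a charted traffic lane
    journey_plan_filed: bool    # voyage plan on file with maritime authorities
    ais_active: bool            # broadcasting an Automatic Identification System signal
    abrupt_course_change: bool  # recent heading change exceeding a set threshold

def triage(contact: Contact) -> str:
    """Return 'ALERT' if the contact warrants operator attention, else 'ROUTINE'."""
    # Behavior that breaks the expected pattern drives an alert.
    if contact.abrupt_course_change and not contact.in_shipping_lane:
        return "ALERT"
    if not contact.ais_active:
        return "ALERT"
    # A contact in a lane, with a filed plan and active AIS, merits only passing attention.
    if contact.in_shipping_lane and contact.journey_plan_filed and contact.ais_active:
        return "ROUTINE"
    # Anything ambiguous is surfaced to the operator rather than suppressed.
    return "ALERT"

# Example: a vessel with no AIS signal is flagged for the operator.
print(triage(Contact(in_shipping_lane=True, journey_plan_filed=False,
                     ais_active=False, abrupt_course_change=False)))  # -> ALERT
```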
Once the Triton has processed the information, ANI can help determine how to communicate with the flagship. In today’s contested electronic warfare environment, different communications paths have varying levels of vulnerability. Prior to the Triton’s launch, the commander can determine the acceptable level of risk of communications intercept, as well as the risk of giving away the presence of the strike group.
Armed with this commander’s intent, and using ANI, the Triton can assess the electronic environment and determine which communications path has the least vulnerability to intercept.11 If the Triton determines the vulnerability is too high, it can fly back toward the flagship and communicate via line-of-sight UHF. Given the size and growth potential of the Triton, it could even carry a smaller UAV and launch it back to the force to deliver this surveillance information.
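Again as a rough sketch, and assuming notional communications paths and vulnerability scores that in practice would come from the UAV's own assessment of the electronic environment, the path-selection logic reduces to a threshold check against the commander's pre-launch risk guidance:

```python
from typing import Optional

# Illustrative vulnerability scores (0 = low intercept risk, 1 = high). These
# are placeholders; a fielded system would assess them from the sensed
# electronic environment, not hard-code them.
COMM_PATHS = {
    "satcom_primary": 0.35,
    "satcom_alternate": 0.55,
    "hf_relay": 0.70,
}

def select_path(paths: dict[str, float], max_acceptable_risk: float) -> Optional[str]:
    """Pick the lowest-vulnerability path within the commander's risk threshold.

    Returns None if no path is acceptable, signaling a fallback such as
    closing the flagship to deliver the data via line-of-sight UHF.
    """
    acceptable = {name: risk for name, risk in paths.items() if risk <= max_acceptable_risk}
    if not acceptable:
        return None
    return min(acceptable, key=acceptable.get)

# Commander's intent, set before launch: accept no more than moderate intercept risk.
choice = select_path(COMM_PATHS, max_acceptable_risk=0.5)
print(choice or "No acceptable path: return toward flagship for line-of-sight UHF")
```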
On board the flagship, the commander must make sense of the data his sensors have collected and then make a number of time-critical decisions. Should he continue forward, wait, or retreat? Should he scout ahead, or in a different direction? Should he call on other forces, or are his organic assets sufficient to complete the mission without undue risk to his forces? This is where ANI can make important contributions.
Should the commander choose to forge ahead and force an engagement, ANI can do what today’s rudimentary tactical decision aids cannot—offer a range of options and assess the pros and cons of each one. ANI does not—and should not—make the decision, but it provides the commander with sufficient well-curated information so he or she can make the best decision faster than the adversary can react.
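A simple illustration of this kind of decision aid follows. The courses of action, scores, and pros and cons are hypothetical placeholders; the point is that the software ranks and annotates options for the commander rather than choosing among them:

```python
from dataclasses import dataclass, field

@dataclass
class CourseOfAction:
    """One option the decision aid presents; the commander, not the software, chooses."""
    name: str
    estimated_risk: float    # notional 0-1 score for risk to own forces
    estimated_effect: float  # notional 0-1 score for mission effect
    pros: list[str] = field(default_factory=list)
    cons: list[str] = field(default_factory=list)

def rank_options(options: list[CourseOfAction]) -> list[CourseOfAction]:
    """Order options by a simple effect-minus-risk score for presentation."""
    return sorted(options, key=lambda o: o.estimated_effect - o.estimated_risk, reverse=True)

options = [
    CourseOfAction("Continue forward and force an engagement", 0.6, 0.8,
                   pros=["Maintains initiative"], cons=["Exposes force to first salvo"]),
    CourseOfAction("Hold and scout in another direction", 0.3, 0.5,
                   pros=["Improves the picture before committing"], cons=["Cedes tempo"]),
    CourseOfAction("Call on additional forces", 0.2, 0.6,
                   pros=["Reduces risk to organic assets"], cons=["Delays the mission"]),
]

# Present the ranked options, with their trade-offs, for the commander's decision.
for option in rank_options(options):
    print(f"{option.name}: pros={option.pros}, cons={option.cons}")
```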
The operational Navy would be well-served to focus its “desirements” for AI and machine learning into these tactical and operational bins. Armed with this information, industry—along with the Navy’s research-and-development and acquisition communities—can begin to design and field ANI capabilities to directly contribute to the fleet’s warfighting ability.
Leveraging AI and Machine Learning to Solve Today’s Challenges
The U.S. Navy “gets” that it needs to harness big data, AI, and machine learning to give its operational forces the edge in combat. Indeed, “A Design for Maintaining Maritime Superiority 2.0” charges the Navy’s four-star fleet commanders to “identify five priority warfighting problems for AI/ML to address.”12
This is good as far as it goes. The Navy knows it needs big data, artificial intelligence, and machine learning, but it still is grappling with what it wants AI to do. This must change if the Navy is going to reap the benefits of these emerging technologies.
In the fiscal year 2019 budget, the Navy requested more than $60 million for AI and machine learning, from basic science, to prototyping, to procurement.13 Investments in basic science are laudable, but the Navy doesn’t need an improved widget to take on the challenges a carrier strike group commander faces today—to know what is ahead of the force, to have that information curated and then securely communicated back to the flagship, and to have well-nuanced choices from which to make an informed decision. Technologies already exist and can be rapidly and effectively adapted to Navy needs.
Nor is the Navy well-served by the recent plethora of “AI summits” and other exploratory events intended to get naval personnel enthused about big data, AI, and machine learning. That ground has been well covered, and more such events are unlikely to add much. The Navy should harness existing ANI technologies to meet fleet needs today.
1. Ian Toll, Six Frigates: The Epic History of the Founding of the U.S. Navy (New York: W. W. Norton and Company, 2006).
2. Keynote address, 22nd Command and Control Research and Technology Symposium, 7 November 2017.
3. See “A Design for Maintaining Maritime Superiority 2.0,” USNI News, 17 December 2018, for the longstanding mission of the U.S. Navy.
4. See CAPT John Cordle, USN (Ret.), “Design Systems That Work for People,” U.S. Naval Institute Proceedings 144, no. 9 (September 2018). The author analyzes the Fitzgerald and John S. McCain accidents and concludes the lack of human systems integration in designing the systems that officers and sailors on board these ships used to make decisions was an important causal factor.
5. See Janis Cannon-Bowers and Eduardo Salas, Making Decisions Under Stress (Washington, DC: American Psychological Association, 1998).
6. See, for example, Glenn Osga et al., “‘Task-Managed’ Watchstanding: Providing Decision Support for Multi-Task Naval Operations,” Space and Naval Warfare Systems Center San Diego Biennial Review, 2001; and Jeffrey Morrison, “Global 2000 Knowledge Wall.”
7. Defense Science Board Summer Study on Autonomy (Washington, DC: Department of Defense, June 2016).
8. The Honorable James Geurts, Assistant Secretary of the Navy for Research, Development, and Acquisition, keynote remarks, WEST Conference, 6 February 2018.
9. Wilber Schramm and William Porter, Men, Women, Messages and Media: Understanding Human Communication (New York: Harper and Row, 1982).
10. See Gabe Harris, Cynthia Lamb, and Jerry Lamb, “Surf the Data Tsunami,” U.S. Naval Institute Proceedings 144, no. 2 (February 2018).
11. See LCDR Jonathan Vandervelde, USN (Ret.), “Disrupt the Spectrum with AI,” U.S. Naval Institute Proceedings 143, no. 5 (May 2017), and Connor McLemore and Hans Lauzen, “The Dawn of Artificial Intelligence in Naval Warfare,” War on the Rocks, 12 June 2018. In the latter article, the authors suggest: “The Navy should start to automate dynamic frequency allocation in communications.”
12. While this call to use AI and machine learning (AI/ML) to address warfighting problems is encouraging, the publication also calls for AI/ML to solve “five priority training problems” and “five priority corporate problems.” This diversion of AI/ML funding to administrative rather than warfighting tasks was addressed by Navy Captain Sharif Calfee in his article, “The Navy Needs an Autonomy Project Office,” U.S. Naval Institute Proceedings 144, no. 12 (December 2018), in which he noted, “Conversely, the Navy is devoting the fewest resources to mission autonomy.”
13. McLemore and Lauzen, “The Dawn of Artificial Intelligence in Naval Warfare.”