During World War II, Chief of Naval Operations Admiral Ernest King is alleged to have remarked to a staff officer, “I don’t know what the hell this ‘logistics’ is that Marshall is always talking about, but I want some of it.”1 Admiral King, a professional naval officer for 44 years, likely understood quite well the extensive logistic requirements of keeping a fleet in operation, and that logistics is so fundamental to every facet of military strategy and operations that it is indisputably “commanders’ business.” It is, therefore, likely this quotation is either apocryphal or taken out of context. Perhaps King was gently mocking the bookish General George C. Marshall’s use of a fancy new term for an ancient challenge, or firing for effect on his own staff about issues that should already have been foremost in their minds and within their authority and ability to solve.2
Artificial intelligence (AI) might be as central to future (even near-future) warfare as logistics has been to all warfare, yet the corresponding “I don’t know what the hell it is, but I want some” mentality appears to be more pervasive and authentic than the Admiral King quote. Calls to invest in, leverage, and “win the competition” in AI abound in strategy documents, white papers, and public remarks; however, the specific organizational problems to be solved are rarely discussed. Indeed, for a discipline that has existed for almost 70 years—nearly as long as nuclear propulsion or the hydrogen bomb—there is still no settled framework for how it should be applied in military contexts.3 Certainly, many Department of Defense (DoD) programs and fielded systems during that time frame have benefited from AI techniques, and DoD has sponsored a great deal of fundamental AI research, but there is still a mismatch between the widely acknowledged importance of AI on the future battlefield and the military’s readiness to incorporate these systems into war plans.
Capabilities, Not Technologies
AI technologies provide useful insight into the “how” of addressing complex problems, but, as in all military planning, the “what” must come first: a clear definition of the required effects. The specific capabilities that will make a military force more resilient and lethal are autonomy, sophisticated decision support, automated data filtering, dynamic training simulations with adaptive critiquing and feedback, and relentless red-team “attacks” on U.S. assumptions and vulnerabilities. AI technologies undoubtedly will play a role in each of these, but the problems the military faces are far larger than their AI aspects alone. They are deeply intertwined with the human aspects of the U.S. military and will require extensive domain experience to recognize when and to what extent AI-derived solutions are appropriate to a real-life situation. Focusing on capabilities instead of technologies should help keep acquisition efforts targeted on solutions to real problems and maintain a balance of human and machine responsibilities as systems and threats evolve.
The single-minded pursuit of technologies over capabilities or capacity is, of course, how the United States wound up with fifth-generation fighters that each cost more than an entire squadron of the fourth-generation fighters they have only partially replaced, or “advanced” naval gun systems that are abandoned after construction because each round of ammunition would cost as much as a longer-range, more capable Tomahawk missile.4 In an era of intensifying strategic competition, coupled with accelerating commodification of advanced technology, the United States can afford only a certain number of such mistakes before it cedes too much ground to thriftier competitors. If DoD gets carried away by tech demos and unfulfilled promises of new AI techniques, without measurable progress toward meaningful military capability, it may be unpleasantly surprised in the next kinetic conflict.
Furthermore, because AI explicitly addresses challenging problems that would otherwise be tackled by humans alone, useful systems cannot be developed in isolation from the personnel whose work they might streamline or replace. Upgrading an older missile or turbine to a newer, more capable model usually does not require major changes to the organizational structure or mission, function, and tasks of its operators, but taking full advantage of improvements in AI-assisted decision systems or autonomous platforms often will.
It is unlikely an outside vendor will be able to provide turnkey AI solutions, no matter how far the technology evolves: Any tool that aims to change how the U.S. military thinks (or who does that thinking) will need to coevolve with the human portion of the organization. Military leaders simply cannot afford to not “know what the hell this is” when it comes to AI; they need a clear understanding of how AI components and human brains will work together to establish specific capabilities and complete specific missions. The coevolution mindset is vital because there likely will not be any sudden, game-changing leap in AI technology; instead, functions will gradually shift from human to machine as the former tinkers with and develops confidence in the latter. Military subject-matter experts will need to be the tinkerers, not merely consumers.
There’s Nothing Artificial About Autonomy
When will DoD be ready to grant mission command to a machine? Obviously, partially autonomous systems are already here and will continue to evolve in capability, and their battlefield importance will only grow. Every country watching Russia’s war against Ukraine will have noted the impressive effects Ukraine has achieved with piloted drones and will have identified the electronic links between drone and pilot as the most vulnerable and exploitable part of the system. Every technologically advanced military will be critically reviewing its electronic warfare capabilities and investing in systems that can quickly pinpoint and strike an emitter, as well as methods to decode, disrupt, and/or subvert the drone control signals.
The inevitable conclusion is that in the next high-end conflict, drones (including surface and subsurface systems) will have to operate longer and through more complicated tasks without radio transmissions, which will require more advanced processing of onboard sensor data and more intelligent classification of detected threats and opportunities. The battlefield pressure to rely on autonomous operation will be intense, regardless of whether the technology is ready.
The problem is especially acute for democratic countries, in which fully autonomous lethal weapon systems may not be ethically viable outside tightly controlled depopulated zones or emergency situations. Competing states with different attitudes toward popular opinion, the law of armed conflict, and civilian casualties could have fewer barriers to employing such systems and may perceive them as an opportunity to build an asymmetric advantage over the West. The United States and its allies therefore need not only to win the technical race to field more autonomous systems, but also to sustain performance and reliability standards high enough to keep those systems employable in practice.
Fortunately, autonomy is far more than technology; it is a frame of mind, up and down the chain of command. Units must be well trained to make good use of autonomy, and their leaders must be confident in their subordinates’ judgment and initiative. Commanders must know, trust, and grant freedom to those junior leaders; without robust training and mutual confidence, the senior leaders are more likely to micromanage, and the junior ones more likely to freeze when they find themselves in an unanticipated situation. Trust relies on familiarity, as does the commander’s sense of when and where to impose limits on their subordinates’ freedom of action. Without deep familiarity with how AI-powered systems make decisions, and the assumptions underlying them, it will be impossible for leaders to employ those systems responsibly—just as without deeply ingrained habits of delegating mission command, it would be unlikely that leaders would employ autonomous systems effectively.
In another clear lesson from the war in Ukraine, Russian units were repeatedly overcome by indecision and paralysis when events went off-script. For example, one photograph from the conflict shows a Russian armor column that simply stayed in place after its supporting fuel trucks were destroyed by a drone strike. Instead of choosing a new objective or taking up camouflaged or defensive positions, its soldiers milled around aimlessly awaiting new orders. The drones returned 24 hours later to destroy the static tanks and crews.
By framing autonomy as a whole-system capability affecting manned and unmanned platforms alike, forces are more likely to make the right choices and investments in the human side of the equation. On-scene leaders who understand the mission and feel empowered to use creativity should be able to input tailored instructions into less-than-fully autonomous equipment, and adjust their local force laydown and employment in response to the semi-intelligent systems’ demonstrated performance.
Doctrine that requires drone missions to be coordinated through a higher-echelon operations center—either because programming must be done by off-scene specialists or because the commander is not ready to relinquish detailed control—will introduce friction and delay, complex and perhaps unsustainable communication requirements, and avoidable electronic signatures, all while presenting an attractive centralized target for the enemy’s counterstrikes.
A force that fully embraces autonomy and mission command among manned units should be able to make more effective and survivable use of its unmanned systems, too, and will be better prepared to adjust a platform’s autonomy “slider” as technical capabilities evolve.
Preparing the Force
It is important to remember that artificial intelligence is fundamentally human intelligence. Even machine-learning systems evolve from rules originally specified by a human programmer and are trained on data sets deliberately selected and filtered by human curators. Both processes are susceptible to the usual forms of human error: overconfidence; unconscious bias; complacency in reaching for the most readily available tools and data rather than more appropriate or comprehensive alternatives that would require additional time and effort; and the emotional attachments, sunk-cost fallacies, and external incentives that keep a project “moving ahead on schedule” despite test failures or other emerging limitations. AI must be viewed with the same skepticism as any other complex machine, regardless of how impressive initial results may appear.
It also is important to remember that the U.S. track record of judiciously evaluating automated systems is far from spotless. DoD investigations into the USS Vincennes (CG-49) misclassification and shootdown of a civilian airliner in 1988, as well as missile-defense incidents in Iraq in 2003, concluded that operators, especially under stress, were reluctant to question “the computer picture,” placed excessive confidence in automated operating modes designed only as a last-resort defense, and were reluctant to deviate from what had worked in their often oversimplified training scenarios.5 Similar overconfidence and mistakes could easily affect AI-driven systems if operators and commanders do not fully understand their limitations and vulnerabilities or are not rigorously trained to maintain a questioning attitude.
The solution will not be so simple as hiring smarter vendors and consultants or attaching one or two AI specialists to each battle staff. If AI-driven tools are to fundamentally change how the U.S. military thinks and operates, then almost every service member will need a conceptual and technical understanding of how these systems work, how and for what they have been certified, how they can be undermined, and which expectations and responsibilities remain with the human operator. Just as it did during the transitions to steam propulsion, aviation, electronics, and nuclear power, the Navy will need its own large-scale in-house training programs on the new technology—operational expertise cannot be outsourced. The expense will be considerable, but the cost of not having sufficient AI knowledge across the force would surely be greater.
The ins and outs of AI are just as much commanders’ business as the details of logistics or naval engineering. No military professional should be content with knowledge gaps about how these systems work or can be subverted. However, AI alone will not solve real military problems. The conversation should revolve around capabilities, especially autonomy, that can be built up via man-machine teaming, but only when the humans understand their roles and responsibilities.
A rigorous academic foundation on AI principles and caveats eventually should become a core part of the curriculum for nearly every enlisted rating and officer specialty. When systems are fielded without adequate training or documentation, or surprising limitations emerge in real-world situations, that should be a red flag that something in the acquisition process has gotten off track. Above all, frontline leaders must reinforce to their teams that integration of AI remains a human-centric endeavor, and the fundamental principles of responsibility, accountability, and a questioning attitude are just as important to U.S. success with AI as they have been with more traditional technologies.
1. Robert A. Fitton, ed., Leadership: Quotations from the Military Tradition (Boulder, CO: Westview Press, 1990), 172.
2. The latter would be consistent with the command philosophy espoused in King’s timeless CINCLANT Serials 053 (21 January 1941) and 328 (22 April 1941); their text is available at www.ibiblio.org/hyperwar/USN/Admin-Hist/USN-Admin/USN-Admin-A1.html.
3. John McCarthy, Marvin Minsky, Nathan Rochester, and Claude Shannon, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, 31 August 1955,” AI Magazine 27, no. 4 (Winter 2006). Research into the possibilities and theoretical limitations of machine reasoning predates even this conference, including Alan Turing’s seminal “Computing Machinery and Intelligence,” Mind 59, no. 236 (October 1950): 433–60.
4. See U.S. Air Force, “F-16 Fighting Falcon,” www.af.mil/About-Us/Fact-Sheets/Display/Article/104505/f-16-fighting-falcon/. F-22 “program acquisition unit cost” (including research and development) of $369.5 million, or “procurement unit cost” (excluding research and development) of $185.7 million, in 2005 dollars (“DAMIR F-22 Selected Acquisition Report DD-A&T(Q&A),” 823-265, 24), vs. F-16C/D “unit cost” of $18.8 million in 1998 dollars; and Sam LaGrone, “Navy Planning on Not Buying More LRLAP Rounds for Zumwalt Class,” USNI News, 7 November 2016.
5. Within a period of nine days, U.S. Army Patriot batteries misclassified and shot down one Royal Air Force Tornado and one U.S. Navy F/A-18, killing three aviators, while one Patriot battery was itself misidentified and destroyed by a USAF F-16 on an anti-SAM mission. COL Darrel Whitcomb, USAF (Ret.), “Rescue Operations in the Second Gulf War,” Air & Space Power Journal 19, no. 1 (Spring 2005): 97; and Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, Report of the Defense Science Board Task Force on Patriot System Performance (Washington, DC: Department of Defense, January 2005), 4–5.