Winner, Arleigh Burke Essay Contest
New technologies offer the services the prospect of collecting and distributing massive amounts of data, but unless the needs and limitations of the human users at the end of the chain are considered, more will not be better.
Buy the steak or buy the sizzle? In our enthusiasm for the high-tech, "new and improved," we often concentrate on the sizzle and forget our digestion. Such has been the way with two heavily marketed warfare concepts, total information dominance (TID) and network-centric warfare (NCW).
Total information dominance is based on the premise that if some information is good, more must be better. The Gulf War, where commanders complained that their intelligence products did not fully meet their needs, provided impetus.1 So the intelligence community has established a goal of collecting 90% of available battlefield data.2 Vice Admiral William Owens, while on the Joint Staff, suggested coverage of 200 by 200 miles (40,000 square miles). This would be a massive collection effort, accomplished primarily by unmanned aerial vehicles, satellites, aircraft, and other sensors, generating a huge amount of data.
The complementary piece to TID is network-centric warfare. Inspired by the Navy's highly successful Cooperative Engagement Capability, the internet, and the use of computers by companies such as Wal-Mart, NCW envisions providing internet-like connectivity to users from the tactical through the strategic levels.3 Information will be processed and exchanged at truly incredible rates.
But one major element appears neglected: Where is the human?
There are fundamental questions—so far largely ignored—about how humans fit into the TID-NCW picture. The human user is the key element, yet our concentration is more on hardware, bandwidth, baud rates, and wires and electrons.
A 90% Requirement
Examine one aspect—the goal to collect 90% of available battlefield data. This has a high cost. Folklore says that the last 10% of system performance exacts 50% of the cost. Do we really need 90%? Would 80% or 71.39% be equally efficacious?
There are two fundamental reasons for collecting data on the battlefield: to be able to target enemy forces and to facilitate decision making. If the purpose of tracking 90% of the enemy's assets is to be able to target and destroy them, in most scenarios we are resource constrained. In expeditionary warfare smart weapons are scarce; in all situations they are expensive. Good judgment tells us that attrition warfare is usually an unattractive option.
If the purpose is to facilitate decision making, then we must ask, why 90%? Consider a man driving into his neighborhood and deciding where to park. A distinctive mailbox and the right house color might be enough to trigger recognition of "my home" and finalize his decision. If the front lawn is noticed at all, the thousands of individual blades of grass are not counted or measured. Only a fraction of the available data is needed to make the decision.
Humans store a template of "my house" in memory. Just a few clues lead the mind to that template, and recall fills in the rest. A simple experiment confirms this: people catch spelling errors more readily in the first half of a word than in the second half, because the mind often recognizes the word from context and from clues in the first half and fills in the rest from memory.
So do we need to count, identify, and locate 90% of an enemy brigade's vehicles and men to trigger the stored template of "enemy brigade"? It would be reasonable to establish with some precision just how much intelligence we really need before we spend additional billions of dollars to go from 80% to 90%.
Establishing the intelligence requirements to trigger identification of enemy forces is relatively simple compared to the intelligence requirements to facilitate decision making. We do not fully understand the relationship between information and military decision making.
There is ample evidence that humans form a set of causal connections between "key indicators" and conclusions. The broad mass of data generally is ignored in favor of concentrating on one or two key elements, often unconsciously. A Marine commander might have in his mind to call for naval surface fire support when the lead enemy tank reaches a certain point. He does not need (or consciously consider) the detailed locations of the other tanks in the brigade. That is not negligence but rather the unconscious employment of an important human survival characteristic: when under stress, concentrate only on what is most important.
We see this often in command situations at sea. Sometimes it works and sometimes it doesn't. For example, when the Vincennes (CG-49) shot down Iran Air Flight 655, in the last two-and-a-half minutes of that tragic engagement the captain concentrated on the key indicators of aircraft course, speed, range to target, and altitude trend (off the combat information center consoles), and radio broadcasts.4 That is a lot to consider when under stress and time constraints, but it is nowhere near the total mass of data available from the Aegis combat system. In this case, if more data had been considered—specifically that from the fire-control radar, which showed that the aircraft was climbing and was higher than had been reported verbally to the commanding officer—the decision to fire might not have been made. One of the most difficult problems for commanders is to know what to know.
Data, Information, Frameworks, and Paradigms
Systems theory differentiates between data and information. Data is raw material; it needs to be processed before it becomes useful as information. Data saturation is a continual, real-life problem. For example, data was collected for thousands of years on the movement of the stars. Within this mass of numbers was all the raw material needed to deduce Newton's laws of motion or the law of gravitation. But what was lacking, until a few hundred years ago, was the basis for selecting the tiny fraction that could be used to establish powerful generalizations—a paradigm that established how to view the data, a framework for processing it and making sense of it.
This example is not an isolated one. In science, "knowing" always has meant "knowing parsimoniously." Only when scientists establish the right ways to view data—to summarize and characterize it—does the vast bulk of it become useful.
Advocates of total information dominance and network-centric warfare point to the ability of computers to sort, process, and selectively distribute information. This capability, however, is limited to elementary clerical sorting and some pattern recognition for photo interpretation. There has yet to be a computer program that can differentiate between a feint and a main effort.
In warfare employing TID and NCW, data will be as voluminous as that on the movements of the heavens. Unless there is a framework in which to view it, to understand its patterns, and to selectively concentrate on or ignore individual elements, its volume will be debilitating. Selecting the subset of data that is important and worthy of being converted into information depends on the paradigm employed. To convert TID data into information requires a paradigm for warfare—one we do not yet have.
A paradigm, or "mental model," is a fundamental mechanism by which human beings understand the world. It is a universal activity designed to extract meaning and understanding from masses of data. Humans build models that provide cause-and-effect relationships so that, in the future, when a cause is present they have a shortcut to understanding effect. For example:
Younger children continually ask, Why? They are not looking for a causal explanation in the adult sense of science. They want "connectors." They are looking for ways of filling in gaps and connecting up experience so that they get a more stable whole. . . . If there are no parents to provide the connectors . . . then the children have to create their own explanations and myths. The myths formed by adults who have no one to ask are of exactly the same nature. The history of science is full of connecting myths: "malaria" means the bad air from the swamp that gave people malaria.5
Military theory provides the connectors, explanations, and myths that allow us to make order out of the chaos of war. According to Clausewitz, "theory will have fulfilled its main task when it is used to analyze the constituent elements of war, to distinguish precisely what at first seems to be fused, to explain in full the properties of the means employed and to show their probable effects, to define clearly the nature of the ends in view, and to illuminate all phases of warfare in a thorough critical inquiry."6 Thus, one of the objectives of theory is to establish a causality model—the connectors between action and victory. We have not done that.
Today there are two schools of warfare. Attritionists believe that destruction leads to victory. Some see the U.S. Army as firmly entrenched in this school. For example, Robert R. Leonhard discusses the Army's "firepower mentality and traditional attrition strategy," as epitomized by such dictums as the "Four Fs: Find, Fix, Fight, and Finish."7 In attrition warfare a causal relationship is presumed to exist between physical destruction and victory.
In contrast, Marine Corps doctrine, as epitomized in Marine Corps Doctrine Publication-1, Warfighting, emphasizes maneuver warfare, "a series of rapid, violent, and unexpected actions that create a turbulent and rapidly deteriorating situation with which [the enemy] cannot cope. . . . the aim of maneuver warfare is to render the enemy incapable of resisting by shattering his moral and physical cohesion—his ability to fight as an effective and coordinated whole—rather than to destroy him physically through incremental attrition." Maneuver warfare establishes a causal relationship between destruction of the enemy's cohesion and victory.
Army and Marine Corps doctrine remain distinct. There is no unifying "causal model of victory" accepted by both, much less by the Air Force. Without a unifying paradigm, all the data of war—like all the data on the movement of the heavens—will remain only data. Our ability to convert it into information will be limited. Because we cannot begin to understand how to create an effective C4ISR system or to execute total information dominance and network-centric warfare, we fall back on such off-the-cuff measures as collecting 90% of the data, without a clue as to why we are doing it or how we are going to use it.
The Human Element in Decisionmaking
People make decisions based on different processes, using different information. People with "analog brains" best process pictorial and graphic information, and those with "digital brains" best process information in symbolic forms, such as words and numbers. Most individuals are a combination of the two but emphasize one.
It often has been remarked that the type of information an accountant finds compelling is not what best serves the salesman. What if we develop a similar dichotomy between our intelligence community and the operators? Some claim that this problem already exists. Without a clear understanding of human needs, limitations, and decisionmaking processes, such a mismatch would be a serious difficulty.
Even with the exact same information different individuals can reach different conclusions. Consider a situation where hot water is being poured on ice cream. Four people could look at this scene and interpret it four different ways:
- Solid being destroyed by a clear liquid
- Creamy liquid being created
- Solid being converted into liquid
- Solid disappearing
The factors that determine the interpretation are external to the incoming information. This phenomenon is acknowledged in such expressions as, Where you stand depends on where you sit.
To build TID and NCW systems properly, we must understand decision processes. The fact that we have concentrated on hardware rather than humanware increases the possibility that we are building the wrong things.
Stress, Time, and Decision Making
Human decisionmaking processes change under stress and time compression. As stress increases and time is compressed, decision makers tend to:
- Concentrate more on decisions and less on situational awareness. As situational awareness deteriorates, decisions are based more and more on an obsolete understanding of the environment.
- Become serial processors—problems are handled one at a time rather than in parallel.
- Abandon prioritizations. The problems that are addressed are not necessarily the most important, but those that just happen to arrive at the right moment.
- Change decisionmaking modes—from trying to obtain the best decision (optimizing) to a more knee-jerk mode.
- Rely on a limited fraction of the available information; sometimes the critical indicators are selected more because they are familiar than because they are relevant.
- Concentrate on short-term problems and delay dealing with longer term issues.
- Make more mistakes, and yet be less likely to recognize or acknowledge errors.
- Become wedded to the existing plan and make only small incremental changes, even when abandoning the plan would be the better course of action.
- Be influenced by different motives—such as the desire not to be embarrassed in front of their group—rather than by the operational goal.
- Increase their micromanagement of subordinates or freeze up and make fewer and fewer decisions.
Many things contribute to stress and time compression, such as fatigue, low morale, frustration, and danger. Even more significant to total information dominance and network-centric warfare are work overloads and a surfeit of unprocessed or irrelevant data. Collecting too much data and dumping it on a decision maker increases stress and contributes to the deterioration of command processes.
Things get even more complicated when we consider the human physiological responses to stress. For example, studies indicate that stress influences cognitive function and memory retrieval. Two minutes after a stress event memory is normal; 30 minutes after the event memory retrieval is impaired; four hours later it is again normal.8
It is significant that our officers are never introduced to the idea that stress can impair decision processes. We do not train them to be alert for the signs of stress-impaired decision making, so how can we expect them, when they become program managers, to specify that TID and NCW include features to limit adverse human effects?
In the Iran Air incident, we can contrast the situations of two decision makers, the commanding officers of the Vincennes and of the Sides (FFG-14):
- The Vincennes was under attack by Boghammers; the Sides was not.
- The Vincennes recently had entered the Gulf and received threat briefings on potential air attacks and on rules of engagement, which emphasized not to "take the first hit." Just days before, the Vincennes had been advised that F-4s and F-14s were operating out of Bandar Abbas airfield and to "be alert for more aggressive behavior."
- When the airliner first came up on radar it was identified on the Vincennes as "unknown, assumed enemy." The commanding officer of the Sides believed the track to be commercial at the outset. The decision processes thus proceeded from two opposite mind-sets.
- The workload/time compression on the Vincennes was extreme: two simultaneous engagements, command of a formation under attack, communicating with higher command and other remote units, damage reports, and other factors involving a rapid stream of processed and unprocessed data.
In the end, based on nearly the same data, the commanding officer of the Vincennes evaluated the track as hostile, and the commanding officer of the Sides evaluated it as commercial air. This difference is understandable—and even could be considered inevitable—when one considers human cognitive limitations, decision processes, and all the factors driving the decisions.
Would more data—TID or NCW—have helped the commanding officer of the Vincennes? Even with positive identification of the aircraft type, no envisioned ISR system would have been able to determine if, for example, the aircraft had been modified to drop bombs, or if the pilot intended to crash into the ship. During that last 2 minutes and 22 seconds between the admiral's permission for "weapons free" and "birds away," the limitations driving the decision process were not data limitations but rather limitations in the human cognitive processes. More data would only have clogged an already crowded process. For more information to have been effective, it would have had to be exactly the right information, provided within that 2-minute-and-22-second window.
Considerations of human cognitive behavior should be a design element in any future C4ISR, total information dominance, or network-centric warfare system. We also must understand organizational decision making and group processes. For example, groups are susceptible to "groupthink," the tendency for the group dynamic to suppress dissent. Individuals with differing views become reluctant to speak out, and information that runs contrary to group expectations is deemphasized or suppressed. Certain decision paths develop a kind of momentum that makes them difficult to derail. Groups also tend to accept higher levels of risk than individuals. Recognition of these phenomena should help structure how we design TID and NCW systems.
C4ISR that is not attuned to the human decisionmaking process is like a weapon without an aiming mechanism. We would not dream of acquiring a gun system without knowing what caliber bullet to use, how fast we could load it, how big the magazine is, and the overall rate of fire, yet we do not have a useful understanding of what information we should put into human decision makers, how fast we can load them, how much they can retain and recall, and the rate of fire of decisions.
The human is the governing factor in total information dominance and network-centric warfare. We must move to better understand warfare paradigms, human needs and limitations, and human decision processes, and design these factors into the systems at the outset. It is time to move to human-centric warfare.
Commander Zimm is on the senior professional staff, Joint Theater Analysis Group, at the Johns Hopkins University Applied Physics Laboratory. A nuclear-power-qualified surface line officer and graduate of the Naval Postgraduate School, he had 14 years of at-sea experience on carriers, cruisers, and hydrofoils prior to his retirement.
1. Conduct of the Persian Gulf War: Final Report to the Congress, Appendix C: Intelligence, April 1992.
2. BGen. Carol Elliott, USAF, "The Operator's Perspective," presentation to the Military Operations Research Society Workshop Analyzing C4ISR for 2010, 27 October 1998.
3. VAdm. Arthur K. Cebrowski, USN, "Network-Centric Warfare: A Revolution in Military Affairs," presentation to the 1997 Technology Initiatives Game, 8 September 1997.
4. Will and Sharon Rogers with Gene Gregston, Storm Center: The USS Vincennes and Iran Air Flight 655 (Annapolis, MD: Naval Institute Press, 1992), pp. 15-16.
5. Edward de Bono, Water Logic (London: Penguin Books, 1993), p. 53.
6. Carl von Clausewitz, On War (Princeton, NJ: Princeton University Press, 1976), p. 141.
7. Robert R. Leonhard, "Maneuver Warfare and the United States Army," in Maneuver Warfare: An Anthology, ed. Richard D. Hooker Jr. (Novato, CA: Presidio Press, 1993), p. 42.
8. D. J.-F. de Quervain, B. Roozendaal, and J. L. McGaugh, "Stress and glucocorticoids impair retrieval of long-term spatial memory," Nature 394 (20 August 1998), pp. 787-90.