On 25 October 1415, at the Battle of Agincourt, an English army of 6,000 archers, 1,000 men-at-arms, and a few thousand footmen defeated a French army five times its size. The reason—aside from Henry V's stirring St. Crispin's Day speech—was faulty threat assessment. The French, well prepared to defeat mounted knights, were decimated by the longbow. A strategy effective against one type of enemy proved fatal against another.
As the United States prepares for the 21st century, we may be setting the stage for a high-tech Agincourt. The latest revolution in military affairs (RMA), symbolized by Admiral William Owens's "system-of-systems," allows the realization of dominant battlefield knowledge—the attainment of "perfect information" that promises to lift the fog of battle. Unfortunately, no RMA can remove the friction of war completely; successive revolutions simply replace one set of antagonists with another. This one presents novel tactical problems of information overload and counter-information warfare that must be anticipated and addressed.
At the strategic level, this RMA presents a more fundamental problem. Is the United States preparing for yesterday's wars? Is major theater warfare the most plausible scenario of the future? The thought of firing precision-guided munitions against Scud missile launchers is attractive, but it presupposes both a definable aggressor and a quantifiable target. In the next war, the United States may have neither.
Changing of the Fog
Seeing with clarity the entire battlespace should, one might assume, remove all unknowns from the calculus of battle. Unfortunately, by putting the battlespace under a microscope, the system-of-systems reveals excessive information. The result is like magnifying the back of your hand; the added detail obscures the familiar and creates uncertainty, confusion, and possibly even fear. Battlespace resolution is not linear, but a continuum. When detail becomes too dense, the fog rolls back in.
This is information overload, a condition made worse by the computer age. There simply is too much for a human being to process. Instead, lower-resolution systems, such as computers, must preprocess data for human consumption. Because of the nature of the exercise, these systems are vulnerable to surprise. Outlying cases (e.g., the Inchon landing, the Nazis in the Ardennes, and Pearl Harbor) are pre-excluded because of their statistical improbability. The inexplicable is ignored. When the information harvest is so great, separating the wheat from the chaff becomes a Herculean task. With so much noise, the critical signal all too often is lost.
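The pre-exclusion hazard can be made concrete with a toy sketch (the intercept counts and the two-sigma rule are invented for illustration): a preprocessing filter that discards statistically improbable reports as noise throws away exactly the anomaly that mattered.

```python
import statistics

# Hourly counts of enemy radio intercepts along a quiet front (notional data).
intercepts = [4, 5, 3, 6, 4, 5, 4, 48, 5, 4]  # hour 8 carries the warning

mean = statistics.mean(intercepts)
stdev = statistics.pstdev(intercepts)

# A naive preprocessor treats anything beyond two standard deviations
# as sensor noise and drops it before a human ever sees it...
filtered = [x for x in intercepts if abs(x - mean) <= 2 * stdev]

# ...which silently removes the spike that signaled the buildup.
print(48 in filtered)  # False: the critical signal is gone
```

A filter keyed to the median rather than the mean would be more robust, but the underlying point stands: any rule that equates "improbable" with "ignorable" is blind to the Ardennes.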
If the critical detail does come to light, it has cleared only the first hurdle. Raw information is of limited use until it can be synthesized and used to affect operations. Authors Martin Libicki and James Hazlett, for example, highlight the vast difference between access to meteorological information and determining that a particular locus of operation will be fogged in, 24 hours hence (a distinction that had relevance during the Falklands campaign). In their words, "the art of operational planning is not acquired automatically with the acquisition of computers." Operational planning requires human intervention and human decision making.
Decision making in an information-rich environment poses unique hazards. Superiors may micromanage, and subordinates—with access to a richer information set—may find it easier to undertake action on their own. The risk of confusion and contention in crisis, when a clear chain of command is most important, is grave.
Confusion also mounts as individuals receive information from multiple sources asynchronously. As conflicting reports drift in, individuals may begin to weigh information according to the order it is received, rather than by the reliability of the source. This is problematic not only because first reports often are wrong but also because individuals receiving the same information but in a different order may form different conclusions. Those conflicting perceptions can have a detrimental effect on decision making.
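The order-of-arrival problem has a simple formal illustration (the report weights and the recency rule are notional): updating beliefs by each report's evidentiary weight is order-independent, while a rule that leans on whatever arrived last is not, so two staffs seeing the same reports in different sequence can reach different conclusions.

```python
from functools import reduce

# Each report carries a likelihood ratio for "enemy is massing"
# (values are illustrative, not drawn from any real assessment).
reports = [3.0, 0.5, 2.0]

def bayes_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each report's likelihood ratio; order-free."""
    return reduce(lambda odds, lr: odds * lr, likelihood_ratios, prior_odds)

def recency_weighted(estimates):
    """Naive rule: each new report pulls the estimate 60% of the way over."""
    belief = estimates[0]
    for e in estimates[1:]:
        belief = 0.4 * belief + 0.6 * e
    return belief

# Principled updating gives the same answer regardless of arrival order...
print(bayes_odds(1.0, reports) == bayes_odds(1.0, list(reversed(reports))))   # True
# ...but weighting by recency does not.
print(recency_weighted(reports) == recency_weighted(list(reversed(reports))))  # False
```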
In an information-rich world, crises no longer occur in isolation. In all but the most sensitive situations, decision makers can assume that the media will be aware of the crisis and have access to analogous information in short order. David Alberts calls this the "fish bowl" environment, and in it decision makers must appear decisive, even if their information still is limited. The fish bowl also can encourage collective decision making. Constructive dialogue and debate normally are beneficial, but the fish bowl can prevent a leader from making a decisive choice from a menu of options. Instead, the decision will coalesce around the "group consensus," to avoid the risks inherent in choice, especially if there are several different perceptions of the actual situation. The lowest-common-denominator decision is rarely the best.
An overabundance of information can cause decisions to be made too quickly, but the opposite also is true. As battlefield commanders and their superiors become accustomed to "near perfect" information, their willingness to delay decisions until "perfection" can be attained increases. Such inactivity will be punished severely in tomorrow's battlespace. Modern McClellans and their armies will suffer greatly at the hands of those who can seize the initiative and take risks that "fish bowl" behavior precludes.
These problems illustrate the pitfalls of decision making in an information-rich world, and they will be compounded in combat by a key factor—the enemy. As a war progresses, the enemy can be expected to impede the flow of information that commanders have learned to value. Evidence suggests that this will not be difficult.
Admiral Owens's "system-of-systems" employs cutting-edge technology that is not likely to trickle down to potential aggressors for several decades, but the current RMA is not dependent on state-of-the-art systems. The fundamental concept is the linking of information to weapons, and the cost of that linkage can be very low. In other words, almost any Mom and Pop threat could participate:
[I]n contrast to earlier RMAs, such as the Nuclear RMA, or the Dreadnought RMA, the entry fee into the Information RMA is low.... most of what goes into the Information RMA can be purchased for only thousands of dollars each in world markets. Imagine what a sophisticated middle income country could do with a few thousand French and/or Russian precision guided munitions (PGMs); a few hundred unmanned aerial vehicles (UAVs) (from any of 30 countries); digital video cameras; personal computers; cellular switches, phones and pagers; GPS and pseudolite receivers; pocket radars and night vision goggles; plus archived Power-scene maps combining purchased space imagery and topography, all integrated by a few hundred U.S.-trained engineers—a Radio Shack System-of-Systems.
An inventive enemy doesn't even need a PGM or a UAV; an IBM 486 and a modem are enough to wreak havoc. The public and private communication systems that U.S. businesses have come to rely on constitute one gigantic, defenseless, strategic target:
[T]he means exist to cause significant damage and disruption to U.S. public and private information assets, processes, and systems, and to compromise the integrity of vital information .... Such an attack, or the threat of such an attack, could thwart our foreign policy objectives, degrade military performance, result in significant economic loss, and perhaps even undermine the confidence of our citizens in the Government's ability to protect its citizens and interests.
The "beauty" of such an attack is its simplicity. To halt the use of the Internet as a communication medium, an enemy does not have to hack into a computer or destroy information—he just has to deny access to the service. In effect, he creates a massive Internet traffic jam, blocking legitimate use. The attack is easy to mount, difficult to trace, and within the grasp of anyone who has a working knowledge of UNIX and TCP/IP (the core Internet protocol suite). At present, there is no adequate prevention.
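The traffic-jam idea can be illustrated without any network code at all, as a toy queueing model (the capacity, timeout, and arrival rates are invented): a server table of fixed size, filled with bogus requests that never complete, leaves no room for legitimate users.

```python
from collections import deque

CAPACITY = 5   # pending connections the toy server can hold at once
TIMEOUT = 8    # ticks before an abandoned connection is reaped

def simulate(ticks, bogus_per_tick):
    """Count legitimate users turned away over a run of the model."""
    table = deque()        # ages of pending, never-completed connections
    refused_legit = 0
    for _ in range(ticks):
        # Age the pending table and reap entries that have timed out.
        table = deque(age + 1 for age in table if age + 1 < TIMEOUT)
        # The attacker floods with requests he never intends to complete.
        for _ in range(bogus_per_tick):
            if len(table) < CAPACITY:
                table.append(0)
        # One legitimate user tries each tick; a full table turns him away.
        if len(table) >= CAPACITY:
            refused_legit += 1
    return refused_legit

print(simulate(ticks=100, bogus_per_tick=0))   # 0: every legitimate user served
print(simulate(ticks=100, bogus_per_tick=10))  # 100: every legitimate user refused
```

The model mirrors the resource-exhaustion logic described above: nothing is hacked and nothing is destroyed; the service is simply kept too busy to answer.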
One can take little solace from the fact that most military sites—at least in theory—would be immune from such incursions. A prospective enemy has achieved his strategic objectives if he halts even some civilian communication over the Internet. More important, there exist far more destructive means of information warfare.
Computer virus warfare is one of many methods now under development to destroy information flow. These viruses resemble the nuisances that affect personal computers, except that commercial antivirus software won't come to the rescue. Research in the new field of crypto-virology is developing viruses that are polymorphic and self-encrypting. These self-replicating, mutating strains can be used to mount denial-of-service and extortion attacks, where critical information in a system is encrypted and will remain so until a "ransom" is paid to the virus writer. Attempts to remove the virus will render the system useless.
We have yet to see viruses of this type in the wild, but their arrival is simply a matter of time. As information and our reliance on it grow, vulnerability to computer virus warfare and other offensive forms of information warfare (such as high-energy radio-frequency guns, a poor person's electromagnetic pulse) will only increase. In addition, reliance on commercial, off-the-shelf systems creates other opportunities for security breaches. In a recently reported case, U.S. intelligence officials hacked into European Union computers before negotiations on the General Agreement on Tariffs and Trade—aided by the fact that parts of the system that controls entry were made by two American firms.
To counteract these serious vulnerabilities, there needs to be an understanding of the problem and a dedication to planning exercises where information systems are degraded—or are not operational at all. Unfortunately, there is no evidence that either is now present. In fact, many of Admiral Owens's comments appear to underestimate the nature of the threat he is facing:
[T]he computer and communications technologies on which the system-of-systems is based are becoming less, not more, susceptible to the various forms of corruption and interference. A race will always exist between those who try to ensure the security of information-based systems and those who seek to overcome their security measures. Yet, the trend favors the defense.
The trend unfortunately does not favor the defense. Virus detection is, in the general case, an undecidable problem, which makes it highly unlikely that protection systems predicated on detection will be successful. Viruses, however, are just one of many threats facing Owens's system-of-systems. To knock out a complex system, one does not need to launch a sophisticated frontal assault; one merely hits the system's weakest link. A real threat is posed by simple back-door entry, whether by a bored teenage hacker or a hostile aggressor.
Admiral Owens discusses the ability of the system to "gracefully degrade" if attacked. This is system redundancy: the ability of the system-of-systems to function even if one part is incapacitated. In this, the admiral is correct. Any scenario that has the entire system-of-systems destroyed is farfetched. Yet, as commanders become accustomed to information generated by a fully functioning system-of-systems, the importance of war gaming the system in a degraded mode increases. Unfortunately, the results of the few exercises that have occurred to date have not been positive. For things to improve, the mind-set that these systems are invulnerable must change. The Gulf War was a tactical aberration. The next time the United States enters battle, the enemy will attack its information flow, and the results may be painful.
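Both halves of the argument, that total loss is farfetched but degraded operation is the realistic case, can be read off a back-of-envelope reliability calculation (the node count and survival probability are assumptions for illustration, not doctrine).

```python
from math import comb

def p_at_least_k_up(n, k, p_up):
    """Probability that at least k of n independent nodes survive an attack."""
    return sum(comb(n, i) * p_up**i * (1 - p_up)**(n - i)
               for i in range(k, n + 1))

n, p_up = 10, 0.9  # ten nodes, each surviving with probability 0.9 (assumed)

# Operating with every node intact is actually the unlikely case...
print(round(p_at_least_k_up(n, n, p_up), 3))  # 0.349

# ...while a degraded-but-functioning 7-of-10 system is near certain...
print(round(p_at_least_k_up(n, 7, p_up), 3))  # ~0.987

# ...and losing all ten nodes at once is farfetched.
print(f"{(1 - p_up)**n:.0e}")  # 1e-10
```

The arithmetic supports the admiral on total destruction, and the article on exercises: the probable wartime condition is neither a dead system nor a pristine one, but something in between, which is precisely the mode that should be war gamed.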
The United States should not abandon the system-of-systems. One cannot ignore an RMA and expect to field a viable force. The problems of information overload, decision making in crisis, and information dependency, however, are real threats that must be addressed. We must take them seriously, and begin to practice scenarios where systems "gracefully degrade."
A Force/Threat Mismatch?
Almost all future-warfare scenarios assume a major theater warfare-level threat, with good reason: information-based warfare works best against industrial-based warfare. Yet, most analysts agree that future threats will be of the low-intensity variety. What then develops is a force-threat mismatch. The United States has high-technology, 21st-century weapons to fight a standoff war when, in fact, we will face modern-day guerrillas and peace-enforcement operations. War games are using new technology to refight Desert Storm, but few are refighting Vietnam to see if technology makes that conflict winnable.
Is this RMA a useful weapon against low-intensity threats? Former Chairman of the Joint Chiefs of Staff General Colin Powell says no:
I've had some people come up to me and brief me about the revolution in military affairs and information warfare and say, "See, we can get rid of divisions and corps and brigades. All we need is a general at SAC headquarters or STRATCOM headquarters in Nebraska and he will have perfect vision of the enemy and perfect targetability of the enemy and we will have weapons with the same level of precision and all we got to do is find the bad guys and drop a bomb on them." And I said, "I'm afraid the world isn't that simple." Because first you have to have a cooperative enemy who is willing to paint himself so that you can see him, and then to stand there long enough to wait for the bomb. In Somalia, for example, I got lots of assurances about how we could find people. They didn't emit. They just didn't emit. I would love to see the RMA go after the Taliban in Afghanistan. It . . . won't work.
To fire a PGM at an enemy, you first need a target. In low-intensity conflict, not only are you lacking a target, in many cases, you also are lacking a clear aggressor. Like the Redcoats of old, this RMA works best against an opponent who marches in a straight line:
Organizational decentralization may not totally destroy the effectiveness of RMA technology, but certainly erodes it. Saddam Hussein's Iraq or the other Third World caricatures of the Soviet Union are perfect opponents for a RMA-type military. Driven by the well-earned paranoia of tyrants, they have highly centralized military forces. This prevents coup d’état, but also limits the chance of military victory against determined advanced states. Future insurgents, terrorists, and narcotraffickers will not be so stupid.
The U.S. military is so focused on one particular type of opponent that defeat may come at the hands of a different antagonist. We are building a technological base and a force structure on the major theater warfare model, when in fact the threats of tomorrow may call for very different strategies and tactics:
A U.S. military configured exclusively for use against subnational or nonmilitary enemies would look very different from the projected Force XXI. Individual units would be small but very flexible, able to deal with enemies with a tremendous range of capabilities, from high-tech niche opponents to low-tech warlord militias. In fact, the entire combat arms component of the Army might be composed of Special Forces.
The challenge is clear. Technology does not move backward, and one ignores an RMA at grave peril. Yet, one cannot apply technology to the most convenient threat at hand. The next war will likely not take place in the tactical heaven of a desert against a stand-up, shoot-me-down enemy, but in the tactical hell of chaos where the enemy will not emit. It is in places like Somalia, Bosnia, and Rwanda, and not in Iraq, where the ultimate success or failure of our forces will be decided.
The U.S. military must tailor technology to the low-intensity contingencies of tomorrow, adapting complex weapons to work in chaotic environs. To do nothing risks falling victim to a 21st-century Agincourt.
Mr. Callum recently graduated from the Maxwell School of Citizenship and Public Affairs at Syracuse University with master's degrees in international relations and public administration. He would like to thank Sean O'Keefe for his support in preparing this paper.