After the attacks of 11 September 2001, the concerns and sentiments of others regarding the United States began to matter more than ever before. It was not so much the extremist rhetoric of the terrorists, which we had heard before, but the recognition from friends that our policies made the attacks foreseeable. We increased our focus on the perceptions of others and how they affect not only our economy and diplomacy, but also our ability to conduct this new war and defend ourselves. The nation asked why we were targeted; we need to answer.
Perception Is Reality
Around the clock and throughout the world, the information environment is shaped by words, actions, and images involving the United States and its partners, as well as its adversaries. These in turn shape and reinforce perceptions regarding the war on terrorism and the United States overall. This information free-for-all is waged in the domain of public opinion—on radio and television, in newspapers, over the Internet, by clergy—as well as behind the scenes through government-to-government contacts, and even in societies through discourse and discussion.
Situational awareness in this information environment is not just knowledge of words, images, and actions but also, more important, an understanding of how these are conveyed to and received by audiences and therefore how they are perceived. An example is Pakistan President Pervez Musharraf's August 2002 comments regarding Osama bin Laden and the 11 September attacks: "He was perhaps the sponsor, the financier, the motivating force. But those who executed it were much more modern. . . . I don't think he has the intelligence or the minute planning. The planner was someone else." Depending on the media outlet, Musharraf's comments were spun as anything from uncertainty about bin Laden's complicity to outright denial of his involvement.
When the United States released accuracy results from the war on the Taliban, The Times of India ran, "3 out of 4 Smart Bombs Hit Afghan Targets." The United Kingdom's Guardian said, "A Quarter of U.S. Bombs Missed Target in Afghan Conflict." The two headlines report the same statistic, yet they leave very different impressions. Each media outlet influences its audience differently, shaping and reinforcing its perceptions. Knowing how actions and events are presented and perceived is crucial to understanding how best to affect and inform particular audiences.
Although the United States is the focus of much of the world's media and interpersonal discussion, we are still limited in how we convey our messages. Our intended messages must compete not only in the free market of ideas, but also in markets that are more restrictive or limited, where they can be altered or even denied by the filters of contrary or adversarial messengers. We can be countered by foreign misinformation, disinformation, and propaganda that play to, perpetuate, and strengthen preconceived notions of the United States. Though many of these messages seem obviously false to Americans, they often are accepted by foreign audiences because they justify and feed those audiences' animosity and skepticism toward the United States.
In the context of the information environment, perception is reality. If, for example, an Islamic audience is told the Mossad carried out the 11 September attacks, they are inclined to believe it because of their propensity to believe the worst about Israel, even in the face of overwhelming evidence to the contrary.
During the war in Afghanistan, we tried to measure the effectiveness of our messages. As the battle turned our way, we saw our "popularity" rise dramatically. We suddenly were heroes who vanquished the Taliban and put al Qaeda on the run. Some took this to mean our messages finally were getting through. In reality, it was the success of our actions—and the perception of us as purposeful and forceful—not the messages we sent, that turned the tide in our favor. This is borne out by the steady drop in our popularity over the following months amid coverage of detainee handling at Guantanamo Bay and a resumption of hostilities in Israel, neither of which was tied directly to our message. Our short-term strategic informational victory could not be correlated directly to our specific informational efforts. Once the buzz about the victory ebbed and perceptions were deflated by Guantanamo and the Intifada, our messages seemingly no longer resonated.
How to Measure Success?
In conventional military conflicts, enemy force composition can be discovered through intelligence and open sources. Degradation of adversary capabilities can be measured against the whole. In the war on terrorism, however, adversary "end strength" has little meaning. It is not who and how many you arrest, but those you miss who make the difference, and those cannot be accurately counted.
In addition, if you make an arrest, you might assume it has deterrent value. Counterintuitively, it could incite existing terrorists or inspire new ones. This is not to say we should not antagonize the adversary, but the secondary effects of removing terrorists are much different from those of defeating conventional forces.
A better measure of progress in the war on terrorism might be the larger strategic informational objectives—building and maintaining a coalition and disrupting and degrading terrorist networks. These strategic objectives, however, are not easily quantifiable. Rather, they are more readily measured and assessed by sentiment—how do people feel about our objectives? We must assess dynamic, culturally sensitive, and complex foreign audiences, from the societal to the personal level.
There are three main message categories the United States must consider in measuring the information environment vis-à-vis our objectives:
* Intentional messages that have intended outcomes. An example is a policy speech intended for both domestic and foreign audiences, such as the State of the Union.
* Intentional tactical actions that have unintended strategic consequences. An example is the July 2002 attack on an al Qaeda cell in Afghanistan that killed more than 40 people at a wedding.
* Events or actions beyond U.S. control that nonetheless shape the information environment. An example is the coordinated bombings in Riyadh and Casablanca in May 2003, which advanced our objectives without any direct U.S. action.
Of these, only the first is planned to help shape the information environment, but all must be considered.
To measure and assess the collective psyche of an audience, you need to gauge what people think and know what influences them. Polling, surveys, focus groups, media assessment, and media metrics all play a role. All are imperfect, but they still can provide invaluable insight to inform policymakers.
Quantitative measures, such as polling and media metrics, provide valuable measurable data; qualitative assessments help capture context. Both are mutually supportive and intrinsically linked. A quantitative measure with no related assessment has little meaning, a number without context; a qualitative assessment without quantitative backing appears anecdotal and does not provide a sense of volume or impact for an issue.
Parameters of measurement and assessment to consider include the following (a brief illustrative sketch of how they might be recorded appears after the list):
* Frequency—How often is the measurement taken?
* Periodicity—Is the measurement weekly, quarterly, or without consistent intervals?
* Timeliness—How soon after measurement can it be confidently reported?
* Consistency—Can the measurement be compared to a prior baseline for trending?
* Representativeness—Does the sample reflect the demographic makeup and true sentiment of the audience?
* Bias and controlling sources—Is the sampled source (media or audience) influenced or controlled by the government or is it independent, and what political or ideological biases are present?
* Credibility—Is the sampled source seen as trustworthy and legitimate by audiences?
* Methodology—Is the method used recognized as rigorous and scientific by social science standards, or are there flaws and deviations that can skew the sample?
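To make these parameters concrete, the sketch below shows one way a single measurement could be recorded so that its caveats travel with its numbers. It is a minimal, hypothetical illustration in Python; the class name, fields, and sample values are assumptions made for this article, not part of any existing system.

```python
# Hypothetical illustration: one record per measurement, carrying its own caveats.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AssessmentSample:
    source: str                  # outlet, poll, or focus group sampled
    collected: date              # when the measurement was taken
    interval: str                # periodicity: "weekly", "quarterly", or "irregular"
    report_lag_days: int         # timeliness: days from collection to confident reporting
    baseline_id: Optional[str]   # consistency: earlier sample this one can be trended against
    demographics: dict = field(default_factory=dict)  # representativeness of the audience
    state_controlled: bool = False                     # bias: government-influenced source?
    credibility_note: str = ""                         # how audiences view the source's legitimacy
    methodology_note: str = ""                         # rigor, known flaws, or deviations

# Notional usage: a weekly media sample with a known state-media bias.
sample = AssessmentSample(
    source="state television monitoring",
    collected=date(2003, 5, 20),
    interval="weekly",
    report_lag_days=3,
    baseline_id="2003-W20",
    demographics={"audience": "urban, national"},
    state_controlled=True,
    methodology_note="keyword counts only; no tone coding",
)
```

Even this simple structure forces each number to arrive with its frequency, representativeness, and bias caveats attached, which is the point of the list above.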
Polls versus Media
Polling is the best purely quantitative measure that can be taken. But a poll assumes people are truthful, questions are not leading, subjects are informed, a representative demographic is sampled, and the environment allows forthright answers. Also, because polling is in effect a snapshot of how subjects feel at the moment they are questioned, it assumes answers are not influenced by a headache, a specific news event, or even the fact that they are being asked. Finally, what the people think (or feel) might not really matter if we are concerned more with what decision makers think.
The low-hanging fruit of gauging how a society thinks is the media. Media can be sampled consistently, captured easily, and measured at frequent and regular intervals, and they are written from an internal cultural slant. Whether state-controlled or independent, they generally provide a gauge of governmental policy and intent. The two greatest caveats are that media outlets are foremost in business to sell themselves and that they do not necessarily speak for anyone but themselves, even while shaping opinion. What gets published usually reflects the opinions of the editors and the elite establishment and not necessarily those of the government, opposition, military, clergy, or people. In an extreme example, Saddam's media hardly gauged the sentiment of the Iraqi people.
Media metrics are the analysis and quantification of ideas expressed in a given forum. They can be sampled far more frequently than polls, so trends can be established more easily and the effects of outliers diminished. Media metrics also reflect a more informed point of view than polling. They measure the ideas that audiences of interest are exposed to, either directly through consumption or indirectly through discussion.
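As a rough illustration of quantifying ideas in a forum, the sketch below counts how often notional themes appear in a set of articles, grouped by week, so volume and trend can be charted. It is only a toy keyword count in Python with assumed theme lists; real media-metrics methodologies also weigh tone, prominence, and outlet reach, and every name below is hypothetical.

```python
from collections import Counter, defaultdict
from datetime import date

# Hypothetical theme keyword lists; a real effort would build these with regional experts.
THEMES = {
    "detainees": ["guantanamo", "detainee"],
    "coalition": ["coalition", "allies"],
}

def weekly_theme_counts(articles):
    """articles: iterable of (publication_date, text) pairs; returns {iso_week: Counter}."""
    trends = defaultdict(Counter)
    for published, text in articles:
        year, week, _ = published.isocalendar()
        key = f"{year}-W{week:02d}"
        lowered = text.lower()
        for theme, keywords in THEMES.items():
            hits = sum(lowered.count(k) for k in keywords)
            if hits:
                trends[key][theme] += hits
    return trends

# Toy usage with two notional items from the same week.
sample_articles = [
    (date(2002, 8, 5), "Editorial criticizes treatment of Guantanamo detainees."),
    (date(2002, 8, 7), "Coalition allies reaffirm support despite detainee coverage."),
]
print(weekly_theme_counts(sample_articles))
```

Even this crude count suggests why volume matters: the same theme surfacing across many outlets in one week says something a single poll question cannot.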
Of course, a media outlet is not necessarily a reflection of society. In London, The Mirror ran horrific stories on the U.S. treatment of detainees at Guantanamo Bay. It also polled its readers on-line. While not a true random sample or statistically representative, the on-line poll revealed that 91% of more than 19,000 respondents approved of the treatment of the detainees. The paper and the poll paint opposite pictures.
We cannot easily measure the effects of single actions or messages in relation to our strategic informational objectives, but the effort is essential to informing policy. By capturing and characterizing the effectiveness of U.S. strategic communication messaging, we can better calibrate our communications and understand the information environment.
To that end, the Chairman of the Joint Chiefs is championing establishment of a Strategic Communication Fusion and Production Center to marshal the government's creative production skills into a single coordination point. Further, he proposes this center be overseen by a Strategic Communication Policy Coordination Committee—an interagency policymaking working group—to better synchronize and centralize U.S. strategic communication among public affairs, public diplomacy, and information operations. The measurement and assessment roles discussed here outline, with appropriate caveats, a way to fulfill the Chairman's desire to "provide validated feedback on program effectiveness" and could form the basis of any fusion and production center capability.
Lieutenant Commander Rowe, a reserve cryptologist, is mobilized and serving on the Joint Staff, Deputy Directorate for Information Operations. He would like to acknowledge Captain Chris Guyer, U.S. Naval Reserve (Ret.), for his guidance.