A recent incident in Afghanistan illuminates a military aspect of the new world of integrated electronics, as exemplified by smart phones. Integration means that formerly disparate functions, such as those of a camera, phone, and Internet access, are combined seamlessly in a single device so one can feed off the others. In this case, four new attack helicopters were delivered to a U.S. base, the location of which had apparently been unknown to the enemy. Soldiers there photographed them and put the photos on the Internet, presumably using a social medium such as Facebook. Not long afterward the helicopters were all destroyed by a Taliban mortar attack. The base was secret, and it was unlikely that Afghan agents had tipped off the Taliban. What happened?
Technology was the culprit. The soldiers used smart phones to take the pictures; today about a third of all photographs taken by Americans are taken by phones rather than by stand-alone cameras. Smart phones generally incorporate GPS receivers, and they automatically tag their photos with GPS locations, which tell the user where the picture was snapped. In a digital world, that adds very little to the size of the photo, and it is usually a welcome feature—where were we when we saw that astounding thing on vacation? The phones’ GPS has made possible all sorts of applications undreamt of a few years ago. It is, for example, the inbuilt GPS that makes it possible for you to hit an icon on your smart phone and suddenly have all nearby restaurants pop up.
The integrated GPS feature can be turned off, but the default option is to leave it on; why would a consumer not want it? It may even be impossible to turn off completely, because we now live in a world in which government finds it useful to be able to track a cell phone or smart phone, a consequence largely of the same terrorism that has taken us to Afghanistan to fight the Taliban.
In this case, the GPS tagging had devastating consequences. Once the photos had been uploaded, they could be read by unintended parties—the Taliban in particular. The GPS tags, which are called metadata, could also be read. In effect, the soldiers innocently snapping photographs were broadcasting their location, and because the digital world is nearly instantaneous, the broadcast was almost in real time. Certainly the intelligence was available soon enough for the Taliban to exploit it.
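To see how little effort is involved, note that a photo's position is stored in its EXIF metadata as degree/minute/second rationals plus a hemisphere reference. A minimal sketch (the coordinates here are invented, not those of any real base) of turning such tags into a plottable decimal position:

```python
# Sketch: converting the GPS rationals found in a photo's EXIF
# metadata into signed decimal degrees. The DMS-plus-hemisphere
# layout follows the EXIF GPS tags; the sample values are invented.
from fractions import Fraction

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert degrees/minutes/seconds and a hemisphere reference
    ('N', 'S', 'E', 'W') to signed decimal degrees."""
    value = float(degrees) + float(minutes) / 60 + float(seconds) / 3600
    return -value if ref in ("S", "W") else value

# Hypothetical GPSLatitude / GPSLongitude rationals as a camera might store them:
lat = dms_to_decimal(Fraction(34, 1), Fraction(31, 1), Fraction(1234, 100), "N")
lon = dms_to_decimal(Fraction(69, 1), Fraction(10, 1), Fraction(4500, 100), "E")
print(round(lat, 5), round(lon, 5))
```

Anyone who can read the uploaded file can run the same few lines; no code-breaking is required.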
The Internet has made it possible to use what amounts to radio phones without risking the kind of radio location that, in the past, caused fatal problems. In an important sense the Internet is nonlocal. That is why it is so difficult to know who launches a jamming attack on someone’s e-network, or who is trying to steal various secrets (guesses are another matter entirely). However, the miracle of GPS and the standard practice of tagging photos (including those from many stand-alone cameras) with GPS locations turn many Internet messages into potential intelligence windfalls for enemies.
Mobile phones have long been more than just telephones, and in some ways our society is just waking up to that fact. For example, many courts have long banned cameras, which is why accounts of major trials generally feature sketches by professional court artists. However, most of us consider our cell phones and smart phones so essential that we are unwilling to be separated from them, and many court authorities are sufficiently deaf to the nature of technology that they have not realized that allowing a cell phone into a courtroom generally means allowing in a camera (federal courts do realize this, and they ban both phones and laptops).
The Afghan incident is a modern version of an old but insufficiently familiar story. The bottom line is that emitting anything electronic is dangerous, and anyone who emits thoughtlessly in a war zone is likely to be killed. This reality is often ignored because the world of electronic intelligence is so secret, both currently and, by extension, historically. That makes good sense when it applies to code-breaking, because news of a broken code leads an enemy to change it and thus wipe out many hours of work. Hence the elaborate Allied efforts during World War II to keep the Germans from suspecting that their Enigma system had been deciphered.
Only in the past few years has it become clear that the Germans suspected, but refused to believe, that their incredibly clever system had been compromised. Victims of code-breaking resist the revelation that they have been fooled, so strongly in fact that an enemy can often brazenly use code-breaking information without the victim acknowledging what is happening. Anyone who thinks that problem has to do with the nature of Nazi society ought to reflect on the fact that the U.S. Navy ignored more than a decade of fairly blatant evidence that the Soviets were reading its mail, which turned out to be the result of a combination of the Walker-family spy ring and the loss of crypto machines on board the USS Pueblo (AGER-2).
The Taliban case is, therefore, an unusual and laudable example of quick U.S. awareness of a serious electronic problem. Much of the success of the past lay simply in exploiting the fact that messages were sent to particular addresses: traffic analysis. When someone says that al Qaeda may be stirring because Internet chatter is up, he is reporting a form of traffic analysis.
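The principle is simple enough to sketch in a few lines: tally who is being addressed, without decrypting a single message body. The call signs and intercepts below are invented for illustration:

```python
# Toy illustration of traffic analysis: every message body is opaque,
# but counting destinations alone reveals where the activity is.
# All call signs and intercepts here are invented.
from collections import Counter

intercepts = [  # (sender, addressee) pairs pulled from monitored traffic
    ("HQ1", "UNIT_A"), ("HQ1", "UNIT_B"), ("UNIT_A", "HQ1"),
    ("HQ1", "UNIT_A"), ("UNIT_A", "UNIT_B"), ("HQ1", "UNIT_A"),
]

traffic = Counter(addressee for _, addressee in intercepts)
# The busiest addressee stands out even though no message was read:
print(traffic.most_common(1))  # [('UNIT_A', 3)]
```

A spike in such counts is exactly the "chatter is up" signal the analysts report, which is also why encrypting the addresses themselves, as the U.S. Navy later did, defeats the technique.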
Learning from History
We tend to denigrate this kind of intelligence (and to open ourselves up to it) because the spectacular achievements of the past are so often unknown. Many naval historians know that before World War II the U.S. Navy read Japan’s codes and used that knowledge to reconstruct major Japanese maneuvers. However, in most accounts the only specific fruit of that work was the realization, in 1936, that Japanese battleships were a lot faster than imagined, the consequence being the redesign of the U.S. South Dakota class. For some reason that does not rank as a world-shattering achievement.
What has received infinitely less publicity is that, using traffic analysis, the U.S. radio-intelligence team unraveled Japan’s 1930 maneuvers, which they discovered were designed specifically to test plans to counter the existing U.S. Pacific strategy—the thrust directly across the Pacific from Hawaii to the Philippines. The results really were shattering. It was obvious that the Japanese had been able to track U.S. exercises so well that they knew exactly what the U.S. Navy planned to do. That was the equivalent of what the Taliban did with the photos of the helicopters. Worse, it was clear that the Japanese plan would work. When Japan invaded Manchuria in 1931, President Herbert Hoover asked the Chief of Naval Operations (CNO), Admiral William V. Pratt, what the Navy could do about it. Aware of the dramatic lessons learned the year before, Pratt told the President that the Navy could do nothing. Worse, the Japanese would have good reason not to take any U.S. naval threat seriously.
This was a lot more important than technical intelligence about the speed of a Japanese battleship, and the U.S. naval leadership of the time took it seriously. There were two lessons. One was that U.S. strategy had to change. The radio intelligence explains a previously unexplained seismic shift in the U.S. war plan, from the direct thrust into Japanese waters to the step-by-step advance used during World War II.
The other lesson was that U.S. communications were fatally insecure. The Japanese had tracked U.S. exercises (as the U.S. analysts had tracked the Japanese) by exploiting radio call signs, in effect the radio addresses to which messages were sent. The U.S. Navy promptly encrypted the addresses and adopted radio practices (the “Fox schedule”) that countered traffic analysis. It did something more profound, too. It set up a “red team” whose role was to monitor its own radio traffic to see whether it could be exploited. We now know that the red team approach succeeded brilliantly.
These incidents did not make it into the short histories of U.S. naval radio intelligence that have been widely used during the past few years. They are buried in a much longer (and physically messier) history of pre-1941 U.S. naval radio intelligence compiled sometime early in World War II. It seems ironic that an achievement at least as great as the code-breaking leading up to our victory at Midway has been almost completely forgotten.
As for our own security, it helped a great deal that communication was in the hands of a limited cadre of specialists, who could and would respond to instructions to avoid dangerous practices. Not everyone had a pocket-sized radio. But incidents like the Taliban mortar attack will continue and multiply unless we get better at red-teaming ourselves and at understanding our own rapidly changing communications technology. We are unlikely to gain awareness, on the necessary wide scale, without spreading a lot more knowledge of what, in the past, was seen as the black art of exploiting enemy communication.
The story of the interwar U.S. Fleet is far from unique, and during World War II many who should have known a lot better were killed by poor communications discipline and, a lot worse, poor comprehension of what could be gained. Only in the past few years, for example, has it become clear that many of the German successes in North Africa came not from code-breaking but from simple direction-finding of British command transmitters, even as others in the British army were chipping away at the Germans by reading their radio messages.