In May, the U.S. government circulated a wanted poster showing five members of a shadowy Chinese cyber-espionage unit. No one expects any of them to turn up in a U.S. courtroom, but the object of the publicity was twofold. First, it was intended to show the Chinese that the U.S. government takes their operations seriously, and that it can and will retaliate in some unspecified way. It is as pointless to ask the Chinese (and many others) to abandon cyber espionage as it would be to seek an international treaty barring any other kind of spying. The spies would stay in business, but some naive governments would abandon counterespionage and cease any spying of their own.
The second and probably much more important object was to raise awareness among U.S. companies that their trade secrets are being stolen via cyber espionage. Details of some techniques were released. Considerable publicity was given to the practice of “spear-phishing,” in which the attacker sends what appears to be an internal company notice by email, such as the agenda of an upcoming meeting. When the attachment is opened, the spear-phisher gains access to whatever the unsuspecting employee can reach through his company email account. To avoid such attacks, companies are being encouraged to sever connections between their internal email systems and the open Internet.
Cyber defense typically concentrates on schemes to block unauthorized access via encryption and firewalls. That might be described as an engineering solution. The question is how far such solutions can be undone by human action. For example, the most extreme form of security is probably to restrict access to those whose fingerprints or other physical signatures (such as retina patterns) a system recognizes. As for the usual defense by password, senior managers, notoriously impatient with security, fail to adopt strong passwords or to change them often enough. Some have successfully demanded access to their systems via laptops or even smaller portable devices, which can be stolen or otherwise compromised. The worst published case of this type involved the Indian strategic command-and-control system: several laptops used by system developers were stolen and later returned minus their hard drives.
Spying Made Easy
The Edward Snowden case should remind us that there are always individuals who either turn bad or who can be bought. What is new is the immense volume one individual can steal. Before Snowden, probably the worst U.S. espionage case was that of Jonathan Pollard, who stole thousands of documents. That theft took time and was physically daunting. The sheer bulk involved made Pollard vulnerable to detection. A thousand documents is probably well under a million pages, each of which might equate to a few thousand bytes. Pollard’s entire haul was probably no more than a gigabyte. Moreover, to identify the documents he requested and then copied, he or an accomplice had to spend time perusing classified catalogs. The process of requesting documents one by one doubtless slowed Pollard’s progress.
Now consider Snowden. He created web-crawling software robots that searched vast archives for anything fitting the specifications he laid down. These robots automatically collected material and delivered it to Snowden, who loaded his haul onto thumb drives that he carried out of his office in his pocket. Unless he had been physically searched each time he left, no one could have noticed what he was carrying. It was not like searching someone’s briefcase to see whether it contained secret documents. Nor did Snowden need cameras or photocopiers. He never had to return documents to avoid detection; he had perfect digital copies.
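The mechanics of such a specification-driven crawl are not exotic. The sketch below is purely illustrative; the directory layout and keywords are invented, and the actual tools Snowden used are of course not public. It simply walks a file tree and collects the paths of any text files matching a keyword specification:

```python
import os

def crawl(root, keywords):
    """Walk a directory tree and collect the paths of text files whose
    contents match any of the given keyword specifications."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    text = f.read().lower()
            except OSError:
                continue  # unreadable file: skip it and move on
            if any(kw.lower() in text for kw in keywords):
                matches.append(path)
    return matches
```

A few lines of standard library code, run with sufficient access, substitute for the catalogs, requests, and photocopying that constrained Pollard. Note that nothing here modifies the archive; the crawl itself leaves the files untouched.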
The current standard thumb drive, the kind one buys in a neighborhood copy shop, has a capacity of 32 gigabytes: in the terms above, up to 32 million pages of double-spaced copy (in reality fewer, because memory is now so inexpensive that computer files are stored relatively inefficiently). The difference between digital and physical files is enormous. It does not take much effort to clean out entire libraries, and the operation generally leaves no evidence.
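The arithmetic behind these estimates is easy to check. The per-page byte counts below are the rough assumptions used above, not measured figures:

```python
# Back-of-the-envelope check of the storage arithmetic in the text.
BYTES_PER_PAGE = 3_000            # assumed: a few thousand bytes per page

# Pollard: "well under a million pages" -- take 300,000 as illustrative.
pollard_pages = 300_000
pollard_bytes = pollard_pages * BYTES_PER_PAGE
print(pollard_bytes / 10**9)      # 0.9 -- under a gigabyte, as the text says

# A 32-GB thumb drive, at roughly 1,000 bytes per double-spaced page:
thumb_drive_bytes = 32 * 10**9
print(thumb_drive_bytes // 1_000) # 32,000,000 pages -- the "32 million" above
```

Even with generous assumptions, a single pocket-size device holds tens of Pollard-scale hauls.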
The only saving grace, if it is one, is that by using computers we now generate (and duplicate) far more files than ever before. Snowden’s cyber-crawler would collect everything fitting a specification; some human or humans then had to sift that mass of material to find what was useful. Many years ago someone jokingly commented that if we simply declassified everything, no spy would have the time to find what he really needed. Computers do simplify searching, but anyone who has used a search engine knows that before long it produces far more junk than real material. Classification certainly allows a searcher to winnow what is available. Historical researchers know, for example, that unclassified files are massive and nearly useless, whereas the formerly classified ones generally include what they want.
The other remarkable side of cyber espionage is that the spy need never get physically near his target. The five Chinese on the FBI poster never had to leave China in order to penetrate U.S. companies. On the Internet, there is no apparent distance between users. Someone in Shanghai might penetrate a U.S. company in San Diego as easily as a hacker a block from its gate, and the targeted computer probably could not tell the difference. The FBI announcement suggests that in reality location makes just enough of a difference so that the attacker’s computer, or at least its location, can often be identified.
What should we be doing about all this? We need to be much more aware of the consequences of successful large-scale cyber espionage. Our new weapons and other military technology are likely to get into hostile hands much more quickly than in the past. That is inescapable.
It seems, then, that classification of information has a limited lifetime, and that we have to shorten development and production cycles. That is very difficult for the physical parts of our systems, but software can be modified quickly. The U.S. Navy took an important step in that direction about 15 years ago with its ARCI (Acoustic Rapid COTS Insertion) program. ARCI began as a way of providing submarines with better computer processors in tune with the fast development cycle offered by industry (typically 18 months, sometimes faster). More important, the new hardware made it possible to insert new software offering new capabilities (the software cycle ARCI adopted was about twice as fast as the hardware cycle). The ARCI idea has since spread to the surface fleet and to aircraft. ARCI was conceived as a way of improving submarine capabilities without costly changes to massive sonar arrays (better signal processing was a much more potent upgrade), but it has gone much further.
‘Jamming on Steroids’
If one thinks of cyber espionage as signals intelligence on steroids, active cyber warfare is jamming on steroids. The most prominent example was the Stuxnet virus, inserted into Iranian industrial computers to destroy the centrifuges the Iranians were using to process uranium for their bomb program. The Russians used a simple form of cyber attack (denial of service, flooding a system with requests) during their war in Georgia. Some cyber criminals use a new kind of software that locks up a target computer’s files unless the victim pays ransom for the decoding key. The same software could attack vital military computer systems. Note that a cyber attack often leaves the cyber weapon in the victim’s hands, where it can be adapted to some new target. The Stuxnet virus was carefully tailored to its particular target, but presumably its design principles can be deduced and applied to some other victim.
Cyber attack (non-kinetic warfare) may seem to be an entirely new capability, but it is not too different from the potential offered in the past by successful code-breaking. Once you can read someone’s mail, there must be an enormous temptation to send misleading messages. In the past, the counterargument, at least in the United States and the United Kingdom, was that the ability to read the enemy’s mail was so important that nothing should be done to tip him off. It is not clear that the Soviets harbored similar fears; they seem to have been far more willing to exploit the fruits of their own code-breaking efforts. The Chinese learned much of their military practice from the Soviets before the Sino-Soviet split of the early 1960s. Would that make them more willing to chance cyber attacks against crucial U.S. military and civilian targets?
Jamming also seems analogous to cyber attack. In the past it has often seemed a very attractive alternative to the physical destruction of, say, an incoming missile. In theory a single jammer can defeat numerous missiles, whereas a single defensive missile can be used only once. The rub is that jamming usually requires detailed knowledge of the target; it would not do, for example, to attract an enemy missile instead of repelling it. In 1968 the U.S. Navy was investing heavily in jammers as an alternative to new defensive missiles when someone pointed out that it knew virtually nothing about the bulk of the threat missiles it planned to neutralize. That is why the SLQ-32 countermeasures set was conceived as a minimum-cost device; it was accepted that it could not defeat all comers.
Like the Chinese, we almost certainly conduct cyber espionage. At the least, our own efforts can help us thwart theirs. The real question is whether we should go further into the world of non-kinetic attack. As in jamming, if we are completely familiar with the enemy’s command-and-control system, we can predict the effect of whatever we are doing. That seems to have been the case with Stuxnet. There is probably no simple Chinese equivalent. What we really want is to be able to, say, turn off their air-defense system. There have been claims that such an attack preceded the war against Iraq. In the Iraqi case, it may have been relatively easy to discover the details of their system because it was designed by foreigners we may have subverted later.
A truly indigenous system would be a very different proposition, unless it turns out that cyber espionage makes it possible to probe and thus to model it. How could we be certain that we understood a complex foreign command-and-control system well enough to be sure of the effect of our attack? Moreover, how could those designing such an attack verify that they understood? It would be a bad joke if an attack intended to turn off someone’s air defense launched his strategic missiles instead. It may be worth pressing this point in public, lest our enemies fail to think it through.