A spectacular cyber-attack has dramatized the potential of this type of warfare. Beginning in June 2010, computer security experts discovered a worm called STUXNET, which seems to have been designed specifically to attack Siemens industrial control systems running a particular type of software, perhaps even one particular target. Reportedly 60 percent of the infected computers are Iranian, but of course the Iranians deny that STUXNET has attacked either their uranium-enrichment plant at Natanz or their reactor at Bushehr.
A 2009 photograph of a reactor control monitor at Bushehr suggests that the Iranians have been using illegally copied software that cannot, therefore, easily be upgraded. In that case, the Iranian systems may be far more vulnerable to a cyber-attack than others. The Iranians have pointed out that many of their key computers are not connected to the internet, specifically to protect them against cyber-attack. STUXNET, however, seems to have been introduced via USB thumb drives. By default, Windows computers automatically run whatever program an inserted thumb drive designates (so that they can present the data stored on the drive). STUXNET caused the victim computer either to make an internet connection or, if that was impossible, to copy the worm onto any other thumb drive inserted into the computer. When computers are not linked, data are typically transferred by thumb drive, a method the U.S. Navy used to call “sneakernet” when it was used (with diskettes) to transfer data between shipboard systems with incompatible security requirements.
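The autorun convention described above is simple to illustrate. A removable drive can carry a small plain-text file, autorun.inf, naming a program for Windows to launch when the drive is inserted. The sketch below is a generic example of that file format, with hypothetical file names; it is not STUXNET's actual mechanism, which reportedly exploited subtler Windows flaws.

```
; Generic illustration of a Windows autorun.inf file on removable media.
; The program and label names here are hypothetical.
[autorun]
open=setup.exe        ; program Windows launches automatically on insertion
icon=setup.exe,0      ; icon shown for the drive in Explorer
label=Vendor Utility  ; drive label presented to the user
```

Because the named program runs with the inserted user's privileges and no further prompting, any drive prepared this way turns insertion itself into execution.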
Apparently, operators find it nearly impossible to resist inserting a USB stick they find nearby into their computers. The problem was demonstrated at Central Command in 2008, when USB drives were left (perhaps by a foreign intelligence agency) in a washroom at a U.S. base. One or more soldiers presumably pocketed the sticks and eventually put them in their computers, infecting the entire Central Command system with a virus that took 14 months to eradicate. According to some accounts, the virus in question had originated in Russia (though it was never clear whether this copy had come from there). No one in Central Command knew how many computers there were or of what types, and many operators had failed to install updated security software. At about the same time it was admitted that a supposedly harmless virus had operated in many or most U.S. military computers for several years, and that attempts to exterminate it had failed (it reappears after having been purged, so it is not clear where in the basic software it hides).
Leaving No Trace
If the STUXNET worm succeeds in triggering internet access, it reports details of the host control system to a server in Denmark or Malaysia. That computer replies with instructions to change part of the software in the control computer, for example to shut off a key valve or, in the case of a centrifuge, to reset the maximum speed. Such resets can cause catastrophic damage. Each of the thousands of centrifuges at Natanz has a software-driven controller. Each controller presumably runs the same software, and many if not all are linked so that software revised for one computer can be spread to the rest. It is not difficult to imagine the virus propagating rapidly through the system, perhaps choosing different times to affect different individual controllers.
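The essence of such an attack, as described in later analyses, is that a tampered controller not only exceeds its safe setpoint but continues to report normal values to its operators. The sketch below is a hypothetical illustration of that pattern, not STUXNET code; the class name, the RPM figures, and the stand-in authentication check are all invented for the example.

```python
# Hypothetical sketch (not STUXNET code): a malicious "signed" update raises
# a centrifuge speed limit while the controller keeps reporting the safe value.

SAFE_MAX_RPM = 1064       # illustrative nominal limit (assumed)
TAMPERED_MAX_RPM = 1410   # illustrative destructive overspeed (assumed)

class CentrifugeController:
    def __init__(self):
        self.max_rpm = SAFE_MAX_RPM
        self.compromised = False

    def apply_update(self, new_max_rpm, signed=True):
        # A legitimate update would be authenticated; stolen certificates
        # let an unauthorized change pass this stand-in check.
        if signed:
            self.max_rpm = new_max_rpm
            self.compromised = new_max_rpm > SAFE_MAX_RPM

    def reported_max_rpm(self):
        # A compromised controller reports the expected value to operators.
        return SAFE_MAX_RPM if self.compromised else self.max_rpm

controller = CentrifugeController()
controller.apply_update(TAMPERED_MAX_RPM)
# controller.max_rpm is now 1410, but reported_max_rpm() still returns 1064,
# so the operators' displays show nothing amiss.
```

The point of the sketch is the gap between the machine's actual state and what the monitoring layer sees; that gap is what makes such sabotage hard to detect before physical damage occurs.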
Moreover, STUXNET is apparently designed to erase enough of its own code after acting that it is difficult to be sure whether a particular computer has been affected. Some speculate that the STUXNET attack was responsible for the long delays in bringing Bushehr on line, and also for reports of declining production at Natanz (and of a major nuclear accident there last year). It is impossible to say whether any of this is true. A recent Iranian announcement of the arrest of nuclear spies may be intended to explain the problems without admitting that they are due to the success of a foreign cyber-attack, against which Iran may be nearly defenseless.
The worm is described as so sophisticated that it would have required a team of up to ten experts working for about six months. Its defenses against internal security systems included digital security certificates stolen from a firm in Taiwan, obtaining which likely also required considerable effort. The best evidence that STUXNET was a dedicated attack rather than a collective effort by hackers is that it exploited four separate security flaws in Windows, one of which had been announced but, oddly, not fixed, and three of which were previously unsuspected. Every time a virus or worm uses such a flaw, it reveals the flaw's existence and prompts its repair; no hacker, it is said, would willingly give so much away. There is no real indication of the worm's origin. The code supposedly includes a word of biblical significance as a file name, but that could indicate either Israeli origin (to tell the Iranians that the Israelis can attack them with impunity) or a red herring.
New Weapon of War
STUXNET is likely not the first cyber-attack launched by a government. When the Russians invaded Georgia in 2008, the Georgian national bank was brought down temporarily by a much simpler denial-of-service attack, equivalent to barrage-jamming. The attack was widely attributed to the Russians, but nothing could be proven. Taiwanese Web sites periodically suffer from similar barrage-jamming—again, it is impossible to prove that the Chinese government in Beijing is responsible.
Nor is STUXNET the first really sophisticated cyber-attack. Such attacks often go undetected, but in March 2005 the Greeks found that someone had secretly inserted 29 unwanted programs into their cell-phone switching system. The intrusion was discovered only because some of the programs were incompatible with a software upgrade, causing some text messages sent by another cell-phone operator to go undelivered. It took the Greeks two years to discover what had been done.
Generally, intrusive software is designed to exploit very specific features of the target software, so it reveals itself (or fails) when that software is replaced. More speculative reports claim that hardware or software is sometimes deliberately infected with a “kill switch” that can be activated remotely, for example, to disable Iraqi air defenses in 2003, or Syrian ones in 2007 (for the Israeli raid on the incomplete reactor).
Central Command changed the default setting of its computers so that they no longer automatically run whatever software an inserted memory stick carries. Apparently no one had recognized this default as a security problem, which should not be surprising given the vast variety of settings computer software embodies. Whether the change was made permanent, or whether some roguish individual can choose to reverse it, is not clear. Thumb drives cannot be done away with, because many computer nets are not interconnected, for either security or compatibility reasons, yet for operational reasons it is often vital to be able to transfer information from one to another.
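On Windows, the default in question is controlled by a single, long-documented registry value: NoDriveTypeAutoRun, a bitmask of drive types barred from autorunning. Setting every bit (0xFF) disables autorun for all drive types. A sketch of the change, assuming an administrator applies it machine-wide via a .reg file:

```
Windows Registry Editor Version 5.00

; Disable autorun for all drive types (bitmask 0xFF covers every type).
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
```

In a large organization the same setting would normally be pushed by group policy rather than edited by hand, which also makes it harder for an individual user to quietly reverse.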
It is not difficult to imagine a contaminated computer in one net, say in the service of a Coalition partner, copying a rogue program onto a memory stick alongside the vital data being transferred. Our own computer-security personnel cannot expect full access to the computer systems of our partners, not least because those systems contain national secrets the partners may not choose to share (we have our own secrets). In some cases rogue programs may have been inserted when the computers in question were originally made.
The worst problem is that it is so difficult to be sure what a discovered virus or worm is doing in the first place. Thus it is not clear how sure we are that the long-surviving virus, or whatever was introduced into Central Command in 2008, was as harmless as advertised. Hence the United States is currently constructing a software test bed into which viruses can be introduced in order to discover what they can do to us (simpler test beds already exist).
We cannot abandon computer networks; they bring far too great an improvement in military efficiency, and they make possible our current fluid style of warfare. We must accept that our computer systems will inevitably leak somewhat, and that given enough time our enemies can penetrate them. Perhaps the problem of cyber-security is like the problem of message security (encryption and coding). History suggests that, once a code or cipher machine has been compromised, total replacement is the only way to regain security. Modifications of the original system do not seem to work, because breaking it once gives the breaker too much insight. It is very expensive to change codes or coding machines, so those responsible tend to resist such solutions (the cyber-equivalent is total replacement of both hardware and software).
Those writing about the Allied penetration of the World War II German systems commonly conclude that the more open organization of the Allies protected them from the sort of arrogance which made it impossible for the Germans to believe that their Enigma system had been broken. Unfortunately that was not the case; the British, for example, used a compromised convoy code for at least two years after there was evidence that it was no longer secure. They abandoned that code only after outsiders, in the United States, presented clear evidence and demanded a change.
We can do a lot worse than learn from the past. Periodic replacement of hardware and software would raise the bar to any enemy attacking our computers, perhaps to the point at which nothing short of an expensive national effort would work. That would be an invaluable advantage to us.