In June, the U.S. government announced that it had suffered a vast breach of computer security. The databases kept by the Office of Personnel Management (OPM) had been hacked, and some or all of the contents taken. Initially that seemed bad enough, as every federal employee was clearly vulnerable to identity theft. The main government workers’ union demanded lifetime credit checks for all of those affected. It soon became clear that the breach had a much worse national-security consequence. Apparently the records involved included those of all security investigations over the past 30 years. They had been computerized to improve efficiency, but it seemed that no one had considered the security aspect of that decision.
To some, it looked as if the announced breach, which may involve as many as 18 million people, occurred because top leadership did not take security issues seriously. The union charged that as long ago as 2007 OPM management had been told that its computer systems were unacceptably vulnerable, but that little or nothing had been done. It is demanding large damages for what its members have suffered. The breach became apparent piecemeal, as firms hired to provide security fixes discovered just how bad the problem was.
There were also countercharges. Around 2012 OPM tried to ban personal use of its computers, such as checking email during lunchtime. The union reportedly protested, and a labor-relations board upheld the protest. OPM justified its ban on the grounds that personal use might make its systems vulnerable to “cyberterrorism,” but it is doubtful that the labor panel realized that a major national-security issue was involved. The union’s charge that the problem dated from much earlier may be intended to deflect attention from this particular dispute. However, it seems most unlikely that the penetration can be traced to any particular time frame or machine.
The OPM mess is one of many reports concerning penetration of systems that should not be open to public access, including power-plant control systems and even fly-by-wire aircraft controls. In each case, an Internet connection is convenient, but not essential. Without such a connection, there is no access for a hacker. With a connection, a smart attacker may be able to use an obscure pathway into a part of the system that was never intended to be accessible. In the case of aircraft, the access, if any, is via systems used to set up repairs after an airliner lands. They save considerable money by focusing maintenance only on what needs to be fixed. Power stations are connected to the Internet so that those controlling the grids know if they suddenly have to reduce power. It would of course be expensive to maintain a separate net connecting only power stations and grid-control centers. The OPM example suggests that such an expense would be well worthwhile.
Computer penetration involves a combination of system and human engineering. System engineering provides increasingly sophisticated forms of data encryption. When you read that a new firewall protects your computer system, you are reading about an advance in system engineering. Computer hackers try to penetrate this kind of security by trying numerous passwords. Really sophisticated systems make that difficult by demanding that users adopt complicated passwords that are not intuitively obvious (instead of your birthday, *G$*57811%&aQzT, for example) and by locking out anyone who fails to enter the right password within a limited number of attempts. The main security measures are the sheer complexity of the passwords and frequent changes, so that anyone who gets in one month is kept out a few months later.
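As a rough illustration of the lockout half of that defense, here is a minimal sketch in Python; the threshold of five attempts and the 15-minute lockout window are assumptions chosen for the example, not anyone’s published policy:

```python
import time

MAX_ATTEMPTS = 5           # illustrative threshold, not a standard
LOCKOUT_SECONDS = 15 * 60  # lock the account for 15 minutes

class LoginGuard:
    """Tracks failed logins per user and enforces a temporary lockout."""

    def __init__(self):
        self._failures = {}  # username -> (failure count, time of last failure)

    def is_locked(self, user):
        """True while the user has exceeded the limit within the window."""
        count, last = self._failures.get(user, (0, 0.0))
        return count >= MAX_ATTEMPTS and time.time() - last < LOCKOUT_SECONDS

    def record_attempt(self, user, success):
        if success:
            self._failures.pop(user, None)  # reset on a good password
        else:
            count, _ = self._failures.get(user, (0, 0.0))
            self._failures[user] = (count + 1, time.time())
```

The point of the mechanism is simply to make brute-force password guessing prohibitively slow, which is why attackers turn to the human approaches described next.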
This kind of barrier leads to the human-engineering approach. For example, instead of guessing the password, find someone who can be bribed. The majority of computer users are honest and will defend their systems, but often someone is not. That is aside from human problems such as choosing passwords that are easy to remember (and therefore easy to guess) or disabling measures that lock out senior employees who fail to enter their passwords properly. It is far too easy for a manager to suppose that he is safe because his system is regularly scanned for known viruses.
Gone ‘Phishing’
There are also more imaginative human-engineering approaches, the best-known of which is “phishing.” You receive a normal-looking email asking you to open something, such as a file that is supposed to be the shipping label for a package that has not been delivered or the agenda of a meeting you should be attending. The latter approach works best if the attacker has already gained access to the system, but only up to a particular security level. In both cases, opening the file lets the attacker insert a keystroke counter (a keylogger), which reports back the victim’s security code the next time he enters it. At some point the attacker gains full access to the system and can take whatever information he wants. If access is gained at a high enough level, there is no particular reason that the attacker should ever be detected.
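To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of automated screening a mail gateway might apply to such messages; the list of suspicious attachment types and the sender-mismatch heuristic are illustrative assumptions, not any real product’s rules:

```python
import email
from email import policy

# Illustrative list: executable types masquerading as documents.
SUSPICIOUS_EXTENSIONS = (".exe", ".js", ".scr", ".vbs")

def looks_like_phish(raw_message: bytes) -> bool:
    """Flag a message whose display name claims one address while the
    actual sender is another, or that carries an executable attachment."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    sender = str(msg.get("From", ""))
    display, _, addr = sender.partition("<")
    # A display name that is itself an email address, but not the real
    # sending address, is a classic phishing tell.
    if "@" in display and display.strip().strip('"') not in addr:
        return True
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        if name.endswith(SUSPICIOUS_EXTENSIONS):
            return True
    return False
```

Filters like this stop only the clumsiest attempts; the argument here is precisely that some messages will always get through.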
To some extent, a computer system can be protected by constant scanning to make sure that nothing like a keystroke counter has been installed, or by humans who may detect hostile software inserted into the system. There is, however, no guarantee that penetration will be detected at all if it is done cleverly enough. The whole point of phishing is to make it seem that nothing untoward has happened, and it is possible to remove a keystroke counter once it has done its job. A good defense is to keep changing passwords, not so much to keep intruders out as to force them to keep intruding and, hopefully, reveal themselves.
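One common form of such scanning is an integrity check: record a cryptographic fingerprint of every file that should be on the machine, then periodically look for anything new or altered. Here is a minimal sketch in Python; the directory layout and baseline file are assumptions for the example:

```python
import hashlib
import json
from pathlib import Path

def fingerprint(root: Path) -> dict:
    """Map every file under `root` to the SHA-256 hash of its contents."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(root.rglob("*")) if p.is_file()}

def save_baseline(root: Path, baseline_file: Path) -> None:
    """Record the known-good state of the machine."""
    baseline_file.write_text(json.dumps(fingerprint(root)))

def scan(root: Path, baseline_file: Path) -> list:
    """Return paths that are new or changed since the baseline was taken;
    a keystroke counter dropped onto the disk would show up here."""
    baseline = json.loads(baseline_file.read_text())
    return [path for path, digest in fingerprint(root).items()
            if baseline.get(path) != digest]
```

A logger that lives only in memory, or one hidden inside a file that legitimately changes, would evade this check, which is why the caveat above stands.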
Computer systems include their human operators. Without understanding human engineering, we cannot enforce any kind of security. This is not a particularly happy state of affairs, but it is not new, nor is it special to the computer world. In the 1980s the Reagan administration tried a counterattack. It knew that Soviet espionage resources, particularly hard cash, were limited, and it had reason to believe that nearly all Soviet penetrations had to be paid for. It set up a trap, in the form of special-access (“black”) programs, which would be particularly difficult to penetrate. It knew that to Soviet eyes, the better-protected the program, the more worthwhile the attack. The black programs could absorb a large proportion of Soviet espionage money.
It must have seemed obvious that the Soviets would manage to penetrate the black programs, just as our competitors today—in the current case, almost certainly the Chinese—have managed to penetrate many of our more important computer systems, such as OPM’s. The Reagan administration accepted this reality. Along with real programs, it invented fake black programs for the Soviets to penetrate. It knew that the Soviet government believed that American technology was so good that it had to be worth copying, even if Soviet scientists who looked at some of the projects said that they were physically impossible. It appears, then, that this effort drained both Soviet espionage and R&D resources (which were also limited) by diverting effort onto impossible paths. Intense Soviet work on “star wars” may or may not have been a case in point.
‘Poison the Well’
One might apply the same logic to computer security. System-engineering defense would be seen not as absolute protection against penetration, but more realistically as a way of raising the cost of penetration. The more effective the security, the fewer enemy attackers can get in. It is unlikely that all of them can be kept out, but we can raise the price and thus limit the number of systems an enemy can attack successfully. We can also keep changing software so that a success is fleeting; the same enemy experts have to keep coming back to the same systems. No matter how many hackers exist, only a few of them are good enough to attack the most sophisticated defenses.
The Reagan-era example suggests that we have to do something else. We have to poison the well. There has to be data that leads nowhere or, even better, contains software that can poison the computer used against us. Have we done anything like this? The world of defense-computer security is far too classified for anyone to say for sure. However, any whiff of such countermeasures would force our opponents to spend far more time and effort in handling whatever data they steal. They could never be quite sure of what they had. In the case of OPM, the fear is that, given the details of everyone’s life accumulated in a security investigation, an adversary would gain even more—including continued access—by applying new kinds of human pressure. What if the really attractive files turned out to be fake? What if they were designed to attract pressure and in doing so reveal enemy operations that would otherwise remain secret? Every so often there are hints that frustrated officials have demanded some type of counterattack. This is one form it might take.
This approach would certainly apply to many commercial victims of hacking. We often read that the XYZ corporation has had data on X million customers stolen; all of those customers have to fear personal consequences. If the database in question includes a percentage of realistic-looking dud data, anyone using the stolen version risks trying to use one of the false identities. At least in theory, that use should trigger countermeasures such as arrest or, if the user is an ocean or two away, something destructive entering his computer. Obviously this is part of a continuing struggle in which really sophisticated hackers would look for signs that data was phony. At the very least, however, it would increase the human cost of attacks that, until now, have seemed painfully easy. And yes, unless managers suffer for opening doors to computer attacks, they will not get the message. That too is a form of human engineering.
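In the security literature such dud records are called honeytokens. A minimal sketch of the idea in Python; the record fields and the eight-digit ID format are invented for the example:

```python
import secrets

class HoneytokenStore:
    """Seeds a customer table with fake records and flags any later use
    of those identities, which can only come from a stolen copy."""

    def __init__(self):
        self.honeytoken_ids = set()

    def make_honeytokens(self, n):
        """Generate realistic-looking but entirely fictitious records."""
        fakes = [{
            "customer_id": f"C{secrets.randbelow(10**8):08d}",
            "email": f"user{secrets.randbelow(10**6)}@example.com",
            "card_last4": f"{secrets.randbelow(10000):04d}",
        } for _ in range(n)]
        self.honeytoken_ids.update(f["customer_id"] for f in fakes)
        return fakes  # the caller mixes these into the real table

    def is_stolen_identity(self, customer_id):
        """True if someone is using a record no real customer ever held."""
        return customer_id in self.honeytoken_ids
```

Usage is simple: mix a few hundred fakes into the real table and route every login or credit application through is_stolen_identity. A hit is near-certain proof that the data in use was stolen, since the identity never belonged to anyone.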
Dr. Friedman is the author of The Naval Institute Guide to World Naval Weapon Systems, Fifth Edition, and Network-centric Warfare: How Navies Learned to Fight Smarter Through Three World Wars, available from the Naval Institute Press at www.usni.org.