The world was treated to two public examples of cyber warfare in April. The most spectacular was the release of about 11.5 million documents stolen from the Panamanian law firm Mossack Fonseca, to the gross embarrassment of many of its more prominent clients. Possibly even more important was the FBI’s announcement that it had gained access to a supposedly secure Apple smartphone (taken from a dead terrorist in the San Bernardino case) thanks to a million-dollar contract with a professional hacker. Both cases carry implications for more naval-oriented cyber warfare.
It might seem that both the law firm and Apple had every reason to try to secure themselves against cyber attack. When the story of the successful Chinese attack on the Office of Personnel Management (and its millions of security-clearance files) became public, there were snickers: The government was too blasé about cyber threats and was unwilling to spend enough to protect itself. Another charge was that judges who had ruled to allow OPM employees to use their work computers for lunchtime Internet surfing had no idea of cyber reality. There was a general belief that, however naïve senior government officials may be, a law firm whose business relied heavily on secrecy would be aware of the dangers and would spend enough to avert them.
Mossack Fonseca specialized, it seems, in creating shell companies that could be used to bury money outside its country of origin. The trove of Mossack Fonseca documents went to a consortium of 400 journalists from 107 media organizations, who worked together for about a year before publishing exposés in mid-April. The disclosures made a limited impression in the United States because no U.S. officials seem to have been involved. In Britain it made considerable news that Prime Minister David Cameron’s father had been a client, although it is clear that the Prime Minister himself was never involved.
It appears that Mossack Fonseca had been less than fully vigilant against cyber attack. For example, it reportedly failed to encrypt its emails, and its main system was not replaced or patched as subtle vulnerabilities were exposed in Internet discussions. Once known, such vulnerabilities are apparently always exploited. That might make the law firm seem responsible for its rather public downfall, but there is another side to the story. Constant patching of electronic systems is difficult and may create new vulnerabilities. If the system is large enough, a patch in one part of it may not be entirely compatible with other parts, and the cost in operability may be intolerable. Shutting down a big system periodically to replace large pieces of it may be impossible.
‘Always Vulnerabilities’
The Apple story played out against an ongoing fight between the company and the Justice Department. The company’s major selling point for new smartphones is that their contents, for example their call lists, cannot be recovered by anyone but the user; not even Apple itself can pry them open. Such phones are described as warrant-proof. Apple was warned that it might find itself in a position in which such a pry-proof phone had been used in a crime so dire that its policy would look mostly like an attempt to sell to criminals. At the time of the San Bernardino massacre, the Justice Department was already suing Apple to open phones involved in ongoing cases, mostly involving drugs. The point of the San Bernardino request was to find out whether the attackers, both of whom died in the operation, had been connected to other terrorist cells. There was considerable talk about the possibility that a network of sleeper cells existed, hidden unless it communicated; such a network could now exploit fears of violated civil liberties to survive.
This litigation is ongoing, but the successful penetration of the San Bernardino phone suggests that Apple’s claim that its new phones are truly warrant-proof is at best exaggerated. (The company says that the San Bernardino phone was uniquely vulnerable.)
There is another way to look at both cases. There are always vulnerabilities a determined cyber attacker can exploit; the only question is how determined and how well backed the attacker is. The practical problem is how to live with that vulnerability. It might be added that even before the cyber world, information was always vulnerable. The main difference was that for a given level of effort a penetrator generally got a lot less, simply because it was physically difficult to steal as much.
Perhaps it is time to look backward for analogies. One is to an unusual approach to security used during the Reagan administration. The administration realized that times had changed. In the past, the Soviets had relied on ideologically committed spies like the infamous Cambridge Five in England. There seem to have been many of them, and later analysis suggests that many were never caught. What ended the era of numerous ideological spies were raw demonstrations that the Soviet Union was not the benign power they imagined. For many it was Nikita Khrushchev’s denunciation of Joseph Stalin in 1956, followed by the crushing of the Hungarian Revolution that year.
After the 1950s, the Soviets continued to score successes, but now they did so with cash payments. The Reagan administration deliberately squeezed Soviet access to hard Western currency, realizing that any such squeeze would limit the number of penetrations the Soviets could undertake. More important, the White House understood that it could manipulate Soviet espionage priorities by establishing particularly attractive targets in the form of “black” (special access) programs. Not only did heavier classification prove to the Soviets that the targets were worthwhile, it also raised the price of penetration, since access was controlled far more stringently. It helped that the Soviets themselves thought only that which was specially protected was worth getting: If they did not pay for information, it could not be valuable.
Built to Take a Hit
The fight between cyber protection and penetrators can also be likened to the fight between armor and guns. The earliest ironclads spread their armor over wide areas, because it did not take much armor to keep projectiles out. That approach collapsed, because guns quickly gained enough power to punch through relatively light armor. Much of the story of battleship design during the 19th and 20th centuries involved attempts to concentrate protection where it was needed. That usually included a large percentage of the waterline of a ship. However, if battle ranges were long enough, a ship would not be hit many times, and her waterline would not be torn up. The hits that mattered would be single hits on particularly important elements of the ship, such as her command facilities and her own batteries. The modern case of antiship missiles takes this a step further. Such missiles are unlikely to hit in numbers, and they are too small to tear up much of a ship. Either they disable through wide-area effects such as shock, or a ship can be designed to survive a few missile hits.
Both the ship and the Reagan approach to security share some interesting features. The first is that the designer or security chief has to realize that an enemy will probably score hits. He can raise the price of penetration overall, and he should, because that protects the target from being overwhelmed. In the end, however, he has to face the reality that something will get through. That is the lesson of the FBI penetration of the Apple phone: Even a very sophisticated computer company cannot provide full protection. In the battleship era, no ship was likely to be perfectly protected.
In the battleship case, part of the solution was to prioritize. To some extent security systems do that by setting different levels of security, but in practice nearly all information is protected to much the same extent. The only virtue of such an approach is that a penetrator may be buried in the mass of available information. The one encouraging part of the Mossack Fonseca story is that it took the journalists a whole year to make sense of the mass of information they were given. Had Mossack Fonseca been more cyber-sophisticated, and had it realized that it had been breached, that year could have been spent making the trove of information obsolete. The journalistic coup would have been a lot less effective had it turned out that the revelations were wrong.
Damage Control
That suggests another approach to cyber defense, which is to plant misleading information among the mass of valuable data. That is what the Reagan administration did with many of its black programs; some of those running such programs remarked that they violated physical laws. Enough of the programs were real that the Soviets could never be sure whether they were penetrating real or false ones. The violations, moreover, were apparently subtle enough that it took the Soviets time and considerable resources to get to the point of finding out.
Another lesson from ship protection is that vulnerable but essential parts of a ship can or should be dispersed, so that one hit (penetration) is not fatal. For example, many steam warships were designed with alternating boiler and engine rooms, so that a single hit could not disable them. (In some cases design errors made such dispersal ineffective.) In the case of cyber protection, that would mean limiting the damage any one penetration could cause by making separate attacks necessary to get at different parts of the mass of information. In the Reagan case, one of the virtues of creating black programs was to split up the trove the Soviets sought, so it was far more expensive for them to get what they wanted.
Of course, the best way for Mossack Fonseca to have secured itself would have been physical separation between its mass of documents and the Internet. The company was dealing with many foreign clients who found it inconvenient or even dangerous to come to Panama, however, and apparently its lawyers could not carry the relevant documents to the clients (perhaps they should have done so). Without a sophisticated computerized database, Mossack Fonseca would not have been penetrated as it was, and its clients would have been far better served. However, the tide of computer integration seems impossible to stem, at least for now. That is not too different from past irreversible tides such as successful espionage and successful weapons. The question for us is whether we understand how to look back for relevant lessons.
Dr. Friedman is the author of The Naval Institute Guide to World Naval Weapon Systems, Fifth Edition, and Network-centric Warfare: How Navies Learned to Fight Smarter through Three World Wars, available from the Naval Institute Press at www.usni.org.