Building Information Security Layer by Layer

By Vice Admiral J. M. McConnell, U.S. Navy (Retired) and Edward J. Giorgio
  • The threat is evolving. The U.S. Navy traditionally has not fought criminals, terrorists, and economic competitors, but such adversaries now pose significant cyberspace threats to naval operations.
  • Adversaries coexist on the same commercial networks that the Navy depends on for operations, so taking out the adversary's network by physical destruction is not an option.
  • Not all adversaries abide by the same rule of law adhered to by nations such as ours, giving them a potentially significant information warfare (IW) advantage.
  • It is increasingly difficult to distinguish between an insider and an outsider.

Changing Warfare

The United States maintains a significant conventional (platform-centric) warfare advantage over potential adversaries. This includes the use of information technology to control, guide, and synchronize those conventional assets. Our adoption of network-centric warfare attempts to extend those advantages even further; however, the dangers of this strategy and the net effect on warfare are not well understood. This uncertainty is compounded by the shift in warfare that increases the importance of defending against complex, nontraditional threats such as terrorists, criminals, and rogue states.

Should a Middle Eastern adversary assemble a naval threat to the Persian Gulf sea lanes, a platform-centric response by U.S. naval sea and air assets would be a formidable counter. But the changes brought about by information technology imply a fundamental change in future conflict, one more reminiscent of the Revolutionary War, when the British lined up their redcoats at Lexington and Concord and the colonists stood behind trees and fired at will. The colonists changed the rules of engagement.

To be sure, organization, equipment, and experience remain essential for war fighting, but a dramatic shift is occurring away from equipment (firepower) and toward skill and experience. Defending sea-based platforms and global information assets against a skilled IW attacker pits the United States against an insidious threat. Modest but innovative investments in the hands of highly skilled personnel organized in a cooperative (not hierarchical) fashion can produce highly leveraged effects; the computer virus is a case in point. The shift is not unlike the way the atom bomb changed the nature of strategic threats and became the precursor to the Cold War. Mutually assured infrastructure collapse could become the scenario, except that the adversary will have much less at stake than the United States.

This growing threat is the result, in part, of the greater number of people who have a motive and possess readily available tools. Estimates indicate that the web will conduct more than $200 billion of business by 2000. There certainly is an economic incentive for terrorism, crime, and industrial espionage, which will lead inevitably to additions to the existing tool kits for attack. These tools can be used to exploit the very same vulnerabilities on the Navy Virtual Intranet (NVI).

Against overwhelming firepower odds, but armed with the right technology, terrorists (and other complex threats) are capable of inflicting great damage on the physical assets of a large nation-state that depends increasingly on networking and electronic data storage. We should not fall into the trap of assuming that we will sink ships with conventional weaponry and that the enemy will be a nation-state that does likewise. Historically, seemingly overwhelming odds frequently have been overcome by finding the enemy's Achilles' heel.

Resulting Vulnerabilities

Recent worldwide exercises showed just how vulnerable web technology is when deployed across DoD classified networks that have seemingly protected connections to unclassified networks. Most current models presume that the enclaves within any unit, ship, or command are separated from other networks by air gaps or strong firewalls. But this separation is sometimes violated by the existence of "virtual connections," which arise through a combination of system vulnerabilities and unapproved connections. Either could render the entire enclave susceptible to a devastating attack. The resulting vulnerabilities are growing daily, as information consumers demand real-time access to multiple information and intelligence resources.

A notional taxonomy of vulnerabilities would include wiretapping, password sniffing, viruses, Trojan horses, software substitution, penetration, weaknesses in auditing and intrusion detection, operating system design flaws, implementation errors, and many more. Some of these vulnerabilities exist naturally in systems that protect classified information. Others need a mechanism for covert insertion, which an adversary could accomplish by hacking, by publishing web pages containing viruses, by modifying software while it is in distribution channels, with the assistance of an insider, by a midnight break-in, by cooperation from a network service provider, and so on. Because of these scenarios, we deploy cryptographic mechanisms such as authentication and data integrity, but it is critical to understand how easily these protective mechanisms can be bypassed.

For example, we might have a policy that requires all new software installations to include an integrity check that uses a digital signature certified by a trusted authority. That will do little, however, to guard against a published web page carrying a virus. Web content is created by nearly everyone these days, and it is impossible to ensure that no content provider is one of the adversaries mentioned earlier.
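
As a minimal sketch of what such an integrity check might look like, the Python fragment below verifies a digital signature over an installation package using the third-party cryptography library; the key handling, package layout, and trust decisions are illustrative assumptions, not Navy-specified procedure.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def package_is_trusted(package: bytes, signature: bytes, authority_pem: bytes) -> bool:
        """Return True only if the package carries a valid RSA signature from the
        certified authority. A valid signature proves who signed the bytes; it
        says nothing about whether those bytes are free of malicious content."""
        public_key = serialization.load_pem_public_key(authority_pem)
        try:
            public_key.verify(signature, package, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False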

Commercial weather services that provide DoD with real-time displays of current conditions are another example. This type of information sometimes is moved into a system of higher classification and hence is a catalyst for the introduction of malicious software. Firewalls can search for known viruses, but it is possible for a sophisticated adversary to design stealth code that the firewall will not detect. The resulting weather "display" might be digitally signed by a certified commercial weather service, but that will do nothing to stop the infection. Once the adversary gets a foot in the door, he usually can bootstrap his way up to gain full system privileges.
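
To see why scanning for known viruses is only a partial defense, consider the following Python sketch of signature-based detection; the digest list and its source are purely illustrative. A single changed byte in the attacker's stealth code yields a digest that matches nothing in the list.

    import hashlib

    # Hypothetical blocklist of SHA-256 digests of known malicious files,
    # as a vendor signature feed might supply it.
    KNOWN_BAD_DIGESTS: set[str] = set()

    def is_known_malicious(content: bytes) -> bool:
        """Flag content only if its digest matches a previously catalogued virus.
        Novel or slightly modified code passes this check untouched."""
        return hashlib.sha256(content).hexdigest() in KNOWN_BAD_DIGESTS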

What about the threat of attack by an "insider"? Authorized access to information depends on a well-defined model that clearly identifies users and their privileges. Formerly, a person made discretionary access decisions based on the need-to-know principle. Today, computers make these decisions under a mandatory (rule-based) system. This drives large classes of people into the same access category and erodes the need-to-know principle. In addition, "insiders" now include many people who previously never would have been given access: maintenance personnel, system administrators, and others. These two changes present the most critical security challenge to date, and they cannot be solved with technical security mechanisms alone. All of this must be folded into the risk equation to establish appropriate policy, procedures, and controls to mitigate the risk.
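
The distinction is easier to see in a small sketch. The Python fragment below applies a purely rule-based (mandatory) read check over a clearance hierarchy; the function and the example calls are illustrative assumptions. Every user cleared to a given level falls into the same access class, which is precisely how need-to-know is lost.

    # Sensitivity levels ordered from least to most restrictive.
    LEVELS = ["UNCLASSIFIED", "CONFIDENTIAL", "SECRET", "TOP SECRET"]

    def may_read(user_clearance: str, data_label: str) -> bool:
        """Mandatory rule: reading is allowed only at or below the user's clearance.
        The rule says nothing about whether this user needs this particular item,
        so need-to-know must be enforced by other policy and procedure."""
        return LEVELS.index(user_clearance) >= LEVELS.index(data_label)

    # Every SECRET-cleared user is treated identically by the rule.
    assert may_read("SECRET", "CONFIDENTIAL") is True
    assert may_read("CONFIDENTIAL", "SECRET") is False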

Risk Management

Evaluation of existing and planned web applications for the Navy Virtual Intranet and IT-21 shows clear evidence of a cyberspace threat. Assessing risk, however, depends on our ability to identify the adversary and assess his capability and motivation to exploit known and unknown vulnerabilities. Addressing a nation-state threat is one circumstance, but the risk arising from a new and evolving set of complex threats (such as terrorists) leaves policymakers much less certain: just who is the enemy, and what can he do?

Because we lack both a clear understanding of the adversary and risk-free technical solutions, we propose to deploy layers of protection to thwart the enemy. Thus, defense in depth has replaced a formal mathematically based approach that insisted that all vulnerabilities be corrected. The old risk-averse approach tended to place very difficult if not unreasonable restrictions on the operating environment and therefore made the introduction of new technology nearly impossible. Such thinking historically has led to gold-plated security systems that frequently are late to the marketplace. Defense in depth, on the other hand, is a risk management approach to security that accepts the chance that an attacker may get through one or two layers of defense, but the probability of the attacker getting through all layers is acceptably low. Driving this probability down can be accomplished only if each layer performs both intrusion detection and firewall functions. Potential intruders must be blocked or detected, and the appropriate action must be taken to deter the attack.
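
The arithmetic behind that claim is simple if the layers fail independently. The short Python calculation below assumes a 10 percent chance that an attacker bypasses any single layer; the figure is illustrative only, and correlated flaws across layers would make the real numbers less favorable.

    # Assumed probability that a given attack bypasses any one layer.
    P_BYPASS_ONE_LAYER = 0.10

    for layers in range(1, 5):
        p_total = P_BYPASS_ONE_LAYER ** layers  # independence assumed
        print(f"{layers} layer(s): P(attacker defeats every layer) = {p_total:.4f}")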

Defense in depth and risk management are two pillars of current Navy policy, but measuring risk requires a well-reasoned approach that incorporates the adversary's identity, intent, and capabilities, as well as the Navy's vulnerabilities. The global village model of where the Internet is taking us leaves little doubt that our adversary will have the requisite exposure to develop the needed skills and experience. So we are left in a conundrum, trying to estimate how much risk we actually are taking. The diversity of possible adversaries only compounds the problem.

Without any solid mechanisms to measure risk, we nevertheless must break down the threat into component parts and evaluate each. Our assumption is that both unclassified-but-sensitive and classified information are widely distributed across the NVI and that there exists a hierarchy of security levels. Typically, this includes the Internet, the unclassified DoD net, the secret-level net, and the top secret net. The attacker could be an outsider (e.g., simply an Internet user) or perhaps a DoD user attempting to gain unauthorized access to classified information. In practice, the risk associated with defending against these two scenarios is markedly different. Unfortunately, certified solutions that allow classified information to cross between networks are not commonly available, so we are tempted to select the "best commercial practice" for keeping networks logically separate. Moreover, within any given enclave, we are unaware of all the external connections (existing, planned, or unplanned) and base our risk assessment on incomplete and changing information. We most often assume that the entire enclave is physically separate from other networks, and we forget that risk accepted by one means risk assumed by all.

This is a policy dilemma for those charged with managing naval information assets. We can expect that both ashore and afloat units will continue to push for the latest application technology, putting the Navy on the cutting edge of information technology products and services. For many of those products, security enhancements and fixes will be added sometime after initial product introduction. Formerly, DoD was slow to adopt these systems, and a level of comfort was built in because of the time it took to deploy capabilities across the Navy's networks. The National Security Agency developed high-assurance point-to-point cryptography for these applications, but the new architectures, characterized by interconnected networks and applications, require much more sophisticated and complex solutions. Today, we leave much less time to develop high-assurance security solutions for commercial off-the-shelf products. Embracing the newest technology, therefore, will put us at some disadvantage relative to a more cautious adversary. The expectation is that the new offensive warfare capabilities brought about by rapidly adopting information technology will offset this vulnerability.

First Steps

The Navy Chief Information Officer has the responsibility and authority to develop the policy, standards, guidance, and strategy needed to create and sustain a living information infrastructure. The operational model includes centralized planning and decentralized execution and establishes integrated product teams (IPTs) that develop information management and information technology plans, architecture, and standards for the Navy. Within the past year, two important IPTs have reported out on information standards that have a critical impact on information protection.

The first IPT produced the Functional Architecture and Concept of Operations for a Navy Virtual Intranet; the second produced the Information Technology Standards Guidance (ITSG). Both are unique in that they embrace best commercial practice and place the Navy solidly on the crest of the Internet wave. These blueprints for security have enterprise-wide characteristics that could just as easily apply to the FBI or to IBM. This is especially true for information protection mechanisms, where security standards have emerged from the Internet community with only historical traces of DoD standards. Nowhere is this more apparent than in the adoption of Public Key Infrastructure standards, which are critical components of services such as confidentiality, integrity, authentication, access control, and nonrepudiation—all essential to enterprise security across the NVI.

The Navy Is Serious about Security

The foundation of the Navy Virtual Intranet's defense-in-depth approach is the reasonable employment of strong authentication practices and access control features (including encryption) through the optimization of readily available commercial off-the-shelf (COTS) products. These requirements are levied within the model for how the layers of defense are organized. Proceeding outward from the user, the zones are defined as end user, ship, fleet, NVI, DoD nets, and the Internet. To separate these zones logically, the Navy has adopted the following mechanisms, illustrated in the sketch after the list:

  • Bastion firewall hosts to protect the boundary between the NVI and the external world
  • Network intrusion filters to protect communities requiring higher isolation
  • Network-access controllers to do basic filtering of data network traffic
  • High-assurance guards to provide hardware separation between classified and unclassified information
  • Multilevel secure proxy servers to enable single terminal access to both unclassified and secret data
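
A minimal Python sketch of how traffic crossing these zone boundaries might be screened appears below; the zone ordering follows the list above, but the port policy and logging decision are invented for illustration and do not reflect actual Navy filtering rules.

    # Zones ordered from most trusted (innermost) to least trusted (outermost).
    ZONES = ["end user", "ship", "fleet", "NVI", "DoD nets", "Internet"]

    # Illustrative policy: only these destination ports may cross inward.
    ALLOWED_INBOUND_PORTS = {25, 443}

    def screen(src_zone: str, dst_zone: str, dst_port: int) -> str:
        """Decide what happens to traffic moving from one zone to another."""
        if ZONES.index(src_zone) <= ZONES.index(dst_zone):
            return "allow"           # outward or same-zone traffic
        if dst_port in ALLOWED_INBOUND_PORTS:
            return "allow-and-log"   # inward traffic is admitted but audited
        return "deny"                # everything else stops at the boundary

    # Example: an Internet host probing a fleet system on an unapproved port.
    assert screen("Internet", "fleet", 23) == "deny"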

None of these solutions is impenetrable on its own, but in various combinations they allow the Navy to achieve security with investments commensurate with the value of the information being protected. This forms the basis of a well-designed and thoroughly reasoned risk management approach. Within and across this infrastructure, the following security requirements must be met:

  • Authentication to confirm identity
  • Access control to prevent unauthorized use/disclosure
  • Data integrity to prevent alteration
  • Data confidentiality to ensure privacy
  • Nonrepudiation to provide proof of both data origin and delivery
  • Availability to ensure timely communications
  • Auditing to record security-relevant events

This will require a set of core services, which the Navy already has established. Examples include the following (a brief illustration appears after the list):

  • Remote access controls across boundaries
  • Local access controls to specific data items
  • Encryption for stored and transmitted data
  • Encryption key and certificate management
  • Real-time auditing for intrusion detection and response
  • Malicious content detection
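
Several of these requirements and services come together in even the simplest monitoring loop. The Python sketch below records security-relevant authentication events and raises an alert after repeated failures; the threshold, event fields, and alerting path are assumptions made for illustration, not an implemented Navy service.

    from collections import Counter
    from datetime import datetime, timezone

    AUDIT_LOG: list[dict] = []    # in practice, an append-only, protected store
    FAILURES = Counter()          # failed login attempts per user
    ALERT_THRESHOLD = 3           # illustrative threshold

    def record_login(user: str, success: bool) -> None:
        """Audit every authentication attempt and flag suspicious patterns."""
        AUDIT_LOG.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "event": "login",
            "user": user,
            "success": success,
        })
        if not success:
            FAILURES[user] += 1
            if FAILURES[user] >= ALERT_THRESHOLD:
                print(f"ALERT: repeated authentication failures for {user}")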

The defense-in-depth approach depends on these and other complementary and redundant security mechanisms, the most critical of which is the Public Key Infrastructure required for managing the encryption keys and digital certificates. Most of the above requirements and core services depend on this infrastructure, which is very complex to design, deploy, and manage. Because the security requirements exist within and across NVI elements, this infrastructure must support multiple services, each of which is implemented in multiple products. Thus, conforming to open standards will be the driving factor in obtaining solutions. This need for open-standards-based security is addressed in another Navy integrated product team, the Information Technology Standards Guidance.

The ITSG identifies standards and provides guidance for applying information technology toward the creation and sustainment of a responsive, user-friendly information management environment. Central to these standards is information protection, which includes system and information security standards and guidance. Compliance will be difficult, as many information systems are developed rapidly and fielded in an evolutionary manner, where traces of legacy systems pose constraints on the course of evolution. This is most evident in security mechanisms, which formerly were designed by DoD and are not readily available in COTS products. Trying to take evolutionary steps in the midst of a revolution in policy (adoption of COTS) presents many dilemmas, the biggest of which is the fact that COTS products have not been given sufficient evaluation and certification by DoD approving authorities. This will lead to some struggles over the pace at which they are adopted. Nevertheless, critical first steps have been taken.

The ITSG specifies security requirements, mechanisms, component pieces, best practices for those components, and recommended implementations. This level of specificity, coupled with the speed of innovation, requires that the ITSG be a living document. For example, electronic mail security is to be achieved through a specific standard, compliance with which depends on a host of other standards that have seen wholesale changes over the past couple of years—and there is no reason to expect this to stop. Fortunately, the ITSG contains a process for the periodic updating of the security standards.

The Navy has set the course to navigate through densely mined harbors. Space and Naval Warfare Systems Command's information warfare office already has excellent efforts under way to bring state-of-the-art security products into the service's mainstream. But information security is a difficult problem, and the Navy will not rely on any single mechanism to provide it. To compromise our security, an adversary must defeat the security mechanisms layer by layer. With proper defense in depth, the risk is minimized that a single flaw will leave an information system vulnerable.

Admiral McConnell is a vice president for Booz Allen & Hamilton on the National Security Team. He had a distinguished career in Naval Intelligence, culminating as Director of the National Security Agency.

Mr. Giorgio is a principal for Booz Allen & Hamilton. A mathematician, cryptographer, and cryptanalyst, he is the only person to have held both posts as chief codemaker and, subsequently, chief codebreaker at the National Security Agency.
