Suppose that during a battle at sea, with missiles flying, an adversary is trying to disrupt a ship's sensor-fusion system so that it mischaracterizes what the sensors are seeing. With current cybersecurity methods, it could take minutes or even hours for analysts to determine that the system is being attacked. And by then it might be too late.
However, a new approach that uses AI could, for example, send an alert to the combat systems officer—in real time—that there is a high probability that an attack on the sensor-fusion system is underway. Armed with that information, the CSO could limit any possible damage by quickly shutting down the part of the system that looks like it might be under attack.
With this new wartime approach, AI continuously monitors system activity and looks for patterns of attack that analysts have established beforehand using historical data, modeling and simulation, and other sources. If the AI spots activity that matches an attack pattern, it sends an alert.
The idea is to catch even the possibility of a cyberattack early enough in battle, when supervisors might have only seconds, not minutes or hours, to stop it. To spot what may be a cyberattack, the AI might need to see only the first few steps an attacker takes, perhaps before any damage is done. That may be enough for the AI to provide the probability of an attack. Even if supervisors aren't sure yet whether a cyberattack is actually underway, getting alerted and knowing the probability could help them keep the ship in the fight.
While this approach may not be able to protect every system on a ship—the combined networks typically generate too much data for today’s AI—it could be applied to key systems that might be likely targets of cyberattacks during a battle, such as combat, navigation and propulsion. And it could help defend those systems when they may be needed most.
Uncovering Patterns of Attack
The first step, long before a battle, is to establish possible ways that critical systems might be attacked. Information could come from several different sources, including data on how attacks on similar types of systems have played out in the past, and data generated through modeling and simulation. In addition, an emerging type of artificial intelligence, the "AI agent," can play a valuable role.
AI agents, sophisticated software programs, try to achieve specific goals and get rewarded when they do. An AI agent would essentially take on the point of view of a cyberattacker, for example, someone trying to disrupt a ship's propulsion system.
Using trial and error, AI agents test out random possible actions. The closer those actions get the AI agents to their goals, the higher their score. If the actions move the AI agents away from their goals, the score drops.
With each iteration, the AI agents learn more about what works and what doesn’t, and get closer and closer to the most effective course of action. Essentially, the AI agents reverse-engineer how cyberattacks might unfold.
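To make that reward-driven trial and error concrete, here is a minimal sketch, in Python, of the kind of reinforcement-learning loop such an AI agent could run against a simulated system. The states, actions, transitions, and reward values below are invented purely for illustration; a real effort would use a far richer model of the ship's networks.

```python
import random

# A toy "attack surface" for illustration only: states are stages an intruder
# might reach in a hypothetical propulsion-control network, and actions are
# generic moves. These names and rewards are assumptions, not real system details.
STATES = ["outside", "business_lan", "control_lan", "propulsion_controller"]
ACTIONS = ["probe", "pivot", "exploit"]

# Hand-built rules for the toy environment: (state, action) -> (next_state, reward)
TRANSITIONS = {
    ("outside", "probe"): ("business_lan", 1.0),
    ("business_lan", "pivot"): ("control_lan", 2.0),
    ("control_lan", "exploit"): ("propulsion_controller", 10.0),  # the agent's goal
}

def step(state, action):
    """Return (next_state, reward); unlisted moves go nowhere and cost a little."""
    return TRANSITIONS.get((state, action), (state, -0.1))

# Tabular Q-learning: actions that move the agent toward its goal score higher,
# actions that move it away score lower.
q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    state = "outside"
    for _ in range(10):
        # Trial and error: mostly follow the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
        if state == "propulsion_controller":
            break

# The learned policy reads out as a candidate attack pattern that defenders
# can then watch for in live system activity.
state = "outside"
pattern = []
while state != "propulsion_controller" and len(pattern) < 5:
    action = max(ACTIONS, key=lambda a: q[(state, a)])
    pattern.append((state, action))
    state, _ = step(state, action)
print(pattern)
```

In this toy run, the readout converges on probe, then pivot, then exploit, which is exactly the kind of step-by-step sequence defenders would want to recognize early.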
Once potential patterns of attack have been established, another type of AI—machine learning—can look for them in current system activity, and alert supervisors in real time.
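A rough sketch of that detection step might look like the following, which trains a classifier on synthetic, labeled telemetry windows and then turns each live window into an attack probability. The features, the training data, and the 70 percent alert threshold are all assumptions made for illustration, not details of any fielded system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative only: each row summarizes one window of system telemetry as three
# made-up features (message rate, failed-auth count, config changes). Label 1
# marks windows that match an attack pattern established beforehand through
# historical data, modeling and simulation, or AI agents.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[100, 1, 0], scale=[10, 1, 0.5], size=(500, 3))
attack = rng.normal(loc=[160, 8, 3], scale=[15, 2, 1.0], size=(100, 3))
X = np.vstack([normal, attack])
y = np.array([0] * len(normal) + [1] * len(attack))

# Train offline, long before the battle.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

ALERT_THRESHOLD = 0.7  # assumed threshold; tuning it is a policy decision

def score_window(features):
    """Score one live telemetry window and return (probability, alert?)."""
    prob = model.predict_proba(np.asarray(features).reshape(1, -1))[0, 1]
    return prob, prob >= ALERT_THRESHOLD

# At run time, each new window is scored in well under a second.
prob, alert = score_window([155, 7, 2])
print(f"attack probability {prob:.0%}, alert={alert}")
```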
Factoring in the Impact of an Attack
Another piece of information that could help supervisors as they make their decision: the likely impact of an attack. Modeling and simulation might determine, for example, that a successful attack would probably have minimal impact, because of backup systems or other protections.
So, say that during a battle, the AI finds that there is a 50-50 chance that an adversary has infiltrated a ship's command-and-control system. Supervisors could shut down part of the system to keep the attack from spreading, though that might limit some aspects of command and control. However, if supervisors are faced with that 50 percent chance along with a low probable impact, they may decide it is worth the risk to keep the command-and-control system fully operational.
Both pieces of information, the probability that an attack is underway and the probable impact if it succeeds, would be presented to supervisors in seconds, on dashboards on their computers.
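One simple way such a dashboard readout could be assembled is sketched below, purely as an illustration. Combining probability and impact into a single "expected risk" number by multiplying them is an assumption of this sketch, not a prescribed scoring method; the final call would remain a command decision.

```python
from dataclasses import dataclass

@dataclass
class AttackAssessment:
    system: str
    attack_probability: float   # from the pattern-matching model, 0 to 1
    probable_impact: float      # from modeling and simulation, 0 (none) to 1 (severe)

    @property
    def expected_risk(self) -> float:
        # One simple way to combine the two figures; real weighting would be
        # set by doctrine and the commander, not hard-coded here.
        return self.attack_probability * self.probable_impact

def dashboard_line(a: AttackAssessment) -> str:
    """Render the two figures the way a supervisor might see them."""
    return (f"{a.system}: attack probability {a.attack_probability:.0%}, "
            f"probable impact {a.probable_impact:.0%}, "
            f"expected risk {a.expected_risk:.0%}")

# The 50-50 command-and-control example from the text, with a low assumed impact.
print(dashboard_line(AttackAssessment("command-and-control", 0.50, 0.15)))
```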
Defending Against Insider Attacks
In addition to quickly detecting intrusions from the outside, this new wartime cybersecurity approach could be used to defend against insider threats. With attacks from the inside, the focus would be on patterns of user behavior.
Such patterns may not be readily apparent; there are any number of ways, for example, that a crew member who is a foreign operative might try to disrupt a navigation system. And in the heat of battle, a ship's various networks might be used in new and perhaps creative ways as operators adjust to rapidly changing conditions. Key officers, such as the navigator, chief engineer, or tactical action officer, may not be able to tell whether someone is trying to damage the ship or save it.
An alert might tell a supervisor that there is an 80 percent chance someone at a particular console is trying to disrupt the navigation system. The supervisor could then decide whether to order a watchstander to quickly investigate.
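Because insider patterns may not be known in advance, one plausible sketch, shown below, swaps in an unsupervised anomaly detector (an Isolation Forest) trained on normal console behavior rather than on pre-established attack patterns. The behavioral features, the baseline data, and the alerting rule are invented for illustration only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustration only: each row summarizes one watch period at one console as
# made-up behavioral features (commands per minute, unusual-command count,
# off-role system touches). Real features would come from shipboard logs.
rng = np.random.default_rng(1)
baseline = rng.normal(loc=[20, 0.5, 0.2], scale=[5, 0.5, 0.3], size=(1000, 3))

# Fit on normal crew behavior collected before the battle.
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def console_alert(features):
    """Score one console's current behavior; negative scores are flagged as anomalous."""
    score = detector.decision_function(np.asarray(features).reshape(1, -1))[0]
    return score, score < 0.0

# A console suddenly issuing many unusual commands against the navigation system.
score, alert = console_alert([55, 6, 4])
print(f"anomaly score {score:.2f}, alert={alert}")
```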
With this new approach to wartime cybersecurity, defense organizations can improve their ability to stop both external and internal cyberattacks in the heat of battle—when supervisors might have only a few seconds to act.
Vice Admiral Roy Kitchener ([email protected]), a senior executive advisor at Booz Allen, served as Commander, Naval Surface Forces/Naval Surface Force, U.S. Pacific Fleet. During his 39 years of service, his commands included destroyers, a cruiser, and an expeditionary strike group.
Captain Alan MacQuoid ([email protected]) is a leader in weapon systems and critical infrastructure cyber risk assessment and mitigation efforts. He has over 35 years of experience integrating kinetic and non-kinetic effects with emphasis on cyber across all domains of warfare.
Kevin Contreras ([email protected]) leads Booz Allen’s delivery of digital solutions for the rapid modeling, simulation, and experimentation of multi-domain concepts for DoD and global defense clients.