In the coming years, the joint forces will increasingly use artificial intelligence in unmanned systems in the Pacific. Many of the algorithms will be mission-specific and classified, making them potential targets of adversaries who may try to steal or disrupt them.
Protecting classified algorithms in unmanned systems in the Pacific presents a unique set of challenges. Unmanned systems may operate closer to adversaries than manned systems. And with unmanned systems, humans may not be available to detect attacks on the AI and take corrective measures.
However, by adopting a series of rigorous protections across the entire lifecycle of the algorithms—through all stages of development and deployment—and by building in resiliency, the joint forces can help keep classified algorithms in unmanned systems secure.
Protecting The Algorithms During Development
Often, many of the essential elements of a machine learning algorithm will be built in an unclassified environment, to take advantage of the expertise and innovations of the wider organization. The algorithm is then moved into a classified environment, where mission-specific and other classified elements are added.
It’s critical that algorithms be protected while still in the unclassified environment. If an algorithm is stolen, an adversary may figure out its purpose and methods—even if it hasn’t yet been configured for a specific mission—and potentially develop countermeasures.
The joint forces can help protect the algorithms for unmanned systems in their early, unclassified stages through government-run AI/ML factories. Instead of relying on the industrial sector—which may not apply cybersecurity consistently—these factories can impose rigorous security controls through all phases of algorithm development, both unclassified and classified. Many defense organizations are already moving toward this level of security with other types of software factories, and they can achieve the same goals with factories that specifically develop AI and ML.
At the same time, the joint forces can require that vendors adopt a comprehensive set of cybersecurity techniques when developing algorithms. Such measures include real-time threat-sharing, so that companies can take advantage of their collective knowledge, and cyber-as-a-service, so that there is active monitoring of systems and networks rather than just snapshot audits.
Protecting The Algorithms During Transfer And Testing
Extra protection is also needed when transferring algorithms from unclassified to classified environments, and when moving algorithms between the labs doing the development and testing. The longtime practice of moving electronic information from one system to another by people—known as the “sneakernet”—carries a risk that malware could be placed on the laptops, disks, and other items used in the transfers. With advances in technology, there is now more security in an infrastructure that allows direct connections between systems with different security classifications, especially on research and engineering networks.
The joint forces can also take steps to protect classified algorithms for unmanned systems during testing itself. When algorithms are being tested in real-world conditions, adversaries may be able to determine how they’re being used, or even steal them. One solution is to use digital engineering to test the algorithms with modeling and simulation. This not only keeps the algorithms from being exposed to adversaries during testing—it also makes it possible to simulate cyberattacks and model different defenses.
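As a rough illustration of that kind of simulated cyberattack testing, the sketch below scores a model against clean synthetic sensor data and again against the same data perturbed to mimic tampering. It is hypothetical Python; the model object, noise level, and pass/fail threshold are assumptions for illustration, not any program’s actual test criteria.

```python
# Minimal sketch (hypothetical model and thresholds): evaluating an algorithm
# inside a modeling-and-simulation environment, first on clean simulated data,
# then on the same data perturbed to mimic a cyberattack on the sensor feed.
import numpy as np

def evaluate(model, samples, labels):
    """Fraction of simulated samples the model classifies correctly."""
    predictions = [model.predict(s) for s in samples]
    return float(np.mean([p == y for p, y in zip(predictions, labels)]))

def simulate_attack(samples, noise_scale=0.25, seed=0):
    """Inject structured noise into simulated sensor returns to mimic tampering."""
    rng = np.random.default_rng(seed)
    return [s + rng.normal(0.0, noise_scale, size=s.shape) for s in samples]

def run_resilience_test(model, samples, labels, max_drop=0.10):
    baseline = evaluate(model, samples, labels)
    attacked = evaluate(model, simulate_attack(samples), labels)
    # Flag the model if simulated tampering degrades accuracy beyond the threshold.
    return {"baseline": baseline, "under_attack": attacked,
            "resilient": (baseline - attacked) <= max_drop}
```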
Protecting The Algorithms During Deployment
Classified algorithms require particularly rigorous protections once they’re deployed in unmanned systems. If a cyberattack corrupts the data being analyzed by the algorithms—or compromises the AI/ML systems themselves—humans may not be immediately aware that something is wrong.
One way of reducing the risk is to develop automated responses to data drift or model drift. If the data coming in from sensors is significantly different from what might be expected—potentially indicating a cyberattack—the AI/ML system might automatically shut down, or switch to data from other types of sensors. There is both an art and a science to identifying patterns in the data that might suggest a cyberattack, and establishing the thresholds that will trigger the automated responses.
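To make the threshold logic concrete, the hypothetical Python below watches a window of incoming sensor values for a shift away from a trusted reference profile and selects an automated response. The reference statistics, thresholds, and response names are assumptions for illustration, not fielded values.

```python
# Minimal sketch, with assumed reference values and thresholds: detect drift in
# incoming sensor data and trigger an automated response when the shift is large
# enough to suggest tampering or a cyberattack.
import numpy as np

REFERENCE_MEAN = 0.0     # learned from trusted training data (assumed value)
REFERENCE_STD = 1.0      # assumed value
DRIFT_THRESHOLD = 4.0    # standard deviations considered anomalous (assumed)

def drift_score(window: np.ndarray) -> float:
    """How far the recent sensor window has shifted from the reference profile."""
    return abs(float(window.mean()) - REFERENCE_MEAN) / (REFERENCE_STD + 1e-9)

def automated_response(window: np.ndarray) -> str:
    score = drift_score(window)
    if score > 2 * DRIFT_THRESHOLD:
        return "shut_down_inference"            # stop trusting the model entirely
    if score > DRIFT_THRESHOLD:
        return "fail_over_to_alternate_sensor"  # e.g., switch to another sensor type
    return "continue"
```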
Another step is to make it more difficult for a cyberattack on one AI/ML system on an unmanned vehicle to spread to other components of the vehicle—for example, from algorithms analyzing radar data to ones analyzing video feeds or signals intelligence. Here, the solution is to create a separate security boundary for each AI/ML system on the unmanned platform. This makes it possible to more tightly control the flow of data from one system to another, and to cut the connections between systems, if necessary, to keep a cyberattack from spreading.
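One way to picture such a boundary is a small onboard broker that passes data only along explicitly approved flows and can sever a system’s links on demand. The sketch below is illustrative Python with invented system names, not a description of any particular platform’s architecture.

```python
# Minimal sketch (invented system names): an onboard broker that enforces a
# separate boundary around each AI/ML system, permits only allowlisted flows,
# and can cut every link to a system suspected of compromise.
from collections import defaultdict

class OnboardDataBroker:
    def __init__(self):
        # Only these directed flows are permitted between onboard systems.
        self.allowed_flows = {("radar_ai", "fusion_ai"), ("video_ai", "fusion_ai")}
        self.inboxes = defaultdict(list)  # stands in for real inter-system links

    def send(self, source: str, destination: str, payload: bytes) -> bool:
        if (source, destination) not in self.allowed_flows:
            return False                  # flow not on the allowlist: drop it
        self.inboxes[destination].append((source, payload))
        return True

    def quarantine(self, system: str) -> None:
        """Cut every connection to and from a system suspected of compromise."""
        self.allowed_flows = {f for f in self.allowed_flows if system not in f}
```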
Additional steps can help protect classified algorithms in the event an unmanned vehicle is captured by an adversary. Along with anti-tamper measures—which can make it difficult for an adversary to access and possibly reverse engineer a captured AI/ML system—the joint forces can apply an approach known as disaggregation.
An AI/ML system—one that analyzes radar data, for example—typically has a complex collection of mission algorithms. With disaggregation, no single unmanned vehicle in a mission has all the algorithms. Each does just a portion of the analysis and sends its piece of the puzzle to a central processing location. The goal is that even if adversaries can overcome the anti-tamper measures on a captured AI/ML system, they won’t be able to glean enough information to unlock the secrets of the system and its algorithms.
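A toy version of disaggregation might look like the hypothetical Python below, where each vehicle runs only one stage of the pipeline and only the central processing location holds the step that puts the pieces together. The stage names and logic are invented for illustration.

```python
# Minimal sketch (invented stages): each unmanned vehicle carries only a slice
# of the mission algorithms and reports a partial product; the full picture
# exists only where the partial products are fused centrally.
def uv_stage_detect(raw_returns):
    """Runs on vehicle 1: finds candidate detections, nothing more."""
    return [r for r in raw_returns if r["power"] > 0.8]

def uv_stage_classify(detections):
    """Runs on vehicle 2: coarse classification of the detections it receives."""
    return [{"id": d["id"], "class": "vessel" if d["power"] > 0.9 else "clutter"}
            for d in detections]

def central_fusion(partials):
    """Runs at the central processing location, where all the pieces come together."""
    merged = {}
    for partial in partials:
        for item in partial:
            merged.setdefault(item["id"], {}).update(item)
    return merged
```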
Protecting The Algorithms With Resiliency
If cyber protections do fail, the classified algorithms on an unmanned vehicle need to be replaced as quickly as possible with new and better algorithms to maintain the mission. However, with conventional approaches, algorithms can’t easily be switched in and out—often the entire AI/ML system has to be rearchitected, which can take months. In addition, algorithms and other components in a system are often so interdependent that fixing one problem—such as switching out an algorithm—can create other, unexpected problems in the system, leading to rework and more delays.
Once again, the modular approach provides an advantage. Using open architectures and other open techniques, the joint forces can build AI/ML systems that make it possible to quickly plug-and-play new algorithms and other components. In addition to helping maintain the mission, this has other benefits. AI/ML developers can regularly tweak the classified algorithms and replace them proactively—before any cyberattack—to make it difficult for adversaries to build up information on them. Plug-and-play also makes repurposing classified algorithms from one mission to the next easier and more secure.
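As a sketch of what that plug-and-play boundary could look like, the hypothetical Python below defines a common algorithm contract and a slot that can hot-swap one model for another. The interface is an assumption made for illustration, not a reference to any specific open architecture standard.

```python
# Minimal sketch (illustrative interface): a shared contract for mission
# algorithms plus a slot that lets a fielded system swap in a replacement
# algorithm without rearchitecting the rest of the system.
from typing import Protocol

class MissionAlgorithm(Protocol):
    name: str
    version: str
    def analyze(self, frame: dict) -> dict: ...

class AlgorithmSlot:
    """Holds whichever algorithm currently fills a mission role (e.g., radar analysis)."""
    def __init__(self, algorithm: MissionAlgorithm):
        self._algorithm = algorithm

    def analyze(self, frame: dict) -> dict:
        return self._algorithm.analyze(frame)

    def hot_swap(self, replacement: MissionAlgorithm) -> None:
        # Because every algorithm honors the same interface, a compromised or
        # aging model can be replaced without touching the rest of the system.
        self._algorithm = replacement
```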
Protecting classified algorithms on unmanned systems in the Pacific presents its own set of challenges. But by constructing strong cyber defenses throughout the algorithms’ entire lifecycle, and by emphasizing resiliency, the joint forces can take steps to meet those challenges.
Jandria Alexander ([email protected]) is a nationally recognized cybersecurity expert and a vice president at Booz Allen who leads the firm’s business for NAVSEA and S&T, including unmanned systems, resilient platform and weapon systems, data science, and enterprise digital transformation strategy and solutions for Navy clients.
Mike Morgan ([email protected]) is a principal at Booz Allen who leads the firm’s NAVAIR line of business. He has over 20 years of experience supporting NAVAIR programs with a focus on systems development and cybersecurity for unmanned systems and C4ISR solutions.
BoozAllen.com/defense