A group of five unmanned surface vehicles in the South China Sea spots a contingent of enemy vessels but can't get that information back to operators: it's a contested environment, and satellite communications in the area are jammed. The UVs, working together, determine that one of them needs to leave the area to send a message back.
They decide among themselves which of the five should go, based on which has the best information and the best chance of sending the message without being detected. The chosen UV leaves the area, and figures out for itself when conditions are right to send the message, and the safest, most efficient way of sending it.
The artificial intelligence that can provide UVs with these and other advanced autonomous capabilities
will soon be available. But there's a problem. Such sophisticated AI requires computers that are too big, and draw too much power, to fit on UVs.
What the AI needs is a way to lighten its workload, so that onboard computers can be smaller and use less power. And two new approaches are now able to do that, by making computers—and the AI itself—mimic how the brain operates.
One approach is an emerging new design for computers, allowing them to process and store information in the same location, similar to the way the brain does, rather than in two different locations. With the second new approach, the AI reaches conclusions with less data, through inference, comparable to how we can identify an object even if we have only a partial view of it, by filling in the blanks.
Currently, small, low-power computers on UVs can only support "narrow AI," good for a few basic activities, such as surveillance and reconnaissance. But with the two "brain-inspired" approaches, even highly sophisticated AI can run on the smaller computers. This makes it technologically feasible for the joint forces to bring high-level autonomy to unmanned surface, undersea, and aerial vehicles in the Indo-Pacific.
Beyond Narrow AI
With narrow AI, unmanned vehicles are not intelligent enough to act autonomously in a number of important ways. For example, they can’t independently determine whether something they’ve spotted is important enough to alert an operator—currently, UVs check in at scheduled times. They don’t always know how to use their fuel efficiently when tracking contacts, or how to conduct ISR without being detected. They typically can’t autonomously distinguish between combatants and non-combatants, and don’t know how to apply rules of engagement. They have only limited situational awareness.
UVs theoretically could tap into sophisticated AI by connecting to the cloud—but that’s not a workable option. UVs can’t count on satellite communications in a contested environment. And power and bandwidth constraints would limit back-and-forth with the cloud, even in peacetime. So, the AI has to be able to run onboard.
Mimicking The Brain
The two brain-inspired approaches don’t make the AI smarter—AI is already gaining the ability to provide many aspects of advanced autonomy. What the approaches do is simply make it possible for the AI to run on the small, low-power edge computers that UVs have to rely on.
One of the approaches actually changes how computers work. Today's computers have separate processing cores and memory cores. This means that for each computation, the processor fetches the data it needs from memory, processes it, and writes the result back. That continuous back-and-forth makes for a heavy workload, particularly for AI that does billions of computations a second. While the back-and-forth may not be a problem on large, powerful computers, such as those on conventional ships, it can quickly overwhelm a UV's edge computer.
Our brains operate in a different way. We’re able to hold much of our memory in the same place that we process information, which allows even our most complex thinking to be almost instantaneous—a definite evolutionary advantage. Mimicking the brain’s design, AI researchers are developing computers that put processing and memory in the same place. This makes the workload of even sophisticated AI manageable on a UV computer.
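The cost of that back-and-forth can be made concrete with a toy model. The sketch below, a hypothetical illustration rather than a model of any real hardware, counts data transfers for a simple dot product (the core operation of neural-network AI) under two designs: a conventional one, where every operand crosses the bus between memory and processor, and a compute-in-memory one, where the weights never leave the place they are stored. All function names are invented for this example.

```python
# Illustrative sketch only: counting data transfers in a conventional
# separate-memory design versus a compute-in-memory design. Names and
# the transfer-counting model are hypothetical, for illustration.

def dot_separate_memory(weights, inputs):
    """Each multiply fetches an operand pair across the memory bus,
    so transfers grow with the size of the computation."""
    transfers = 0
    total = 0.0
    for w, x in zip(weights, inputs):
        transfers += 2          # fetch the weight and the input from memory
        total += w * x          # compute in the separate processor
    transfers += 1              # write the result back to memory
    return total, transfers

def dot_in_memory(weights, inputs):
    """Compute happens where the weights are stored, so only the
    inputs stream in and only the single result comes out."""
    transfers = len(inputs) + 1
    total = sum(w * x for w, x in zip(weights, inputs))
    return total, transfers

if __name__ == "__main__":
    w = [0.5, -1.0, 2.0, 0.25]
    x = [4.0, 3.0, 1.0, 8.0]
    result_a, moves_a = dot_separate_memory(w, x)
    result_b, moves_b = dot_in_memory(w, x)
    assert result_a == result_b   # same answer either way
    print(moves_a, moves_b)       # 9 transfers vs. 5 in this tiny case
```

The answers are identical; only the data movement differs. For AI doing billions of such operations per second, that difference is what decides whether the workload fits on a small edge computer.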
Training AI To Use Inference
Another way to reduce the workload is to simply use less data. AI researchers are achieving this by mimicking how the brain uses inference to make sense of the world with limited information. For example, when we're driving, we can anticipate the actions of other drivers from subtle cues, such as a car speeding up before changing lanes, or a car edging to the left as it approaches an intersection in advance of making a right turn. We've seen these scenarios so many times that we don't need any additional information to adjust our driving. Our ability to make inferences and predictions from just a few cues is one reason why we can (usually) drive safely on auto-pilot, our thoughts elsewhere.
By training AI to infer from a few cues, researchers are greatly reducing the amount of data, and power, the AI needs. For example, the AI might be provided with the "pattern of life" of an adversary's vessels in a particular area. If the UV's sensors pick up an anomaly, such as a vessel that's in an unexpected location or is behaving in an unusual way, those may be cues that will enable the AI to infer the vessel's intention. The AI doesn't have to piece together every detail about the vessel, or sort through every potential action it might take. By picking out only the relevant cues, the AI could reach its conclusions with just a small fraction of possible computations, making it workable on a small edge UV computer. And the AI would be just as accurate as AI running on a large, powerful computer aboard a destroyer.
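A stripped-down version of the pattern-of-life idea can be sketched in a few lines. The code below is purely illustrative, not an operational model: the vessel class, patrol box, and speed thresholds are invented assumptions. The point it demonstrates is that comparing a contact against a stored pattern yields a short list of anomaly cues, and computation is spent only on contacts that produce cues, rather than on every detail of every contact.

```python
# Hypothetical sketch of cue-based inference against a stored
# "pattern of life." All values and names are illustrative assumptions.

PATTERN_OF_LIFE = {
    # vessel class -> ((lat_lo, lat_hi, lon_lo, lon_hi), (min_kts, max_kts))
    "patrol": ((10.0, 12.0, 114.0, 116.0), (8.0, 14.0)),
}

def anomaly_cues(vessel_class, lat, lon, speed_kts):
    """Return only the cues that deviate from the expected pattern;
    an empty list means the contact matches normal behavior."""
    (lat_lo, lat_hi, lon_lo, lon_hi), (spd_lo, spd_hi) = PATTERN_OF_LIFE[vessel_class]
    cues = []
    if not (lat_lo <= lat <= lat_hi and lon_lo <= lon <= lon_hi):
        cues.append("outside usual operating area")
    if not (spd_lo <= speed_kts <= spd_hi):
        cues.append("unusual speed")
    return cues

# A contact inside its usual box at a normal speed yields no cues,
# so the AI spends no further computation on it; only the contact
# that stands out gets deeper analysis.
assert anomaly_cues("patrol", 11.0, 115.0, 10.0) == []
assert anomaly_cues("patrol", 13.5, 115.0, 22.0) == [
    "outside usual operating area", "unusual speed"]
```

Real systems would learn these patterns from data rather than hard-code them, but the economy is the same: a handful of cues stands in for an exhaustive analysis, which is what makes the workload small enough for an edge computer.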
Training AI to use inference is both an art and a science. The ability to select the right cues, and fully understand their implications, requires extremely deep domain and mission knowledge. At the same time, AI experts need to know how to apply that knowledge to achieve autonomy.
If unmanned vehicles in the Indo-Pacific and elsewhere are to gain the level of autonomy required by the joint forces, AI-enabled edge computing needs to be rethought. The human brain can provide the inspiration.
Ayodeji Coker ([email protected]) is a former senior leader at the Office of Naval Research who is now an executive advisor at Booz Allen, where he leads intelligent autonomous systems strategic initiatives for the Navy. His roles at ONR included portfolio manager for autonomy, and leader of the Navy’s intelligent autonomous systems strategy.
Jandria Alexander ([email protected]) is a vice president at Booz Allen who leads the firm’s business for NAVSEA and S&T, including unmanned systems, resilient platform and weapon systems, data science, and enterprise digital transformation strategy and solutions for Navy clients.