War has broken out in the Pacific, and our adversaries are using everything in their arsenal to disrupt our satellite communications and surveillance—to strike us blind. They’re trying to jam the signals between our satellites and ground stations. They’re trying to hijack the satellites by sending commands that appear to come from our ground stations but actually come from their own. They’re aiming missiles and lasers at the satellites, and even using their own satellites to take ours out of commission.
Ideally, our satellites would be able to think for themselves, so they could detect and defend against such attacks almost instantly, without waiting for operators at ground stations to analyze the threats and then determine possible courses of action. Having such a “brain” on board each satellite would be particularly valuable in the coming years, when there may be mesh networks of thousands of small DoD satellites—far too many for ground stations to fully monitor.
Defense organizations may soon have the ability to equip their satellites with this level of intelligence. Large language models, a form of generative AI, can develop a contextual understanding of a situation and—mimicking the human brain—make sophisticated inferences and suggest a complex set of actions.
ONBOARD INTELLIGENCE
For example, based on an awareness that war has broken out, or may be about to, a large language model might infer that certain seemingly innocent radio signals actually indicate a probable attack. The model might then execute the defensive measures it has determined have the highest probability of success, taking into consideration not just the adversary’s capabilities, but also how other satellites in the network are currently faring against similar attacks.
And it would do all this without needing to rely on ground stations to detect and analyze the signals, recognize the threat, and then work out how best to respond. Importantly, any actions suggested by large language models would be constrained by human-defined guardrails based on mission context.
Attacks on satellites—whether by cyber, missile, laser or an enemy satellite—can happen so quickly that instructions from ground stations may not arrive in time. A satellite with a large language model doesn’t have to wait for instructions from a cybersecurity expert on the ground, for example. The large language model on the satellite is the cybersecurity expert.
In a sense, a large language model would be like having a team of human operators on each satellite, performing a number of specialized actions at once—such as analyzing data on an attack, formulating a response, and communicating with other satellites in the network.
And a large language model’s response to an attack can be highly sophisticated. For example, if an adversary fires a ground-based missile at a satellite, the model on the satellite might quickly figure out how to outmaneuver it.
Or, a model might recognize that an enemy satellite is moving into a position that suggests it is about to attack. The model could then determine the best defensive measures—even anticipating how the enemy satellite might respond to those actions, and plotting out moves to outwit it, like a chess game.
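To make the chess analogy concrete, this kind of move-and-countermove reasoning can be sketched as a classic minimax search. Everything below is a hypothetical stand-in: the state is a single distance value, and the moves and scoring function are toy placeholders for real orbital dynamics and threat models.

```python
# A minimal sketch of chess-like maneuver planning: choose the move that
# leaves us best off even against the adversary's best counter-move.
# All states, moves, and scores here are hypothetical simplifications.

def minimax(state, depth, our_turn, moves, score, step):
    """Return (value, move) maximizing our worst-case outcome."""
    if depth == 0:
        return score(state), None
    best_val, best_move = (float("-inf"), None) if our_turn else (float("inf"), None)
    for move in moves:
        val, _ = minimax(step(state, move, our_turn), depth - 1,
                         not our_turn, moves, score, step)
        if our_turn and val > best_val:
            best_val, best_move = val, move
        elif not our_turn and val < best_val:
            best_val, best_move = val, move
    return best_val, best_move

# Toy scenario: the state is our distance from a threat. We want it large;
# the adversary's "moves" close the gap. The numbers are illustrative only.
moves = [-2, 0, 3]                                    # hypothetical delta-v options
step = lambda d, m, ours: d + (m if ours else -abs(m))
score = lambda d: d

value, move = minimax(10, 2, True, moves, score, step)
```

In this toy setup, the search picks the maneuver that maximizes distance even assuming the adversary then makes its most aggressive counter-move.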
SOPHISTICATED COLLABORATION
With mesh networks, satellites connect with each other through an “internet in space,” and can communicate even if signals from the ground are disrupted. It’s similar to the way Uber works. Each Uber driver serves as a node in a network, providing information to help create a common operating picture. And what one satellite sees, they all see.
If a satellite in the network were attacked, its large language model could not only determine the best defense, it could pass that information along to all of the other satellites. For example, say an adversary jams the ground signals going to a group of satellites. Large language models on those satellites might detect the attack and quickly switch communications to different frequencies, with each model choosing the frequency it predicts will work best.
If a satellite finds a successful frequency, it can communicate that to the others in the immediate group under attack—as well as to the thousands of other satellites in the network. If one of the other satellites picks a bad frequency and is cut off from the ground, it can communicate that to the group as well. The large language models in a mesh network combine what they’ve learned to figure out what works and what doesn’t, as teams of human operators would. With each attack, the network of large language models gets smarter about defense.
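The frequency-sharing behavior described above can be sketched in a few lines. This is a simplified illustration, not an actual protocol: the node class, the shared good/bad frequency sets, and the example frequencies are all hypothetical.

```python
# A minimal sketch of mesh-wide frequency learning: each node records what
# peers report, avoids known-jammed frequencies, and prefers proven ones.

class SatelliteNode:
    def __init__(self, name):
        self.name = name
        self.known_good = set()   # frequencies a peer reported as working
        self.known_bad = set()    # frequencies a peer reported as jammed

    def report(self, freq, worked):
        """Record a result shared across the mesh."""
        (self.known_good if worked else self.known_bad).add(freq)

    def choose_frequency(self, candidates):
        """Never pick a known-bad frequency; prefer one a peer has proven."""
        usable = [f for f in candidates if f not in self.known_bad]
        proven = [f for f in usable if f in self.known_good]
        return (proven or usable)[0]

# Hypothetical scenario: one satellite finds 7.5 GHz works, another finds
# 8.1 GHz jammed, and both results are broadcast to every node in the mesh.
mesh = [SatelliteNode(n) for n in "ABC"]
for node in mesh:
    node.report(7.5, worked=True)
    node.report(8.1, worked=False)
```

With this shared state, a third satellite choosing among 8.1, 7.5, and 9.0 GHz would skip the jammed frequency and select the one already proven to work.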
Just as important, the large language models in the mesh network would work together for the greater good—that is, taking defensive actions not just to protect themselves, but to make sure the satellite constellation as a whole is doing what it needs to do. This might even mean that some satellites would sacrifice themselves—moving into the path of incoming missiles, for example—to protect the larger network.
AWARENESS OF CONTEXT
One of the strengths of large language models, compared to conventional AI, is that they have a much greater ability to understand context. Say, for example, a model learns that the network is under attack from an adversary, and then gets commands from the ground that don’t reflect the conflict, such as an order to observe a region far from the war zone. The model might then take steps to determine whether it is being hacked—it could, for example, query other satellites about whether they are getting the same commands. It might also alert operators at the ground station of the possibility of an insider threat.
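One simple way to picture the peer cross-check described above is a quorum test: before acting on a suspicious command, the satellite counts how many peers independently received the same one. The function name and quorum threshold below are hypothetical simplifications of what would, in practice, be a far richer trust evaluation.

```python
# A minimal sketch of cross-checking a suspicious command against peers:
# trust it only if enough other satellites received the same command.

def command_is_trusted(command, peer_commands, quorum=0.5):
    """Return True if at least `quorum` of peers got the same command."""
    if not peer_commands:
        return False  # no corroboration available; escalate to operators
    matches = sum(1 for c in peer_commands if c == command)
    return matches / len(peer_commands) >= quorum
```

A command that fails the check would not simply be executed or dropped; as described above, the model would flag it to ground operators as a possible hack or insider threat.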
Having hundreds or thousands of large language models in a network would help make sure any single model stays accurate and on task. If a model went rogue, so to speak, or was compromised by an adversary, the other models in the network would likely recognize that it was deviating from the group—and possibly quarantine it. They might designate another satellite to take over its role, and perhaps recommend that ground control shut it down.
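The quarantine idea above can be sketched as a majority vote over proposed actions, with any node that deviates from the group consensus flagged for review. The node names and actions are hypothetical; a real system would compare far richer behavior signatures than a single string.

```python
# A minimal sketch of rogue-node detection: compare each model's proposed
# action against the group consensus and flag the outliers.

from collections import Counter

def find_rogue_nodes(proposals):
    """Return the set of nodes whose action deviates from the majority."""
    consensus, _ = Counter(proposals.values()).most_common(1)[0]
    return {node for node, action in proposals.items() if action != consensus}
```

A flagged node would then be quarantined, its role reassigned to another satellite, and a shutdown recommendation sent to ground control, as described above.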
It would be no more difficult or expensive to equip satellites with large language models than with conventional forms of AI. Large language models are offering new opportunities for defense organizations in a variety of applications—including in protecting satellite communications and surveillance from crippling attacks.
LT. GEN. TREY OBERING ([email protected]) is a Senior Executive Advisor at Booz Allen, specializing in space and missile defense. He is the former Director of the Missile Defense Agency.
COLLIN PARAN ([email protected]) is an AI architect at Booz Allen who builds large language models for a variety of applications for the Space Force, Navy, Army and Air Force.
Booz Allen subject-matter experts Evan Montgomery-Recht, Timothy Snipes and Karis Courey contributed to this article.