In 2015, I was project officer for the X-47B Unmanned Combat Air System (UCAS), a test-and-demonstration program developed to explore using autonomy in some of the most difficult tasks in aviation, including carrier landings and aerial refueling. I joined the test team shortly after it made history by achieving the first autonomous carrier landing and helped lead testing on subsequent rounds of carrier operations, expanding the envelope during more challenging conditions on the ship. I was also one of the two mission operators “flying” the air vehicle (AV) when the team achieved the first autonomous aerial refueling.
As mission operators, we simply sent commands to the AV from our control room and monitored execution. For instance, once positioned on the runway, we sent the “take off” command, and the AV autonomously increased thrust, released the brakes, tracked down the centerline of the runway, lifted off after reaching flyaway airspeed, raised the landing gear, and flew along a precise sequence of waypoints in a programmed departure route.
Many months into the test program and after dozens of successful test flights, the test team completed routine preflight checks as the AV sat in position just short of the runway. With checks complete, we sent the command to “taxi into position” on the runway, but the AV did not respond. Thinking we had entered the wrong command, we sent it again, but still no response. Our monitors confirmed that the AV had released brakes and added power as usual, but the jet did not move. The ground safety crew visually inspected the aircraft but saw nothing unusual. We attempted the command three more times, and still no movement. Our test team—the best and brightest industry engineers and Navy test professionals—was flummoxed. We scrubbed the test, shut down the AV, and towed it back to the hangar.
Many hours later, the ground safety crew burst into the control room and announced an incredible discovery—divots on the concrete taxiway where the X-47 had been positioned. Astoundingly, after months of testing and parking in the exact same position short of the runway, the weight of the AV had created divots in the taxiway, and the divots had become so deep that on this test flight the standard amount of thrust required to start taxiing was not enough to drive the aircraft out of those divots.
This experience provided me with profound insights into autonomy. First, that the X-47 repeatedly parked within millimeters of the same spot for preflight checks over many months illustrates a previously unfathomable degree of precision in autonomous systems. I must have launched more than 100 times from that runway in an F/A-18 and certainly never stopped on the same patch of taxiway twice. Even during dynamic and challenging conditions on the aircraft carrier—the ship steaming at 25 knots in near-limit crosswinds that would challenge even the best aviators—the AV touched down on the carrier deck in precisely the same spot time after time, without fail.
More important, the experience revealed an attribute of autonomous systems that all warfighters should understand: autonomous systems lack the intuition to understand the data they process the way a human would. When stuck in the divots, the AV released brakes and added the programmed amount of power to start taxiing, but nothing happened, and the AV was out of options. There was no intuition to understand what might be preventing movement or to command a little extra power to taxi out of the divots. I might never have parked on the same patch twice as a pilot, but I would never have let a little bump in the road keep me from flying. In this case, the incident cost a single day of testing. In combat, this lack of adaptability could have much more dire consequences.
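The contrast can be sketched in a toy model. To be clear, this is purely illustrative: the function names and thrust values below are hypothetical assumptions for the sketch, not the actual X-47B control logic. The point is the structure: the autonomous routine applies its one programmed setting and stops, while the human keeps adjusting until the aircraft moves.

```python
# Hypothetical sketch of the divot lesson (illustrative values only,
# not actual flight software). Thrust is modeled as a fraction of max.

PROGRAMMED_TAXI_THRUST = 0.20   # assumed: the single preprogrammed setting
THRUST_STEP = 0.05              # assumed: a pilot's throttle nudge

def autonomous_taxi(required_thrust: float) -> bool:
    """Open-loop logic: apply the programmed thrust once.

    If that is not enough to break out of the divots, there is no
    fallback -- the routine simply reports failure and the jet sits.
    """
    return PROGRAMMED_TAXI_THRUST >= required_thrust

def pilot_taxi(required_thrust: float, max_thrust: float = 1.0) -> bool:
    """Adaptive logic: a pilot feels the jet not moving and adds power."""
    thrust = PROGRAMMED_TAXI_THRUST
    while thrust <= max_thrust:
        if thrust >= required_thrust:
            return True         # aircraft starts rolling
        thrust += THRUST_STEP   # nudge the throttle forward and retry
    return False                # truly stuck even at full power

# Divots raise the breakout thrust above the programmed setting:
stuck_threshold = 0.25
autonomous_taxi(stuck_threshold)   # False -- the AV never moves
pilot_taxi(stuck_threshold)        # True  -- a little extra power suffices
```

The failure is not in any single line of the autonomous routine; each step executes exactly as designed. The gap is the missing feedback loop that a human supplies for free.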
Even when systems can integrate a sophisticated level of artificial intelligence to learn from experience and add layers of decision-making and problem solving, warfighters will need to understand the inputs, underlying logic, and potential faults to effectively employ autonomous systems at the speed of combat.
The Autonomous Potential
Autonomous systems can provide remarkable capabilities to warfighters, making them more lethal, more precise, and safer on the battlefield. They enable operators to focus less on predictable, mundane tasks and more on building situational awareness, quickly analyzing the flood of sensor data, and making faster, better-informed decisions.
In the digital age, every warfighter is a battlefield manager. Information management and battlespace awareness are paramount, which makes the efficient allocation of attention more critical than ever before. As such, warfighters must be trained to understand the logic, capabilities, and limitations of autonomous systems. The better they understand their weapon systems, the more lethal they are. Operators must understand these systems to employ them effectively and to troubleshoot them when the inevitable interruptions happen. In addition, because autonomous modes can quickly become a crutch, failure modes must be mitigated, and warfighters must maintain their own skills no matter how well autonomous systems master them.
Human-machine teaming enables commanders and operators to leverage autonomous systems while maintaining the human judgment, intuition, and moral and ethical reasoning that the nation relies on from its warfighters. To ensure optimal and consistent employment, every warfighter must understand the logic, capabilities, and limitations of the autonomous systems they employ.