It is often said that software coders have too little sense of the physical world—in particular, how physical systems need time to move or to correct themselves. As software gains more and more autonomy in unmanned systems, this lack of awareness has the potential to create problems.
Some decades ago, for example, the British were testing a computer-controlled diesel-electric submarine. One standard submarine maneuver is a crash reverse: When a sub collides with something, its first response is to reverse its propeller. But the process takes time because of the considerable momentum involved, and the electric motor turning the shaft will briefly overload as it tries to drive the propeller against the motion of the boat. That is not really a problem, because the motor can handle it as long as the overload does not last very long.
For this particular “swim-by-wire” British sub, software was supposed to cut off the motor automatically if it overloaded, but the coders did not understand that some overloads were intentional. That nearly proved fatal in this case, because the crash-reverse test was conducted while the submarine was diving. Without the reversed propeller, the submarine simply kept moving down. It was saved only by blowing ballast tanks. The programmers also had not understood that a physical system has inertia.
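The coding mistake can be sketched in a few lines. This is an illustration, not the actual submarine software; the function names and thresholds are hypothetical. The point is the difference between tripping on any overload and tripping only on a sustained one:

```python
# Illustrative sketch (not the actual submarine code): a motor-protection
# routine that trips on any overload versus one that tolerates the brief,
# intentional overload of a crash reverse. Names and thresholds are hypothetical.

OVERLOAD_AMPS = 1200.0        # hypothetical trip threshold
MAX_OVERLOAD_SECONDS = 30.0   # hypothetical sustained-overload tolerance

def naive_cutoff(current_amps: float) -> bool:
    """Trips the instant current exceeds the limit -- defeats a crash reverse."""
    return current_amps > OVERLOAD_AMPS

def tolerant_cutoff(current_amps: float, overload_duration_s: float) -> bool:
    """Trips only if the overload persists longer than the motor can stand."""
    return current_amps > OVERLOAD_AMPS and overload_duration_s > MAX_OVERLOAD_SECONDS

# During a crash reverse, current spikes briefly while the propeller
# fights the boat's momentum:
print(naive_cutoff(1500.0))            # True  -- motor cut, sub keeps diving
print(tolerant_cutoff(1500.0, 10.0))   # False -- brief overload allowed
print(tolerant_cutoff(1500.0, 45.0))   # True  -- genuinely dangerous overload
```

The second version encodes exactly the physical knowledge the coders lacked: that a transient overload can be deliberate and survivable.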
In general, an airplane or any other computer-controlled device will occasionally find itself in a situation coders could not or did not anticipate, as Boeing’s problems with the 737 MAX show. All recent 737s have fly-by-wire systems, as do virtually all other modern commercial and military aircraft. That is, there are no longer direct mechanical or hydraulic links between the pilot and the control surfaces; the pilot’s control inputs are interpreted by a computer, and the resulting instructions are sent digitally to servo motors that manipulate each surface. In many such systems, the computer also makes constant small adjustments without pilot input, making it difficult if not impossible for the pilot to disconnect the computer and maintain controlled flight. (For example, many military aircraft—the F-16 fighter and B-2 bomber perhaps most famously—are aerodynamically unstable; without the thousands of continual, tiny control-surface adjustments directed by their computers, many ordinary maneuvers would be fatal.)
Fly-by-wire generally makes civil air transport safer—the same argument made for driverless cars. In military aircraft, fly-by-wire improves survivability, since there can be multiple paths between pilot and computer and between computer and control surfaces. But critics argue that as pilots have less and less control, many seem to have lost their edge, especially their instinctive reactions to what an airplane does.
So, what went wrong for Boeing? The 737 family is the most popular jet airliner in history, but the 737 MAX is different from its predecessors. It has new, more efficient engines, and its construction includes many parts made from fiber-reinforced plastics—composites—to lighten the aircraft and improve fuel efficiency. These and other changes altered the aircraft’s weight and balance as well as its handling characteristics, giving the nose a tendency to pitch up in some cases. The FAA, however, decided that the changes were not sufficient to require pilots to earn a new type rating—anyone qualified on the 737-800 would be allowed to fly the 737 MAX-8, for example, without additional training. Yet the software included new features pilots needed to learn, one of which counteracts the stalls that could result from that tendency to pitch up.
Airplanes stall when the air flow over the wings breaks up, which happens when the angle between the wing and the airflow—the “angle of attack”—is too great. Pushing the nose down and increasing power is the usual response. A variety of systems warn pilots of an impending stall, from “stick shakers” (that give haptic feedback through the control stick or yoke), to horn sounds, to digital voices and on-screen alerts—sometimes all at once.
Boeing decided with the MAX to let the aircraft initiate stall recovery, allowing the software to push the nose down when a stall begins, possibly even before a pilot is aware of the situation. But some accounts of the recent crashes suggest the system might have created a feedback loop, with the aircraft overcorrecting down then up, again and again, and the pilots fighting the computer for control until the moments the planes crashed.
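A toy model shows how such a loop can oscillate instead of settling. This is not MCAS itself, only the generic control-theory point: a corrective loop that applies too strong a correction each cycle overshoots, and above a certain gain the error changes sign and grows rather than damping out:

```python
# Toy illustration (not the actual MCAS logic) of an over-aggressive
# corrective loop. Each cycle applies a correction proportional to the
# pitch error; with a gain above 2 the error alternates sign and grows.

def run_loop(gain: float, initial_error: float, steps: int) -> list[float]:
    errors = [initial_error]
    e = initial_error
    for _ in range(steps):
        e = e - gain * e  # correction overshoots when gain is too large
        errors.append(e)
    return errors

print(run_loop(0.5, 1.0, 4))  # damps toward zero: 1.0, 0.5, 0.25, ...
print(run_loop(2.5, 1.0, 4))  # diverges, alternating: 1.0, -1.5, 2.25, ...
```

The second run is the down-then-up-then-down pattern the crash accounts describe, in its simplest mathematical form.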
Improvements both to the 737’s software and the training of pilots in how the software works should allow pilots to overcome such issues in the future. But what happens to an unmanned aircraft, a UAV, when there is no pilot on board to overcome by experience what the software never anticipated?
The U.S. Army tends to allow its UAVs to fly virtually autonomously, monitored by an operator responsible for several UAVs at once. That also seems to be Navy practice. Both services essentially command their UAVs to go to a desired position then let remote personnel operate the sensor packages or weapons. The Air Force, on the other hand, prefers to have pilots actively fly its UAVs from simulated cockpits. It also has a higher crash rate than the other services, but it is not clear if there is a causal relationship between the human pilots and the mishap rate.
In all the services, though, emergency control is exercised through a radio command link. At long range, that means satellites, with a potentially meaningful lag between operator and distant UAV; digital and satellite-mediated phone calls often involve noticeable lags, because the signals involved move at high but finite speed. For high-performance UAVs, even small lags could be significant when reaction times to dangers must be in the millisecond (or smaller) range.
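The lag is easy to bound from below. The arithmetic here assumes a geostationary relay and counts only propagation delay—distance divided by the speed of light; real links add processing and queuing delay on top:

```python
# Back-of-the-envelope latency for a control loop routed through a
# geostationary satellite. Only propagation delay is counted; real links
# add processing and queuing time, and slant ranges exceed the altitude.

C = 299_792_458.0              # speed of light, m/s
GEO_ALTITUDE_M = 35_786_000.0  # geostationary orbit altitude

# Operator -> satellite -> UAV, plus the telemetry back: four up/down legs.
one_leg_s = GEO_ALTITUDE_M / C
round_trip_s = 4 * one_leg_s

print(f"one leg: {one_leg_s * 1000:.0f} ms")       # ~119 ms
print(f"round trip: {round_trip_s * 1000:.0f} ms")  # ~477 ms
```

Nearly half a second, before any electronics are involved—hopeless against dangers that demand millisecond reactions.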
Worse, soon even high-altitude communication satellites may face the threat of physical attack, for example by high-energy lasers. Their signals already are vulnerable, and the problem is growing worse. (The Air Force reports that interrupted communication may have been a factor in many of its UAV crashes.) This will put more pressure on software autonomy, unless controllers can be moved close enough to the UAVs they monitor that the aircraft can be controlled without resort to satellites.
Automated UAV activity also depends on automated navigation—GPS—and, again, system vulnerability is increasing. Some have assumed the precision navigation and timing (PNT) system would not be attacked because attackers would depend on it, too. That may well change as China deploys its own PNT system (not to mention Russia’s GLONASS). And though U.S. military systems are intended to be hardened against GPS jamming, jammers have been available for years. Alternatives exist for both communication and navigation satellites—high-flying, long-endurance UAVs, for example, or even balloons. (See “If It Floats, It Fights,” pp. 44–47, June 2018.)
The future for unmanned surface and underwater vehicles will depend on all these issues as well. In many ways, the ocean environment is more demanding than the atmosphere. Software writers will be under more pressure than ever to understand the physical environments and the unusual operations that emergencies can require. Communication links may not survive electronic or physical battle—and an autonomous submarine would by its nature not be able to maintain continuous contact even during ordinary operations. The design teams and coders will need to include more—and more-experienced—operators of the manned systems to ensure the software for unmanned ones knows what it is doing when it is left to itself by design—or by malicious activity.
The programmers have their work cut out for them.