Control has to be more than applying motors to existing input devices such as steering-wheel columns.
The automobile computer can, of course, do much more. It commonly controls fuel injection and engine ignition, the main benefit being much cleaner and more economical running. In a hybrid car such as Toyota's Prius, there is a subtle relationship between the engine and the battery. For example, as the car stops, energy from braking is recovered and stored by the battery. At times the engine is turned off and the car runs electrically. At higher speeds the engine takes over in a manner that the computer decides is most economical. This hybrid operation all but demands that the car's brakes be computer-linked to the battery. Perhaps the most efficient hybrid operation requires individual motors to power each wheel, which can be switched to resist motion for braking. Nothing but a computer can calculate exactly how that should be done.
The same network makes it possible for sensors throughout the car to monitor its other functions automatically. These may measure engine temperature, tire pressure, or oil level, and report problems on the dashboard. They may also, however, sense that the car is about to skid, and direct the computer to apply the brakes accordingly.
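The kind of sensor-monitoring-to-dashboard loop described above can be sketched in a few lines. The sensor names and acceptable ranges here are illustrative assumptions, not figures from any actual car:

```python
# Hedged sketch of dashboard-style sensor monitoring over a shared network.
# Sensor names and limits are assumptions for illustration only.

SENSOR_LIMITS = {
    "engine_temp_c": (0, 110),    # flag a reading outside this range
    "tire_psi":      (28, 36),
    "oil_level_pct": (20, 100),
}

def dashboard_warnings(readings: dict) -> list[str]:
    """Compare each sensor reading against its limits; return warning text."""
    warnings = []
    for name, value in readings.items():
        low, high = SENSOR_LIMITS[name]
        if not (low <= value <= high):
            warnings.append(f"{name}: {value} outside {low}-{high}")
    return warnings

print(dashboard_warnings({"engine_temp_c": 118, "tire_psi": 31, "oil_level_pct": 70}))
# -> ['engine_temp_c: 118 outside 0-110']
```

The interesting cases, as the article notes, are not these simple threshold checks but the ones where the computer must infer a condition (an incipient skid) from several sensors at once and act on the inference.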
In most cars, automation of this sort is limited to engine control. This is usually discovered when it is no longer possible for anyone without a computerized shop to tune the engine. Toyota's problem shows both the potential and dangers inherent in pushing this technology. Similar issues arise for fly-by-wire aircraft and for the widely discussed swim-by-wire submarine. They are just easier to see in a car.
The issue is one of reliability. What counts is the combination of computer hardware (usually with multiple interacting computers) and the software it runs. Does the software always react as desired to a wide variety of circumstances? To some extent, this translates into asking how well the programmer(s) who prepared the software understood the issue.
Software usually works, but the horror stories are frightening. For example, on her trials the British submarine Upholder, with computer-controlled machinery, almost sank. A standard test is a crash stop and reverse. This normally overloads the electric motor propelling the submarine, but only briefly and with acceptable loads. To protect the motor, the programmer made it cut out if it was too badly overloaded. The submarine was at a down angle when the crash reverse was ordered. Normally the motor would simply have pulled the submarine back up. In this case, the computer cut the motor. The submarine survived, and the program was rewritten. That was not a difficult fix because the program was relatively simple, and it did not interact with a host of other computers. The more computers and sensors involved, the more complex the situation, because whatever has been programmed has to handle many more distinct and unexpected situations.
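The Upholder story is a textbook case of a protection rule that is correct in isolation and dangerous in context. A hypothetical sketch of the flaw and the kind of fix described above, with all names and thresholds invented for illustration:

```python
# Hypothetical sketch of the Upholder-style overload cutout described above.
# All function names, thresholds, and units are illustrative assumptions,
# not the actual submarine's code.

OVERLOAD_LIMIT = 150.0  # percent of rated motor load (assumed figure)

def motor_command_naive(load_pct: float, commanded_power: float) -> float:
    """Original logic: protect the motor by cutting it on overload,
    regardless of what the boat is doing."""
    if load_pct > OVERLOAD_LIMIT:
        return 0.0  # safe for the motor, not necessarily for the boat
    return commanded_power

def motor_command_revised(load_pct: float, commanded_power: float,
                          pitch_deg: float, depth_m: float) -> float:
    """Revised logic: tolerate a brief overload when cutting power would
    leave the boat diving with no way to pull itself back up."""
    diving = pitch_deg < -5.0 and depth_m > 20.0  # assumed thresholds
    if load_pct > OVERLOAD_LIMIT and not diving:
        return 0.0
    return commanded_power  # accept the brief overload and keep pulling up
```

The point of the example is the article's point: the fix was easy because one small program had to consider one extra condition. Multiply the sensors and computers, and the number of such conditions grows far faster than anyone can enumerate.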
Computer chips make the situation more interesting. Chips evolve very rapidly, generally on a 9- or 18-month cycle. A car model with a production run measured in years is likely to have several different chips, or different versions of what is nominally the same chip, installed across its life span. The U.S. military encounters exactly this situation as it uses commercial chips almost exclusively. One major driver in the chip business is the intense pressure to cut costs. Chips are constantly being redesigned, even when they are not redesignated. Redesign is done on what is called the nano level, inside the chip. The chip has an internal translator mediating between the nano language in which it operates and the micro language in which it is programmed.
Unhappy U.S. military users learned some time ago that it was by no means obvious that two chips with the same designation performed identically. Worse, the time scale for confirming that a new batch of chips behaved as expected sometimes exceeded the time cycle between chip designs. One might speculate that the more intense the pressure to cut chip cost, the better the chance that design errors will slip in. Such errors may not be evident, because the programs running on the computer may not display them. Unfortunately, these programs also may run directly into such flaws.
Does Not Compute
It is literally impossible to simulate the millions of possible situations that may confront a given computer program; conversely, one that performs perfectly for days or months or years may suddenly fail. Most computer users have had this experience. After a computer crash, the primary solution is a system reboot. That is rather difficult in a car traveling at 60 miles per hour. The automobile situation is particularly difficult because there are so many of them. You might imagine that a system that suffered only one failure every 100 million hours was very reliable. However, in a year the average car probably runs for 200 hours or more. Imagine there are a million cars on the road using a particular combination of software and hardware. That is 200 million car-hours, and on average that extraordinarily reliable system would suffer two failures a year.
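The fleet arithmetic above is worth spelling out, because the conclusion surprises people: a per-system failure rate that sounds astronomically good still produces regular failures across a large fleet.

```python
# The fleet-reliability arithmetic from the paragraph above, spelled out.
mtbf_hours = 100_000_000       # one failure per 100 million operating hours
hours_per_car_per_year = 200   # average annual running time assumed in the text
cars_on_road = 1_000_000       # fleet sharing the same hardware/software

fleet_hours = cars_on_road * hours_per_car_per_year  # 200 million car-hours
expected_failures_per_year = fleet_hours / mtbf_hours
print(expected_failures_per_year)  # -> 2.0 failures per year across the fleet
```

Note the assumption buried in the arithmetic: every car in the fleet runs identical software, so a single latent bug is shared by all of them rather than averaged away.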
Most computer users are aware that programmers are human, and that complicated programs contain a percentage of errors. Often patches in big programs merely isolate chunks of bad software. Such problems confronted those who advocated fly-by-wire, in which some errors were literally fatal. Yet fly-by-wire was inescapable for airplanes that were highly maneuverable because they were inherently unstable (like the F-16) or were so unaerodynamic they could not be flown without computer intervention (like the F-117).
The solution has largely been twofold. Aircraft generally use multiple computers, which constantly check each other. They may fail together, but the chances are that they will not make exactly the same random errors at the same time. The alternative is manual override, which the operator can activate in the event of a system failure. Some current drive-by-wire cars incorporate such devices. Toyota's developers may have had such faith in their systems that they saw no point in overrides. Some recent testimony suggests that their software was easier to circumvent (e.g., to drive dangerously) than that of other manufacturers.
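The cross-checking arrangement described above is usually implemented as some form of majority voting among redundant channels. A minimal sketch, with the tolerance and fallback behavior chosen purely for illustration:

```python
# Minimal sketch of majority voting among redundant flight/drive computers,
# as in triple-redundant fly-by-wire systems. Tolerance and the fallback
# behavior are illustrative assumptions.

def vote(outputs: list[float], tolerance: float = 0.01) -> float:
    """Return the value a majority of channels agree on (averaged).
    Raise if no majority agrees within tolerance."""
    for candidate in outputs:
        agreeing = [o for o in outputs if abs(o - candidate) <= tolerance]
        if len(agreeing) > len(outputs) // 2:
            return sum(agreeing) / len(agreeing)
    raise RuntimeError("no majority -- fall back to manual override")

# Three channels: one computer returns a corrupted value, the other two agree,
# so the corrupted channel is outvoted.
print(vote([0.500, 0.502, 9.999]))
```

The design rests on exactly the assumption the article names: the channels may all fail eventually, but they are unlikely to make the same random error at the same moment. The no-majority case is where a manual override earns its keep.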
The problem may have been cultural. Cars are mechanical. Automotive engineers do not normally evaluate computer systems. In Toyota's case they may have been too ready to believe what those from a completely different realm of engineering told them. That would be an eerie echo of the recent (or perhaps continuing) economic mess. One persuasive explanation of the financial crack-up is that the markets were taken over by those who spoke a radically different, mathematically sophisticated, language. How could the usual stock market operator evaluate a statement that a particular problem could happen only once in a billion years? He would understand the words perfectly well, but probably not the mathematical assumptions behind them. It is not hard to imagine a Toyota engineer, extremely experienced in car design, being confident that a program would not fail more than once in a million hours, without any ability to evaluate the accuracy of the statement. Few in the military will fail to recognize much the same problem in their everyday lives.
The bottom line is not that computer-controlled systems are evil and ought to be killed. They offer too much that nothing else can provide. However, they also offer new modes of failure, which we ignore at our peril. It has been far too easy for those of us who lack an instinctive understanding of the subtleties of both computers and exotic mathematics to swallow statements whole, whether they are about the safety of the stock market, or the reliability of a car (or airplane or missile), or about the reliability of the data on a combat system screen.