The world economic downturn appears to be stalling chip manufacturers' plans to move to the next stage of chip design. This development is likely to have considerable consequences for the defense world, because it may herald the end of Moore's Law. Formulated by computer industry executive Gordon E. Moore in 1965, Moore's Law predicted that the number of components the industry could place on a computer chip would double every year (in 1975, he revised the prediction to once every two years), implying that the cost of a unit amount of computing power would halve over the same period. That is usually taken to mean that the power of a typical chip will double in that span. Some claim that the rate of advance is now actually much faster. It is usually assumed that Moore's Law will continue until the computer industry hits a physical barrier; a typical prediction gives it 30 years to run.
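The arithmetic behind the law is simple compound doubling. A minimal sketch (the function name is illustrative, not from any source):

```python
# Moore's Law as compound doubling: component counts double every
# `period` years (two years, per Moore's 1975 revision).

def moores_law_factor(years: float, period: float = 2.0) -> float:
    """Growth factor in chip component count after `years` years."""
    return 2.0 ** (years / period)

# Doubling every two years compounds to a 32-fold gain per decade,
# and roughly a 32,000-fold gain over a 30-year run.
per_decade = moores_law_factor(10)   # 2**5  = 32
thirty_year = moores_law_factor(30)  # 2**15 = 32768
```

The compounding is what gives the law its force: over the 30-year horizon the text mentions, the cumulative factor is enormous, which is why planners treat future computing power as effectively free.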
Moore's Law is why the military currently runs on commercial or commercial-derived computers. The military procurement cycle runs several years, often more than a decade; the commercial cycle is much closer to the computer development cycle envisaged in Moore's Law. The technology available at the outset of a military program is unlikely to still be current by the time the device is fielded. Attempts to devise Mil-Spec computers with open architectures, so that commercial components could easily be inserted, have not been particularly successful.
Part of the problem is that devices fully meeting military specifications must be hardened in various ways. Merely testing them to make sure that they meet requirements—and that they are entirely reliable—may take longer than the commercial computer cycle. Higher speed generally means more complex chips, which in turn are more difficult to test, because they have more complicated ways to fail. The successful U.S. Navy solution has been to develop open systems into which successive generations of commercial computer servers can be inserted. This idea was first demonstrated in the Acoustic Rapid COTS Insertion (ARCI) program for submarines.
The issue of testing could not be ducked. The solution was to separate the submarines' systems into those that required rigorous testing (because they could fire weapons) and those that exploited the vessel's sensors to create tactical pictures—but could not in themselves cause weapons to be fired. It turned out that fire control itself entailed a relatively small computing load, and thus did not drive computer development the way that picture-creation did. The ARCI idea has since been applied to surface ships and, to a lesser degree, to aircraft. The point is that the commercial world can benefit from rapid computer chip development partly because most of its applications can tolerate computer failures that would be fatal in the operational military world.
The reality of chip design is that it takes more and more elaborate measures to compress more and more elements into chips of roughly constant size. Each measure—each manufacturing technology—lasts a few Moore's Law cycles. Once it has been exhausted, the chip makers must invest in a new technology, with a considerable up-front cost. That investment is worthwhile only if the market is willing to pay enough for a new chip. For some time there have been signs suggesting that such investments are becoming less attractive. If the main market driving chip manufacture is personal computers, it seems significant that the price of such machines has been falling steadily, i.e., the profit margin in chips is being squeezed out. It is possible that other markets drive high-end chips: engineering workstations, for example, or even the computers that investment companies use. Both markets are now in deep trouble.
Nothing, not even chip performance, can double forever. Technologies generally follow S-shaped curves, starting slowly, then leaping ahead before their curves of improvement flatten out (although they never quite stop). The S-shape is easy to explain if the development is governed entirely by available technology. At first it is difficult to wring performance out of the technology, so improvement is slow. Next the technology is almost fully understood, and it can be exploited for rapid improvement. Then the potential of the technology is nearly exhausted, and improvement is slow, as it takes more and more effort to obtain anything significant. Moore's Law approximates the beginning and middle stages of an S, but it ignores the inevitable flattening out. Those predicting the future of computing tend to act as though anything that is technologically possible will be realized. They look around them and see a torrent of applications of computing, nearly all of them unimagined a few years ago.
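The S-shaped curve described above is commonly modeled as a logistic function: growth that looks exponential early on, then flattens as it approaches a ceiling. A minimal sketch (the parameter values are illustrative assumptions, not data from the text):

```python
import math

def logistic(t: float, ceiling: float = 1.0,
             rate: float = 1.0, midpoint: float = 0.0) -> float:
    """S-curve: slow start, rapid middle, flattening near the ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Early on the curve is nearly indistinguishable from pure exponential
# growth -- the regime Moore's Law describes; later it flattens out as
# the technology's potential is exhausted.
early = logistic(-4.0)  # near zero: performance hard to wring out
mid = logistic(0.0)     # half the ceiling: fastest improvement
late = logistic(4.0)    # near the ceiling: diminishing returns
```

The point of the model is that an observer sitting in the early or middle segment sees only exponential improvement, which is why straight-line extrapolations of Moore's Law ignore the inevitable flattening.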
Economics Matter
What is ignored is the role of economics. For example, it is absolutely true that a chip ten times as fast as current chips would offer all sorts of dramatic improvements in personal electronics, but it would be viable only if the devices involved could be produced at an attractive price. Historically, the flattening of the S—the end of rapid improvement—has often come from economics rather than from some technological problem. Airliner speeds, for instance, undoubtedly followed an S-shaped curve. Had they followed something more like Moore's Law, we would now be flying hypersonic transports, and it would be easy to get from New York to, say, Tokyo in an hour or so. In the real world, supersonic airliners never reached critical economic mass (the Concorde was heavily subsidized, and was retired prematurely because it was so expensive to fly). The supersonic transports died of economics. The proposed U.S. SST could not enter service without a heavy government subsidy, and for various reasons that subsidy was not forthcoming.
What happens now? Military operations require more and more computer power; many developments now marginally possible are considered viable because the necessary computer power should soon become available thanks to Moore's Law. In a very different context, the Polaris missile was considered a viable proposition in 1957 because, as Dr. Edward Teller famously asked, "Why not plan for a 1965 warhead?" That meant a much lighter warhead than 1957 technology could provide, which could give a small missile enough punch to be more than worthwhile. It was a kind of Moore's Law, applied to warhead weight rather than to computing power. It happened that in 1957-65 enough money was available to push warhead technology ahead, and there was no insuperable technological barrier to the "1965 warhead." Had warheads hit a barrier, had there not been a "1965 warhead" for Polaris, the undersea deterrent would have had much less impact. It happened that warhead technology kept running ahead, so that Polaris could develop into Poseidon and then into Trident, with immense strategic consequences.
The most obvious application of computer power is to sensors. For example, a sensor trying to deal with a stealthy target, or with camouflage, uses computing power to make up for reduced target signature (there is no such thing as an invisible target). Given enough computing power, the barely visible target can be seen against its surroundings. If Moore's Law continues, then a system built around one or a very few chips can be effective. It will become more and more difficult to hide from missiles and small unmanned vehicles.
Even if Moore's Law stops short, it will be possible to obtain more computing power by ganging chips together. In that case it may take a physically massive device to counter a given level of stealth—something in a data fusion center, perhaps, or on board a large airplane or ship. If so, only a netted system of sensors connected to data fusion centers will be effective against stealthy targets; the centers will indicate the revealed positions of the targets to whatever attacks them. Those who set up netted systems of sensors, fusion centers, and weapons will enjoy considerable advantages over those relying on the decentralized weapons (with their own sensors) now common. Conversely, anything that breaks up the coordination of sensors, centers, and weapons will make stealth more useful. Successful coordination will entail communication and precision location, since the fusion center makes attack possible by indicating the position of a stealthy target to attackers otherwise unable to find it. This is an ultimate form of the network-centric warfare now being developed in the United States. Thus network-centric warfare, often presented as a more economical or tactically more attractive alternative to previous modes of warfare, may become inescapable in a world of stealth without Moore's Law.
It is also possible that the era of Mil-Spec will return. If the problem is dollars rather than technology, it may be time for the military to pay to keep Moore's Law running, for its own applications. We have not been doing so for some time, and it must be an open question whether we are willing to pay the full price of next-generation computing. ARCI and its ilk were attractive because the computer hardware involved was inexpensive; the money went to innovative software that exploited the commercial hardware. In a Mil-Spec world, particularly one in which the cost of each new generation of computing is high, the choice will be between platforms, weapons, and computers. We are not used to this kind of trade-off, but it may be coming.