Shortly after his appointment as Secretary of Defense in 1961, Robert S. McNamara asked for staff studies on various questions, and he requested responses with accelerated deadlines never before experienced in the history of the Pentagon. These requests were soon followed by the establishment of his new programming system—the Five Year Force Structure and Financial Plan—which permitted an analysis of requirements based on mission and the accompanying weapon system. Unhappily, this sound programming system was, and is, a substantial administrative burden. Moreover, it is viewed by many as a duplication of the Congressional budget structure and a dangerous step toward the centralization of power.
The advent of Secretary McNamara brought still another first: cost-effectiveness—known also as operations research, operations analysis, economic choice, and quantitative analysis—the evaluation of all facets of a weapon system from its inception until its demise.
The discipline of economic choice has been in existence for some time. With roots in both mathematics and economics, cost-effectiveness could be said, theoretically, to have begun with the Chaldeans or Babylonians of Alexander’s time. Practically speaking, however, the embodiment of this technique occurred during World War II, in resource-apportionment work begun in the United Kingdom. The scarcity of British capital to meet skyrocketing military requirements in the struggle with Nazi Germany necessitated an optimum apportionment of available funds. In the interim—from Alexander to Hitler—many individuals and disciplines contributed to the science of operations research.
The U. S. military (as well as its industrial contemporaries) accepted cost-effectiveness most reluctantly. It was unheard of in the Navy of 1941, for example, for a mathematician or an engineer to attempt to represent naval operations mathematically, let alone analyze or evaluate them. Nevertheless, in May 1942, the Navy established the Anti-Submarine Warfare Operations Research Group (ASWORG), the predecessor of today’s Operations Evaluation Group of the Center for Naval Analyses. ASWORG’s primary mission—as one can readily see from the name—was to solve some of the tactical antisubmarine warfare problems of the day. Following World War II, this operations research group became the Operations Evaluation Group (OEG), which expanded its efforts to include the investigation of historical naval operational problems as well as the development of improved tactics and weapons effectiveness.
In 1956, the direction of the study effort was broadened. While still retaining its direct association with Fleet operations and readiness in the Office of the Chief of Naval Operations (OP-03) as OEG (OP-03EG), the group provided expanded and direct support for the Long Range Objective Group (OP-93). None of these groups, however, analyzed the cost implications of individual strategies, weapons, or tactics. Thus, although the Office of the Chief of Naval Operations had for several years been using operational analysis as a means of evaluating feasibility and investigating the results of fleet exercises, these efforts were directed largely to the effectiveness side. The analyses were made to determine the types and mix of future weapons systems, the capabilities of those systems, and the logistic support needed for the naval forces of the future.
It was not until Secretary McNamara and the Defense Department Comptroller, Charles Hitch, from the Air Force “operations research factory”—The RAND Corporation—came on the scene that cost-effectiveness, or economic choice, became the yardstick for weapons selection and strategic decision-making. Yet, despite the urgings of Messrs. McNamara, Hitch, et al., the Navy was the last of the services to accept this technique and use it in procurement planning.
Why was the Navy so reluctant to accept the concept of mathematical analysis as a means of procurement decision-making? The answer seems to lie in the fact that the line naval officer has been educated and nurtured to make reasoned—yet relatively swift—decisions based on the evident factors bearing on a subject. Over the years, line officers (as well as others) have not been required to make mathematical analyses to solve problems or form judgments. For example, when a ship commander is called to his bridge in the middle of a moonless night to decide how to avoid another vessel in extremis, his decision is instantaneous. This empirical thought process—although not always applicable in the business world—was extended to that world when the line officer was assigned to the Pentagon.
It was difficult, then, for the Navy to acquire this new discipline. The turning point came—through pressure from above more than acquiescence from within—when the Secretary of Defense directed a study of the sea-based air strike forces. Cost-effectiveness became a reality in the Navy.
The need for examining the Navy’s carrier force was not new. Attack carriers had come under close scrutiny many times in the preceding decade. Subjective analysis and “military judgment,” however, had been sufficient to silence the CVA’s critics. This time a “new” yardstick—cost-effectiveness—would be applied. The Navy members of the team assigned to study the CVA weapon system had little previous experience with this new technique. True, there were a few “technicians” from OEG, the Navy’s counterpart of the RAND Corporation, but the majority of the team members were operationally oriented naval officers. To this mixture were added one or two members of Dr. Alain Enthoven’s weapons system analysis team (Office of the Assistant Secretary of Defense, Systems Analysis). After some six months of intensive effort, the attack carrier study group reported its results.
Gratifying as the results were—the need for CVAs was confirmed—to this observer the study seemed fundamentally invalid. Is there, in fact, anything but an “apples and oranges” comparison between a carrier weapon system, a self-supporting entity, and an Air Force Composite Air Striking Force, which requires not only an airfield in a friendly nation but also prepositioned facilities for maintenance, operations, and other housekeeping details?
But the Navy learned its lessons well from this adventure. To justify financial requirements to the Secretary of Defense, cost-effectiveness must be ascertained. And it would seem prudent to employ members of Secretary McNamara’s own team on important studies in order to ensure the Defense Department’s approval of both the procedures and, at least inferentially, the results.
Following the attack carrier study, cost-effectiveness techniques were used to determine the conventional ordnance requirements of the Navy, with results that startled many inside the Navy as well as the planners and programmers of the Air Force and OSD. The planned conventional ordnance requirements were radically altered, so much so that the disbelieving Air Force analysts subsequently investigated the area completely themselves—with generally corroborating results.
Other uses were also made of this new technique. For some time after Secretary McNamara’s assumption of office—although not necessarily associated therewith—the Navy’s ability to support its inventory of new aircraft had become more and more suspect. The crux of the problem was the continuing need to fund the first year’s procurement of aeronautical spares (major components, such as engines) and repair parts (individual items needed to repair major components), as well as the first year’s procurement of a new weapons system, prior to any actual experience being obtained. Understandably, Navy planners had not always been able to forecast accurately which spares or repair parts would fail, nor, obviously, could they predict their relative failure rates. The Navy’s critics had their day when the Navy advised OSD of the unanticipated need for more funds to procure spares and repair parts for high-performance aircraft. It was suggested that the Navy reorganize, changing its management procedures in order to predict more accurately which parts would fail. To force prompt action, the budgeteers reduced the funds available for spares and spare (or repair) parts. In an attempt to solve this problem and obtain more accurate requirements for aeronautical spares and repair parts, the Navy employed another tool of the quantitative analysis trade: “sampling.”
Sampling is neither very sophisticated nor very new, having been used for many years in industrial quality control and by the Bureau of the Census to estimate the nation’s population. Many believe it to be more accurate than an actual item or head count. In practice, sampling involves the apportionment of individuals into homogeneous groupings: dividing people according to yearly wages earned, by nationality of forebears, or by religion, for example. As modified by the Navy, sampling divided the aeronautical spares and spare parts into groups, first according to cost and second according to whether the items were repairable or consumable. From the more than 400,000 individual items, a homogeneous group of 2,000 was reviewed in depth, with the result that the Navy justified an increase of 11.6 million dollars in Fiscal Year 1965 and some 40 million dollars in 1966.
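The mechanics can be suggested in a few lines of modern code. The sketch below is illustrative only: the strata (cost band crossed with repairable/consumable) follow the description above, but the item data, the sample size per stratum, and the in-depth review are hypothetical stand-ins, not the Navy’s actual procedure.

```python
import random

# A minimal sketch of stratified sampling for spares requirements,
# assuming hypothetical item data. Items are grouped into homogeneous
# strata, a sample from each stratum is "reviewed in depth," and the
# sample mean is scaled up to the stratum size.

def estimate_total_requirement(inventory, sample_per_stratum=50, seed=1):
    """inventory: list of dicts with 'cost_band', 'repairable' (bool),
    and 'requirement' (dollars; in practice known only for the items
    actually reviewed in depth)."""
    random.seed(seed)

    # Group items into homogeneous strata: cost band x repairable/consumable.
    strata = {}
    for item in inventory:
        key = (item["cost_band"], item["repairable"])
        strata.setdefault(key, []).append(item)

    total = 0.0
    for items in strata.values():
        n = min(sample_per_stratum, len(items))
        sample = random.sample(items, n)
        mean_requirement = sum(it["requirement"] for it in sample) / n
        total += mean_requirement * len(items)  # scale mean to stratum size
    return total
```

The appeal of the method is visible even in this toy form: only the sampled items need the expensive in-depth review, yet every one of the 400,000 items is represented through its stratum.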
Significant progress has also been made in firming up other support requirements. As part of the same effort, it became possible to forecast the workload for the repair of repairable spares and to produce more accurate spare parts allowance lists. It was (and, to some extent, still is) in this allowance list area that subjective “guesstimates” had been the rule, completely obscuring real requirements.
At about the same time as the effort to improve the spare parts situation in the aviation arm, the need for a follow-on, carrier-based light attack plane became evident. A full-blown cost-effectiveness analysis of the various candidates was directed by OSD, and the Navy Department went to work. Once again an amorphous mixture of operational experience, technical (design) expertise, and cost-effectiveness measures was used to decide that the aircraft, designated A7A, would be built by Ling-Temco-Vought. Although the decision was clearly defensible—and for that reason the study effort was important to OSD—this writer believes the real value of the analysis lies in less obvious areas. Quality control, where stricter requirements have been placed on the contractor, and more realistic contracting procedures, where a firm fixed price and firm target dates were negotiated, are two examples in which substantial ancillary benefit resulted from the study effort.
There are other programs where the technique of economic choice is being used—to justify the revamping or modernizing of weapons systems and to provide justification for Program Change Proposals (PCPs), the administrative means for changing the Five Year Force Structure and Financial Plan—in addition to the highly publicized closings of the naval yards.
The recent advent of operations analysis groups in the Navy Department is a sign of these analytical times, and the question therefore arises: how valuable is the cost-effectiveness technique? As in most controversies, there is no pat answer. The technique is mathematically oriented, but figures can mislead. For example, cost estimates appear to be the more easily obtainable of the two sides of the problem—cost and effectiveness. The services have weapons and programs in their inventories comparable to the one they are asked to analyze, from which a cost can be extrapolated. Yet almost every single weapon system costs more than originally estimated. This is not unusual; it is difficult to forecast the cost of any new invention.
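To make the extrapolation point concrete, a planner’s arithmetic might look something like the following. Every figure here is an invented illustration, not an actual program cost.

```python
# A hypothetical sketch of cost estimation by analogy with a comparable
# system already in the inventory. All figures are invented.

analog_unit_cost = 2_000_000      # dollars, the comparable system
complexity_factor = 1.3           # new system judged ~30% more complex
point_estimate = analog_unit_cost * complexity_factor

# Since almost every system has cost more than originally estimated,
# a growth allowance can be carried alongside the point estimate.
growth_allowance = 1.4            # assumed historical overrun factor

print(f"Point estimate:        ${point_estimate:,.0f}")
print(f"With growth allowance: ${point_estimate * growth_allowance:,.0f}")
```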
In measuring effectiveness, there are many hazy areas. For example, the accuracy of any kill-probability prediction becomes doubtful when one looks at the postwar analysis of the Eighth Air Force bombings of Germany, where the predicted damage (kill) probabilities were surely not consistent with the actual damage. But there are other bad guesses that have caused more than red faces. Consider the F3H Demon, which the Navy had to float down the Mississippi on barges because the engine designed for its use never developed the thrust it should have!
How does one measure the relative security of the FBM submarine against its location and destruction—by the number of sweeps or sweep width of a conventional ASW surface vessel? There are many other intangibles in this effectiveness area.
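The sweep-width half of that question does have at least one classical operations-research formulation: the random-search model, under which a searcher of sweep width W, speed v, and search time t in an area A detects its target with probability P = 1 − exp(−Wvt/A). Nothing here confirms that the FBM studies used this particular measure; the sketch below, with invented figures, simply shows how such an intangible gets turned into a number.

```python
import math

# Random-search detection probability: P = 1 - exp(-W*v*t/A).
# The figures below are invented for illustration.

def detection_probability(sweep_width_nm, speed_kts, hours, area_sq_nm):
    coverage_ratio = sweep_width_nm * speed_kts * hours / area_sq_nm
    return 1.0 - math.exp(-coverage_ratio)

# One ASW vessel, 10-nm sweep width, 15 knots, searching for 24 hours
# in a 100,000-square-nautical-mile patrol area:
print(detection_probability(10, 15, 24, 100_000))  # ~0.035
```

However precise such a number looks, it rests on inputs (sweep width against a quiet submarine, the size of the operating area) that are themselves judgments.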
But some things can be counted and weighed—numbers of people, for instance. And perhaps the most significant weakness of operations analysis today is the shortage of high-quality analysts. With the increase in cost-effectiveness analyses in the Department of Defense and the widespread use of the same disciplines by industry, the demand for analysts has far exceeded the supply. This situation has created costly and often inadequate analysis efforts in many activities both inside and outside the government, although OSD and the individual services are attempting to remedy it by increasing the number of service personnel trained in these disciplines. The Naval Postgraduate School at Monterey, California, which has been teaching this technique and offering a Master’s Degree in Operations Analysis for over ten years, is being used by OSD to augment the number of operations analysts throughout the services by offering a cram course to interested personnel. The problem is considerable, but it should be alleviated as young military officers come to realize the job opportunities existing in this field in both government and industry. Unfortunately, the demand of industry will take its toll, and many young military analysts will be recruited by big business, so that the long-term problem will be ameliorated only slightly.
Let us look at another view of operations analysis. Although OSD demands these analyses in advance of expensive procurement programs, it must be said that they provide another means of evaluating a weapons system, different from the normal service methods. An analysis, however, is only as good as its inputs—the terminal accuracy of a missile, for example, the performance of an engine, the accuracy and reliability of the guidance system, the costs. And, looking at the other side of the coin, the danger of reducing all these individual values to a single cost-effectiveness index is considerable. This is perhaps the most important limitation, other than the shortage of expert analysts, because judgment alone dictates the weighting factors to be used. For this reason one must be careful to view such indices judiciously.
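The weighting danger is easy to demonstrate. In the hypothetical sketch below (all scores and weights are invented), two equally defensible sets of judgment-chosen weights reverse the ranking of two candidate systems, which is exactly why a single index deserves a judicious eye.

```python
# Collapsing several measures of effectiveness into one index with
# judgment-chosen weights. Scores and weights are hypothetical.

candidates = {
    "System A": {"accuracy": 0.9, "reliability": 0.6, "range": 0.7},
    "System B": {"accuracy": 0.6, "reliability": 0.9, "range": 0.8},
}

def composite_index(scores, weights):
    return sum(scores[measure] * w for measure, w in weights.items())

weights_1 = {"accuracy": 0.6, "reliability": 0.2, "range": 0.2}  # analyst 1
weights_2 = {"accuracy": 0.2, "reliability": 0.4, "range": 0.4}  # analyst 2

for name, scores in candidates.items():
    print(name,
          round(composite_index(scores, weights_1), 2),  # A 0.80, B 0.70
          round(composite_index(scores, weights_2), 2))  # A 0.70, B 0.80
```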
Quantitative analysis is designed for, and should be synonymous with, objectivity, for one must acknowledge that mathematical solutions are objective. In analyzing the cost-effectiveness of a weapon system, however, one can slant an analysis through shrewd use of assumptions and of particular measures of effectiveness (and the non-use of others), with the obvious result that objectivity is lost.
The burgeoning administrative cost of studies is also a cause for concern. A single analysis of a weapon system going into production is clearly indicated, to determine its relative merit and whether it offers a sufficient improvement over a competing system. But must there be duplicate effort at every management level? At present, the Navy has a Weapon System Analysis Office responsive to the Commander, Ordnance Systems Command—and, of course, also responsive to the Chief of Naval Material. The Chief of Naval Operations has established an ever-increasing capability to analyze plans and programs from a quantitative point of view and augments this in-service group with analysts from the Center for Naval Analyses. On top of that, there is the group headed by Dr. Enthoven. There is also a group doing comparable work under the Director of Defense Research and Engineering, Dr. John S. Foster, Jr. The JCS has its Weapons Systems Evaluation Group. There is the Institute for Defense Analyses. Secretary McNamara talks of duplication of effort in the services, but there is evidence that his own office is duplicating the work of the individual services.
The services have acknowledged the efficacy of this analytical technique—the existence of their analysis groups alone supports this statement. Admittedly, a skeletonized analytical team is needed in OSD—but not to verify every penny in the cost estimates, nor to second-guess each analysis. Rather, this group should provide the Secretary of Defense with advice on the quality of an analysis by evaluating the use of proper measures of effectiveness and determining whether the necessary inputs (quality control in electronics performance, for example) have been included. Now that the Navy has accepted the usefulness of cost-effectiveness, OSD personnel should limit their duties to review only. This review should not be one in depth, requiring many man-hours by both OSD and service analysts (who are normally required to augment their original data), but the kind of review a general manager gives a production report. As currently staffed, the OSD cost-effectiveness groups are actually checking every input and every cost and are no longer accepting “producer” estimates—that is, the estimates of the Navy’s Systems Commands, where personnel with years of cost experience have been and are employed to develop this expertise. Now OSD is hiring some of these same cost-estimating experts away from the services and placing them in ivory towers, where they will quickly lose their capability because they will no longer be close to the scene at the procurement/producer level.
In sum, this technique offers the decision-maker—the admiral in the Naval Material Support Establishment producing a weapon, or the admiral in OpNav who must determine the need for that weapon—a more sophisticated input than the subjective inputs of the past. No longer will decisions hinge solely on questions of manufacturer reliability or capability, or upon anticipated wartime need for the manufacturer’s capacity. Military judgment must include the input of a cost-effectiveness analysis, but the latter should not assume overriding dimensions and misdirect judgment—the tail should not wag the dog!