Not much, says a retired Pentagon official. Now, Deputy Secretary of Defense Gordon England wants another comprehensive review of the process. But the author claims previous studies have produced sound acquisition directives and guidelines that have yet to be followed properly.
Department of Defense leaders are painfully aware of the fiascoes that continue to plague the acquisition of weapon systems. "Something's wrong with the system," Secretary of Defense Donald Rumsfeld recently told Congress. Further evidence of that concern came in a 7 June 2005 memorandum from then-acting Deputy Secretary of Defense Gordon England. Addressed to senior Pentagon leadership, it directed a thorough assessment of the process "to consider every aspect of acquisition, including requirements, organization, legal foundations, decision methodology, oversight, checks and balances - every aspect." In kicking off the next in a long series of studies, Secretary England stated: "Many programs continue to increase in cost and schedule even after multiple studies and recommendations that span the past 15 years."
Actually, several high-level efforts over the past 35 years have aimed at reforming the defense acquisition process. In some cases, these were responses to egregious mismanagement and other acquisition horror stories. While DoD's acquisition policies and directives adopted many of the most substantive findings and recommendations of these reviews, too often the people managing the process lacked the will to carry them through and implement them in program decisions.
Recurring Management Reform Efforts
Instead, every three or four years, high-level studies have reviewed DoD management in general and the acquisition process in particular. The 1970 Fitzhugh (Blue Ribbon) Commission was followed by the 1977 Steadman Review, the 1981 Carlucci Acquisition Initiatives, the 1986 Packard Commission and Goldwater/Nichols Act, the 1989 Defense Management Review, the 1990 Defense Science Board (DSB) Streamlining Study, another DSB Acquisition Streamlining Task Force in 1993-94, the Total System Performance Responsibility initiative of the late 1990s, and the early 2000s focus on Spiral Development and Capabilities-Based Acquisition.
The common goal was to streamline the acquisition process to reduce the burgeoning costs of new weapons. In so doing, these commissions and task forces hoped to drastically cut system development and production times (and thereby costs) by reducing management layers, eliminating certain reporting requirements, using commercial off-the-shelf systems and subsystems, reducing oversight from within as well as from outside DoD, and eliminating perceived duplication of testing, among other initiatives.
What Went Wrong?
After all these years of repeated reform efforts, major defense programs still take 20 to 30 years to deliver less capability than planned, very often at two to three times the planned cost. These continuing problems exacerbate the severe force-modernization shortfalls facing the military services now and into the future. The loss and heavy use of equipment in Iraq and Afghanistan make the shortfalls even more critical.
Secretary England's memorandum acknowledges this sad state of affairs. While some applaud this latest attempt to confront the persistent problem, experience over the years has convinced many observers that the fundamental shortcoming has been, and continues to be, the failure of the acquisition community, from program managers to senior decision-makers and their advisors, to carry out the letter, not to mention the intent, of DoD's existing acquisition directives and guidelines. These directives include many of the critical findings and recommendations that emanated from the reform efforts mentioned previously.
Key Findings from Previous Studies
Some of these findings and recommendations are worth noting as DoD undertakes its latest review. One key misconception should be dismissed right away. While oversight by government agencies and their reporting requirements can indeed be burdensome, they clearly are not the cause of the continuing miserable record of program stretch-outs and cost growth. This is true whether those agencies and their reporting requirements are internal to DoD, such as the Defense Contract Management Agency, independent cost-analysis groups, and operational test and evaluation organizations, or external to it, such as congressional committees and their staffs and the Government Accountability Office (GAO).
Instead, the 1990 streamlining study, covering some 100 major defense acquisition programs, reached a firm conclusion. Failure to identify technical issues, as well as real costs, before entering full-scale engineering development (now referred to as engineering and manufacturing development) was the overwhelming cause of subsequent schedule delays and the resulting cost increases. To the extent oversight played any role in these delays, the discovery and reporting of test failures during development often called for additional time and dollars for system redesign, testing and retesting of fixes, as well as costly retrofits of those fixes.
Despite overwhelming evidence that oversight, per se, was not the cause of continuing cost and schedule growth, DoD leadership in the mid-1990s implemented a strategy known as total system performance responsibility for several key acquisition programs. In essence, this strategy relieved development contractors of many reporting requirements, including cost and technical progress, and built a firewall around the contractor, preventing government sponsors from properly overseeing expenditure of taxpayer dollars.
Several major acquisition programs contracted for their development and engineering activities under this much-ballyhooed strategy. Consequently, many of these programs soon hit the headlines with huge technical and cost problems. For example, the Army's Theater High Altitude Area Defense (THAAD) system experienced several high-profile missile failures in development testing.
A high-level independent technical review of the program, undertaken in the late 1990s, found that the contractor, trying to maintain cost and schedule, had skipped or postponed some basic ground testing of the missile and its subsystems before proceeding with the doomed missile shots. When questioned by the independent review panel on how this had come to pass, the program manager stated that he had no contractual means to pressure the prime contractor, Lockheed Martin, to carry out the planned ground tests. This review coined a most appropriate phrase, "rush to failure," to describe the sequence of events leading to the test fiascos.
Another example of a program that followed the total system performance responsibility model was the Air Force's Space-Based Infrared System-High, known as SBIRS-High, undertaken in the mid-1990s as a much-needed replacement for the Defense Support Program early-warning satellites.
The new system was to be fielded in the early 2000s to provide a much more capable missile-launch warning and tracking capability for defense against tactical and strategic ballistic missiles. Instead, the program continues to be wracked with technical problems, schedule delays, and huge cost increases. By early 2005, program cost estimates had grown by a factor of three from the initial estimates of $4 billion, and the scheduled launch of the first satellite was delayed into the late 2000s.
A second Nunn-McCurdy (legislation stipulating that DoD must report any cost overruns of 15% or more to Congress) program cost breach appears to be in the offing. The first program cost breach had been the subject of a Defense Acquisition Board meeting in early 2002. At that review, the program manager revealed that he had had no warning or insight into the contractor's growing technical and cost problems because of the total system performance responsibility nature of the government's arrangement with the prime contractor, Lockheed Martin. Such arrangements have become known in some circles as performance-based acquisition management. (See sidebar.)
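The Nunn-McCurdy arithmetic described above is simple enough to sketch. The following illustrative Python snippet (the function names are invented for this example, and the 15% figure is the reporting threshold as stated in this article) checks a program's cost growth against that threshold, using the SBIRS-High numbers cited here:

```python
def cost_growth_pct(baseline_cost, current_estimate):
    """Percent growth of the current cost estimate over the original baseline."""
    return (current_estimate - baseline_cost) / baseline_cost * 100.0

def breaches_nunn_mccurdy(baseline_cost, current_estimate, threshold_pct=15.0):
    """True if cost growth meets or exceeds the congressional reporting threshold."""
    return cost_growth_pct(baseline_cost, current_estimate) >= threshold_pct

# SBIRS-High, per this article: ~$4 billion initial estimate,
# roughly tripled by early 2005.
growth = cost_growth_pct(4.0, 12.0)        # 200% growth
breach = breaches_nunn_mccurdy(4.0, 12.0)  # far past the 15% threshold
print(growth, breach)
```

By this measure, SBIRS-High's roughly threefold growth exceeds the reporting threshold more than tenfold, which is why repeated breaches were in the offing.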
Underestimating Technical Problems Dooms Programs
One need only examine the history of three of DoD's largest and most controversial programs over the past 20-plus years to further reinforce the fact that undertaking major developments without understanding key technical issues is the root cause of major cost and schedule problems.
The Army's Comanche armed reconnaissance helicopter program began in 1981 as the LHX, planned at the time to replace the Army's fleet of UH-1 utility and AH-1 Cobra attack helicopters. After spending billions of dollars over two decades and restructuring the program several times in response to continuing technical problems and cost growth, the Army's leadership canceled it in 2004.
The Department of the Navy's MV-22 Osprey program has a similarly checkered history. Initiation of the joint Army/Marine Corps JVX program was approved in fall 1981, followed the next year by approval to enter the demonstration/validation phase. Later, a Milestone II review in 1986 approved the program's entry into full-scale development.
Begun as a much-needed replacement for the Marine Corps' aging CH-46 medium-lift helicopter fleet, the MV-22 completed its second operational evaluation in mid-2005 (having failed its first in 2000), a prerequisite for the full-rate production decision due shortly. In the meantime, the Marine Corps had procured more than 60 MV-22s in low-rate initial production over the past six or more years, running the risk of needing additional funding for these aircraft to incorporate fixes to problems uncovered in testing after their procurement. In any event, close to 25 years and about $15 billion later, it appears that the Pentagon has finally reached the point of replacing the 1960s-vintage CH-46s.
In a similar vein, a debate ensued in the early 1980s about the scope and requirements for the Air Force's advanced tactical fighter program. The Defense Acquisition Board review in fall 1986 approved entry into the competitive demonstration/validation phase for this program. The winner of that competition, the F-22, entered full-scale development in 1992 and was approved for low-rate initial production in 2001.
After encountering numerous technical problems, the F-22 finally completed its initial operational test and evaluation in 2004 and obtained approval for full-rate production in early 2005. Thus, 20 years and close to $40 billion after the program started, the F-22 achieved an initial operational capability in December 2005. Sold on the basis of a buy of more than 700 aircraft and a unit flyaway cost of $30 million to $35 million in 1986, the DoD budget now calls for a total procurement of 180 aircraft at three to four times the unit cost forecast during full-scale development.
Front End of the Process Sows Seeds for Future Problems
These programs also epitomize what observers have found to be a fundamental deficiency in the overall acquisition process: the front end. They have pinpointed the development and setting of requirements, both technical and operational, as sowing seeds for future problems. Among the proposed remedies in this area has been a repeated call for attainable, affordable, and testable requirements based on realistic threat projections and performance/cost tradeoffs that, in turn, rely on projections of realistic system life-cycle costs and force levels.
Unfortunately, the process has sanctioned, if not heartily approved, the appetite of the users for quantum leaps in capability reflected in high-risk, sometimes unattainable, technical and operational requirements. Many of these "reach-for-the-moon" performance goals have resulted from the salesmanship of the DoD research and development communities, combined with industry lobbying, in convincing the user that advanced capabilities could be delivered rapidly and cheaply.
This effort to sell a new program is the so-called "buy-in" syndrome, whereby cost, schedule, and technical risk are often grossly understated at the outset. In case after case, Pentagon decision-makers have endorsed programs entering full-scale development and even low-rate initial production before technical problems are identified, much less solved; before credible independent cost assessments are accomplished and included in program budget projections; and even before testing demonstrates the most risky requirements.
The process reviews mentioned previously have repeatedly found that we should "fly and know if it works and how much it will cost before buying." Several of these studies have recommended building and testing competitive prototype systems and subsystems before proceeding with full-scale development. In that same vein, the reviews have called for up-front funding of ambitious efforts to demonstrate technology maturity as a prerequisite for program approval. DoD's acquisition policy and directives have incorporated these recommendations.
Unfortunately, the rising operating and support costs of the existing forces and the fact that more acquisition programs are being pursued than DoD can possibly afford in the long term have combined to intensify the competition between programs for dollars. This, in turn, has led decision-makers to sanction low-balled program costs and overly optimistic schedules at the outset of major programs, most often at the expense of building and testing prototypes and critical technology risk-reduction efforts.
Schedule-Driven Versus Event-Driven Strategies
The MV-22 is a good example of a major program that encountered technical and cost problems in development yet attempted to hold to a schedule with little, if any, slack to address those problems. With the program nearly 20 years into development at the time, the urgency of replacing the aging CH-46s clearly drove decisions to severely reduce development testing to save dollars and stay on schedule.
The official investigation into the tragic accident that occurred in April 2000 drives home this point. That accident involved the crash of an MV-22 during an operational test mission and resulted in the deaths of 19 Marines. The official report from the investigation states that three test events were flown as part of the MV-22 EMD Flight Control System Development and Flying Qualities Demonstration Test Plan, investigating the phenomenon known as power settling.
As the report notes, the original plan called for 103 test conditions to be flown. In an effort to recover costs and schedule, the conditions to be tested were reduced to 49, focusing on aft center-of-gravity conditions that were thought to be most critical. Of the 49 conditions, 33 were actually flight-tested. Thus, roughly one-third of the planned test events were actually flown, and particularly critical test points were not flown at all.
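The report's fractions are worth restating plainly. This hypothetical Python sketch simply recomputes the test-plan numbers given above; the variable names are invented for illustration:

```python
planned_conditions = 103  # original MV-22 power-settling test plan
reduced_conditions = 49   # plan after cuts to recover cost and schedule
flown_conditions = 33     # conditions actually flight-tested

share_of_plan = flown_conditions / planned_conditions        # ~0.32
share_of_reduced_set = flown_conditions / reduced_conditions # ~0.67
print(f"{share_of_plan:.0%} of the original plan flown")
print(f"{share_of_reduced_set:.0%} of the reduced set flown")
```

Even against the already-reduced plan, a third of the conditions went unflown; against the original plan, more than two-thirds did.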
This series of events, culminating in another crash in December of that year, brought the program to a halt, nearly resulting in termination. In the end, the MV-22 program recovered, executed the full range of technical testing that should have been done previously, and now appears to be on its way to introduction into Marine Corps medium-lift forces, nearly 25 years after the decision to initiate the program.
A Defense Science Board Task Force in 2000 found that, in the previous four to five years, 80% of Army systems had failed to meet even 50% of their specific reliability requirements in operational testing. Nevertheless, some of these programs proceeded into production. The task force found similar problems with programs in other services.
More recent experience shows that, with all the streamlining of the acquisition process, the number of systems failing to meet reliability requirements continues to be a major problem. For example, of the 14 systems for which the Pentagon's director of operational test and evaluation submitted beyond low-rate initial production reports to Congress in 2004, two systems were found not operationally effective, and seven were found not operationally suitable. The trend in suitability results is disturbing, as more systems are going to the field, despite being unsuitable as tested.
Discipline Is Needed
The bottom line is that DoD's basic acquisition policies and directives are fundamentally sound. Unfortunately, many major acquisition decisions have not reflected adherence to them. In most cases, by the time technical and cost issues come to the fore, few, if any, of those involved in the process will admit they were wrong or cut their losses before the inevitable further cost growth and schedule slips. Fewer still demonstrate the much-needed discipline of making an example of program officials and contractors who have sold the department and the taxpayers a bill of goods.
By the time these problems are acknowledged, the political penalties incurred in enforcing any major restructuring of a program, much less its cancellation, are too painful to bear. Unless someone is willing to stand up and point out that the emperor has no clothes, the U.S. military will continue to hemorrhage taxpayer dollars and critical years while acquiring equipment that falls short of meeting the needs of troops in the field.
Any recommendations for fixing the much-maligned defense acquisition process should start with enforcing the existing directives and instructions that supposedly govern it. They are the product of numerous high-level, most often insightful, reviews of that process stretching back some 35 years.
Most knowledgeable observers of and participants in this process have already identified the major problems and proposed solutions for them. Pointing fingers at oversight agencies in the executive and legislative branches for the lengthy times from program start to delivery to the troops in the field does not address the root causes of those schedule slips. Neither does the cyclical invention of acquisition strategies with catchy buzzword titles come to grips with those root causes.
Hard-nosed discipline on the part of decision-makers at the front end of the process should rein in the appetite of the requirements community and preclude launching a major system development that rests on immature technologies and overly optimistic projections. Realistic independent cost estimates and technical risk assessments, developed outside the chain of command for major programs, should inform the defense acquisition executive about the viability of a new program's cost, schedule, and performance projections.
The decision authority should impose an event-based (as opposed to a schedule-based) strategy on the program to include meaningful and realistic "exit criteria" for each stage of development and production. Only if these criteria are demonstrated and satisfied should the program proceed to its next stage.
Of critical importance is demonstrating the maturity of the technologies embedded in a new system development prior to proceeding into accelerated development. Sufficient up-front funding and time for effective system and subsystem prototype demonstration and testing should be programmed to ensure an informed decision concerning the technical risk entailed in proceeding.
In summary, more informed management attention and discipline at the front end of the process should go a long way toward solving many of the problems plaguing defense acquisition. Nothing is new here. Time and again, major defense management reviews have reached the same conclusions. It is high time that decision-makers take seriously these findings, most of which are embedded in existing directives and instructions that govern the acquisition process, and make them an integral part of their program-review and decision process.
Mr. Christie retired in 2005, having served since 2001 as the director of operational test and evaluation, DoD's most senior official responsible for testing weapons. Over the past 50 years, he has served as a weapons analyst and head of the Air Force's weapon systems analysis division, director of the tactical air division in the office of the assistant secretary of defense for program analysis and evaluation, deputy assistant secretary of defense for general-purpose programs, and head of the program integration office for the under secretary of defense for acquisition. In 1989, he headed the operational evaluation division at the Institute for Defense Analyses.