On my first ship, battle drills were always the same. They began with a lookout reporting aggressive small boats off the port side. As sailors manned the machine guns, I would head to the bridge to coordinate their actions. A chief would hold up a sequence of pictures showing the small boats approaching. When weapons appeared in the pictures, I would give orders to open fire. Each mount would misfire in turn and be cleared by its crew. As the last misfire cleared, we would destroy the boats. Almost immediately, the ship would discover an enemy submarine and set general quarters. I would take off my headset, relieve the junior officer of the deck, and assist with torpedo evasion. The machine-gun crews would scurry to their own battle stations deep in the ship’s missile launchers and magazines. We did not have enough sailors to simultaneously man the main weapons and the machine guns, and we needed the main weapons for the next part of the scenario. If the boats had been real and more had appeared, I am not sure what we would have done. This training helped us pass our inspections, but I wondered if it really prepared us to fight.
For 25 years, the U.S. Navy has patrolled the seas virtually unchallenged. Now, it faces a competitor building the capability to contest U.S. dominance and possessing the ambition to do so. Combat readiness is always a priority, but it has become a surprisingly frequent topic of concern at the highest levels. The Secretary of Defense and the Joint Chiefs of Staff have testified before Congress about the damage continued budget reductions are expected to inflict on readiness. Extended deployments and budget cuts have already left the surface fleet struggling to meet requirements. Vice Admiral Thomas Copeman, former Commander of Naval Surface Forces, has repeatedly called attention to the Fleet’s readiness deficit. Admirals Bill Gortney and Harry Harris have explained how inefficient planning and insufficient resources have damaged readiness, and they have outlined a plan to repair it.1
In the surface force, remedying resource shortfalls and standardizing deployments are necessary but not sufficient to address our problems. A readiness system must create incentives to be ready. Effective incentives require discerning the ready from the unprepared. Making such an evaluation requires a system that finds the naked truth. The surface Navy’s inspection and self-reporting system fails to do that. It focuses on process instead of performance and is easily gamed. Only a system that accurately measures readiness and provides incentives for improvement will ensure our Fleet is ready to fight.
Sustaining a Culture of Quality
The armed forces define readiness as “the ability of military forces to fight and meet the demands of assigned missions.”2 In wartime, potential combat provides both the motivation for and the evaluation of battle readiness. Combat is the ultimate human competition. Recent research demonstrates the link between competition and effective management in business. The Economist summarized the findings:
Quality of management seems to be clearly related to the competitiveness of the markets businesses operate in. That is why American firms are especially well-managed: The country’s competitive business climate drives out badly run firms and rewards well-run ones. Multinationals that must cope with a wide variety of competitors tend to be well-run; public-sector firms, and family firms with strong political protection, do not.3
The stakes of armed competition outstrip any in business competition. The armed services tout a culture of ironclad responsibility and results driven by these stakes, but in peacetime no natural equivalent to the competition of combat exists. The surface Navy has seen few engagements in the past 70 years, and technology has tamed many of the sea’s dangers. As a result, warships are relatively insulated from the competition needed to sustain a culture of high standards.
Unused skills atrophy. To maintain proficiency, the services must design systems to promote readiness. Conversations about readiness must include maintenance, training, home time, and the Herculean efforts needed to coordinate and resource these activities. Providing resources, however, is not enough. A readiness system must create incentives for units lacking the natural motivation of impending combat to maintain their ability to fight and win.
The Surface Force Readiness Manual provides the outline.4 The surface Navy assesses its readiness in two ways: self-reported assessments and outside inspections. Both are necessary.
A unit must be able to self-assess. A commanding officer must know his ship’s weaknesses to correct them. No higher headquarters can constantly monitor all its units. Readiness self-reporting has existed for decades, particularly for materiel and supply issues. Ships send regular updates of their food, fuel, and water stocks. Casualty reports announce major equipment problems. Ships self-report completion of the periodic training requirements in the Fleet Exercise publications.
In recent years, systematic self-reporting has become an integral part of readiness evaluation. Higher headquarters use web-based tools like TORIS, CV-SHARP, and DRRS-N to obtain readiness data from across the Fleet. These tools develop overall readiness scores by tracking details such as completion of individual drill-card objectives by specific watch-team members for each drill occurrence within a given periodicity.
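To make that granularity concrete, the following is a minimal sketch of the kind of record such a tool might track and the periodicity check it might apply. The field names, hull number, and watch-stander are illustrative assumptions, not the actual TORIS or DRRS-N schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record shape; field names are assumptions, not the real schema.
@dataclass
class DrillCompletion:
    ship: str              # e.g., a hull number
    objective: str         # an individual drill-card objective
    watchstander: str      # the specific watch-team member evaluated
    completed_on: date     # date of this drill occurrence

def is_current(record: DrillCompletion, periodicity_days: int, today: date) -> bool:
    """A completion counts toward the readiness score only if it falls
    within the required periodicity."""
    return today - record.completed_on <= timedelta(days=periodicity_days)

# Usage: a completion 45 days old satisfies a 90-day periodicity.
record = DrillCompletion("DDG-101", "GQ-3: set material condition Zebra",
                         "FC2 Example", date(2014, 1, 15))
print(is_current(record, periodicity_days=90, today=date(2014, 3, 1)))  # True
```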
Despite the dramatic expansion of self-reporting requirements, these evaluations alone cannot provide the incentives needed to produce a ready unit. The College Board does not let test-takers grade their own SATs. For most, the temptation to cheat would be too great. Similarly, the Navy cannot expect commanding officers to provide completely accurate evaluations of their commands if such evaluations could affect those officers’ careers. Only a superhuman could provide absolutely honest results in such circumstances, particularly in the world of increasingly constrained resources our ships face. More detailed reports will not change this reality. Cumbersome reports, inadequate resources, and self-evaluation are a recipe for self-deception. A readiness evaluation system that creates incentives to maintain readiness must therefore rely on outside inspections.
The Cart Before the Horse
Ships face 466 separate inspections that grade materiel condition, administration, and crew ability.5 These evaluations can involve inspection of paperwork, equipment operation, supply inventories, written examinations, and drills. These measures fall into two categories: process and performance. Process measures verify whether ships conduct their programs in accordance with established standards by reviewing the records those programs generate. Performance measures evaluate a ship’s capabilities through direct observation of drills and equipment operation.
Each method has strengths and weaknesses. Drills and evolutions directly demonstrate a ship’s capability but show only a snapshot in time. They do not provide direct information about compliance with rules for program administration. Because drills are live performances, they are subject to luck.
Procedural reviews provide information about prior efforts but say little about current condition. If a command performs poorly during drills, procedural reviews can provide evidence of preparation, but procedural records can be forged.
Ultimately, combat is about performance, not process. Readiness, too, must be about performance. While the right process increases the likelihood of successful performance, process is only useful to the extent that it actually produces better performance. Process matters, but a successful readiness inspection system should prioritize performance. Alas, the current system does not.
Inspections today focus on process at the expense of performance. As a result, inspectors can pass ships that may not actually be ready. The recently discarded “Train the Trainer” system provided an egregious example. Inspectors evaluated ships’ training teams rather than actual watch-standers. Ships could pass some inspections even if watch-standers could not perform basic tasks as long as the training team appeared competent to correct the problem. In this system, process triumphed over performance. Ships could receive an “up check” without being ready.
While current inspections evaluate watch-standers, many features remain that allow ships to pass inspections while lacking the resources or training needed for real operations. Too many inspections focus on administration and neglect performance. Inspectors spend countless hours evaluating gundeckable paperwork and far less time evaluating actual ability. Two examples illustrate this imbalance: Maintenance and Materiel Management (3M) inspections and the Conventional Ordnance Safety Review.
During 3M certifications, inspectors spend most of their time on paperwork. The most challenging part of the inspection, however, is the maintenance “spot-checks” section, in which sailors re-perform previously documented maintenance while inspectors watch, a genuine performance evaluation. Yet while spot checks are better than pure paperwork reviews, they are not effectively linked with materiel inspections. Since the purpose of maintenance is to maintain equipment, materiel inspections are the ultimate performance evaluation of maintenance-system implementation.
Similarly, the Conventional Ordnance Safety Review requires extensive paperwork reviews. Inspectors spend significant time examining the documentation for ordnance-handling qualifications. They do not observe sailors actually handling ordnance. While proper administration of programs is important, its presence does not prove a unit’s ability to perform. Moreover, because paperwork can easily be faked, administrative review without performance review is essentially meaningless.
Gaming the System
In the limited situations in which inspectors do evaluate performance, their methods are often inadequate. The Integrated Training Team (ITT) scenario is the most complex drill any ship endures. In these drills, ships simulate battle and must simultaneously exercise their ability to fight, navigate, and respond to damage. Such a scenario should be the ultimate test of combat readiness, but poor system design allows ships to game the process. Type commanders specify the elements the inspection drill must include, but ships write their own packages. Because the required items are standard, ships write one drill package and rehearse that exact scenario countless times. Ships are expected to train this way: early in the training cycle, the crew meets with its inspectors to discuss drill-package design for the cycle. The result is obvious. Instead of actually evaluating a ship’s ability to fight, the ITT scenario evaluates the crew’s ability to stage-manage a show. The same problem exists with force-protection drills.6
Worse, because ships design their own drills, they can hide their weaknesses. My first ship designed its scenario to conduct specific elements prior to setting general quarters because the ship did not have enough sailors to man all weapons while at battle stations. Almost by definition, that ship’s combat capability was degraded. The ship was not at fault for its undermanning, but in seeking to pass the inspection by hiding the problem, it created greater risk for itself. It also denied higher headquarters the feedback needed to fully understand the impact of manning shortfalls.
The Navy has recognized that ships gamed their performance-based materiel inspections. The Board of Inspection and Survey (INSURV) now prevents ships from using outside manpower and cannibalizing other ships for parts. Ships must “come as they are.”7 The Navy made these changes because the previous rules failed to accurately assess the Fleet’s materiel condition. The same problem still exists in the evaluation of training.
Toward a Better Methodology
An effective readiness evaluation system must solve two problems. It must accurately evaluate the capability of a unit, which requires minimizing opportunities to cheat, and it must create an incentive for that unit to become more ready. Four principles will help a system accomplish these ends.
1. Maximize the inspection of performance. Combat is about performance, and so must be readiness. Measures of performance must align as closely as possible with the actions required in actual operations. Materiel inspections have begun this transition. INSURV has replaced the “fire all guns” requirement with a gunnery detect-to-engage sequence. Training inspections must follow. Process evaluation should cover only those aspects that inspectors cannot evaluate through performance. We must remember that the purpose of process is to produce performance, and so performance is the ultimate judge of process. Because performance is subject to luck, such a shift will require the opportunity for second chances. Ships may need multiple opportunities in the course of an inspection to demonstrate their proficiency. A single performance failure does not necessarily demonstrate inadequate readiness. Repeated failure does. Similarly, failure due to insufficient manning or other conditions outside crew control should be grounds for withholding a ship’s certification but not for punishing the command.
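The case for second chances is statistical. With assumed numbers for illustration: suppose a proficient crew executes a given drill successfully 90 percent of the time. Luck alone will fail it on a single observed attempt one time in ten, but two independent failures occur only one time in a hundred:

```latex
% Illustrative arithmetic with an assumed per-drill success rate p = 0.9.
P(\text{one failure}) = 1 - p = 0.10, \qquad
P(\text{two independent failures}) = (1 - p)^2 = 0.01
```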
2. Use surprise appropriately. In combat, a ship cannot pick where she takes a hit. The crew should not be able to do so in an inspection. While inspectors should clearly communicate requirements and expectations, the ship’s crew should not have specific foreknowledge of drill scenarios. This idea is not new. Under the Cold War Refresher Training system, inspectors picked and ran the drills. If an inspecting agency desires to evaluate a crew’s ability to train itself, it should provide desired drill elements to the ship at the in-brief and require a drill package to be submitted within 24 to 48 hours. The Nuclear Propulsion Examination Board uses this system to evaluate both training teams and watch-standers.
3. Use self-reports for information, not evaluation. To ask a person for information and then use that information to attack that person encourages duplicity the next time, yet current applications of computerized readiness-reporting systems do exactly that. A ship must self-evaluate to succeed, but if higher headquarters want the results, they must use them to help. Reporting of inventories and materiel condition improves the supply and engineering process. Reporting the status of manning and school requirements provides feedback on training and personnel-management systems. Unfortunately, computerized readiness tools cannot currently make the best use of the information they contain. DRRS-N lacks the ability to look horizontally to find systemic problems. The system can report that a specific ship is missing graduates of a specific service school, but it cannot easily determine the most common shortfall across the Fleet. Such computerized systems should be modified to use the information they already contain to find system-wide discrepancies, so headquarters can focus on their correction.
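As a minimal sketch of that missing horizontal view, the snippet below aggregates hypothetical self-reported shortfalls across ships to surface the most common one. The record fields, hull numbers, and shortfall labels are invented for illustration; nothing here reflects the actual DRRS-N data model.

```python
from collections import Counter

# Hypothetical self-reported shortfall records; names are illustrative only.
reports = [
    {"ship": "DDG-101", "shortfall": "school: Aegis Fire Control"},
    {"ship": "DDG-104", "shortfall": "school: Aegis Fire Control"},
    {"ship": "CG-63",   "shortfall": "manning: Gas Turbine Systems Technician"},
    {"ship": "DDG-110", "shortfall": "school: Aegis Fire Control"},
]

def fleet_wide_shortfalls(reports):
    """Aggregate ship-by-ship reports into the 'horizontal' view the
    current vertical, ship-by-ship reporting tools cannot easily produce."""
    return Counter(r["shortfall"] for r in reports).most_common()

for shortfall, count in fleet_wide_shortfalls(reports):
    print(f"{count} ship(s) report: {shortfall}")
```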
Leaders should seriously consider reducing reporting on shipboard training. Such reporting imposes significant administrative burdens while providing no externally evaluated information about a ship’s capabilities. Do fleet commanders need to know which apprentice seaman was the nozzle-man on Hose Team One of the USS Nimitz’s Repair Locker Two? Current systems demand manual input of such specifics. These detailed requirements limit commanding officers’ opportunities to set their own priorities and innovate. Senior leaders make resource decisions about supply, engineering, and manning. They do not make decisions about the scheduling of unit-level training evolutions. Commanding officers’ responsibilities should be left to commanding officers.
4. Align incentives to promote readiness. Inspecting performance, using surprise, and relying on outside evaluators will help the readiness system find the naked truth. To motivate success, the Navy must recognize excellence and sanction incompetence. Originally, the Battle ‘E’ went to the ship with the Fleet’s best gunnery score. Today, the Battle ‘E’ recognizes the “best” unit in a competitive group, but to be eligible, that unit must first complete a ponderous series of process-related requirements. The Navy should create a new award based purely on performance-based readiness scores to recognize distinction. This award should require no work for nomination beyond completing necessary inspections.
The Navy must also demonstrate that it takes readiness standards seriously. Since October 2011, the Navy has fired 23 surface-warfare commanding officers.8 Because the Navy releases limited information about firings, determining the causes from open sources is difficult. Three reliefs appear related to readiness, but none appears to have been caused by a failed inspection. On its face, this small number seems to suggest that almost all Navy ships are ready; a deeper examination suggests the current system may be deficient. Initial reports on one CO’s firing cited poor inspection performance after a deployment, but later information showed the CO’s highly unusual conduct during deployment was the principal cause for his relief.9 One CO lost his command for “deficiencies in operational preparedness, situational awareness, and tactical proficiency” while deployed.10 Effective readiness inspections might have caught these problems during the training cycle. The other CO was fired for “poor performance including his failure to establish a culture of procedural compliance within the command and his inability to correct identified deficiencies as directed” during the training cycle.11 Despite a relief rationale that specifically cited the command culture, the ship was certified to deploy on time immediately after the CO’s removal. Could such a culture be changed so quickly? In any case, the Navy must recommit to holding commanding officers accountable for the combat readiness of their ships lest it replicate the problem of the late-1950s Army, in which the only way to get fired was to do something that “embarrass[ed] one’s institution,” such as an extramarital affair or other personal misconduct.12
Essential Reforms
For the first time in two and a half decades, the U.S. Navy cannot take for granted its control of the sea. Readiness concerns top the priority list for senior defense leaders. Thus far their discussion has focused on securing additional time and resources for training and maintenance. While these efforts are critical, they are not sufficient. Improving readiness requires improving the system used to evaluate readiness. An effective readiness system must create an incentive for units to prepare for war even when war is unlikely. To do so, the system must uncover a unit’s true level of preparation. Regrettably, the surface Navy’s current inspection scheme fails to do so. Because it focuses on process at the expense of performance, and because its design allows for easy gaming and cheating, the current system can find ships ready when they are not. Reform should focus evaluation on performance, incorporate surprise, use self-reporting appropriately, and align incentives. Importantly, the Navy can implement these essentially procedural reforms at little to no cost. Indeed, the only additional cost the Navy might incur would be that of making ships found unready ready to fight. We should be eager to bear that cost.
1. ADM Bill Gortney and ADM Harry Harris, USN, “Mind the Gaps,” U.S. Naval Institute Proceedings, vol. 140, no. 5 (May 2014), 36–40.
2. Department of Defense, Joint Staff, Joint Publication 1: Doctrine for the Armed Forces of the United States, March 2013, www.dtic.mil/doctrine/new_pubs/jp1.pdf.
3. “Schumpeter: Measuring Management,” The Economist, 18 January 2014, 69.
4. Department of Defense, Commander Naval Surface Force U.S. Pacific Fleet/Commander Naval Surface Force Atlantic, Surface Force Readiness Manual, March 2012, www.dcfpnavymil.org/Library/tycom/3502.3%20Surface%20Force%20Readiness%20Manual.pdf.
5. Vago Muradian, “Interview: Adm. Bill Gortney, US Fleet Forces Command,” Defense News, 30 April 2014, www.defensenews.com/article/20140430/DEFREG02/304300028/Interview-Adm-Bill-Gortney-US-Fleet-Forces-Command.
6. LT (JG) Matthew R. Hipple, USN, “‘Choreographed’ Training is Dancing with the Devil,” U.S. Naval Institute Proceedings, vol. 138, no. 4 (April 2012), 12.
7. Department of Defense, U.S. Fleet Forces Command Public Affairs, “Navy Implements Changes to INSURV program,” 3 January 2013, www.navy.mil/submit/display.asp?story_id=71318.
8. Cid Standifer, “Updated: Navy CO Firings,” USNI News, 3 December 2013, http://news.usni.org/2013/12/03/navy-co-firings, lists firings through December 2013. Military.com’s “Relieved of Command” page (www.military.com/topics/relieved-of-command) links to news stories documenting firings in 2014.
9. Seth Robson, “3rd Cowpens CO Fired Since 2010; CMC Relieved,” Stars and Stripes, 11 June 2014, www.military.com/daily-news/2014/06/11/3rd-cowpens-commander-fired-since-2010-cmc-relieved.html. David Larter, “Cowpens’ bizarre cruise,” Navy Times, 4 August 2014, www.navytimes.com/article/20140804/NEWS/308110012/Cowpens-bizarre-cruise.
10. “MCM ship CO sacked for on-the-job failures,” Navy Times, 7 November 2013, www.navytimes.com/article/20131107/NEWS/311070028/MCM-ship-CO-sacked-job-failures.
11. Sam Fellman, “Frigate CO Fired for ‘poor performance,’” Navy Times, 15 February 2013, www.navytimes.com/article/20130215/NEWS/302150315/Frigate-CO-fired-8216-poor-performance-.
12. Thomas E. Ricks, The Generals: American Military Command from World War II to Today (New York: Penguin, 2012), 213–14.