New FitRep System Doesn't Cut It

By Captain T. Q. Donaldson, U.S. Navy

Many officers were affected by a similar transition period 22 years ago. They felt that their careers either ended prematurely or were redirected by an inequitable and inconsistent interpretation and implementation of the then-new policies. I have had many conversations with both reporting seniors and their constituents, who now express concern about the "other guy" not playing by the new rule book. Some things may have changed in the interim, but the concern and anxiety associated with implementing change remain a constant.

Today, we are beyond the point of discussing whether a complete overhaul was required. Considerable thought and effort have been invested in the new process. More than 60 officers and sailors in various working groups developed six different approaches and proposals. Six months of education, advertisement, and advance notice were delivered to an organization that has a habit of following orders—there should have been no doubt that the new year brought the Navy a new performance evaluation system. Nevertheless, only a limited number of people know how well the new system is working so far.

Under the new system's initial rules of engagement, the key to recalibrating the FitRep/evaluation verbiage and professional marks was the return-to-sender authority granted to the Bureau of Naval Personnel (the Bureau) to return those reports that appeared inflated. From the outset, this placed the Bureau in a no-win situation—expected to police and enforce the new system without telling commanding officers how to evaluate their people. The fleet looked for feedback on what constituted an acceptable report, but this was something the Bureau could not give without tipping its hand and revealing its minimum acceptance standards, which inevitably would drive up the next round of evaluations. Still, the new policy might have succeeded—had the Bureau stuck to the plan and allowed the process to work. Early on, however, the return-to-sender policy was revised. The driving concern behind this change was an unwillingness to embarrass a reporting senior who might have unknowingly written a report with "inflated" marks and counseled the individual reported on, only to have the report returned by the Bureau as "not properly written."

This lack of an enforcing mechanism to change the mind-set of the entire work force leads to the following observations:

  • The system is based on a standard that is not being applied uniformly throughout the Navy. This became obvious when the January and February reports were received by the Bureau. A significant percentage of these reports were identified as inflated, and should have been returned. The reports were expected to be on the high end of the scale, because they were primarily detaching reports and some commanding officers were testing the acceptance scale, knowing they'd get a second chance if the reports were returned as inflated.
  • The inconsistent standard persists, because the Bureau is the one command holding to the letter of the law in its own in-house counseling and evaluations. Meanwhile, I received calls every week from individuals concerned that a single 4.0 performance trait mark would be the discriminating factor at a future selection board.
  • The Bureau continues to review all reports for inflation but is powerless to enforce the new philosophy or standards. The mechanism that could have made the new system work has been replaced by an administrative slap on the wrist. Countless man-hours are being spent as inflated reports are identified, but then filed in the system without serious repercussions to the reporting seniors. The Bureau went to great lengths to improve the old system, inform and educate the Navy on the new requirements, and then implement a quality-control mechanism to enforce the new standards. For those commanding officers who refused to adhere to the new standards, the return-to-sender policy was in place to educate them, but the process was never allowed to work.
  • The new process strictly prohibits ranking and handwritten comments—two of the best tools provided to past selection boards. Under the old system, these tools were useful precisely because everyone received all As and was recommended for early promotion. In fact, the old system worked because a B or a regular promotion recommendation became a clear negative signal to a selection board. With the high quality of individuals in the Navy today and promotion limits set by law, any discriminating information—good or bad—was greatly appreciated by selection board members.
  • On 15 April, the new reporting senior's cumulative average was added to the performance summary report. It will take at least two reporting cycles to establish a track record of valid reference points. The recent addition of this information will go a long way to level the playing field, by identifying the easy graders. It is absolutely necessary in any organization that is not being held uniformly accountable to a single grading standard. Until our senior officers establish their individual grading track records, the selection boards will have to judge the relative value of the report's summary trait average the way they do now—on the professional reputation of the reporting senior.
  • The new form has reduced the space available for performance documentation by 50%. The smaller space provided for specific comments and documentation and the stricter guidance for format and style will reduce the number of adjectives used and eliminate the flowery introduction and summary paragraphs (a definite improvement). Block 41, comments on performance, remains the best location to provide information to future selection boards on the promotability of an individual. Only time will tell whether we have reduced that block too much.

We should not lose sight of why we have a performance evaluation system—which, according to some, "discourages risk-taking, builds fear, undermines teamwork and pits people against each other for the same rewards." The value of the process is threefold: it provides documented feedback during mandatory counseling on both individual and relative performance; it is the primary information vehicle by which selection boards determine an individual's promotability; and it provides valuable information for proper career management and detailing/placement of individuals. As long as advancement numbers are controlled by the Defense Officer Personnel Management Act (DOPMA) flow points and percentages, we should focus our efforts on an evaluation system that provides better feedback to the individual, enables selection boards to identify and select those best qualified for promotion, and assists the Bureau in matching the best qualified to available or upcoming assignments.

The following recommendations are provided to ensure that the Navy's performance evaluation system in the 21st century is truly an improved process:

  • Reinstate the Bureau's authority to enforce the new standards and return inflated reports. Hold commanding officers accountable for their inability or unwillingness to support the new process. If their reports get caught in an endless round of return-to-sender, make it clear that the reporting seniors are responsible for the reports not reaching selection boards in time. Prevent these commanding officers from submitting letters to the board presidents as a means of circumventing the system. Take the rest of 1996 to get the message across—do whatever it takes to convince the organization to believe in the new standards and support the new system. Use 1997 to make believers out of the non-believers. Starting with the FY98 commander and E-8 selection boards, do not allow inflated reports to be considered by the board. Allowing for records to be pulled six weeks prior to a board convening, we have five months for the commander reports, and three months for the E-8 reports, between the end of the evaluation period and the time the reports need to be accepted at the Bureau. Identify inflated reports by social security number in a message to the reporting senior, and keep statistics on the number of returned reports. If commanding officers jeopardize individuals in zone for FY98 boards, notify their reporting seniors of that fact—by message—at least one month prior to the board convening. It's worth repeating: do whatever is necessary to recalibrate the grading scale so that 3.0 is the standard and is promotable.

  • Improve the performance traits grading options by allowing a one-tenth grading scale; allow reporting seniors the option to mark in tenths instead of limiting the marks to whole numbers. The wider numerical range of performance trait grades will result in more accurate feedback to the individual and assist selection boards in the "crunch zones."
  • Allow ranking of individuals in the same reporting group and control the process in the same manner in which we now limit the promotion recommendations. We are in a competitive business with imposed limitations on the number of individuals allowed to be promoted to a higher grade. When 40% of the non-selectees to captain either are in command or are post-command (as was the case in one unrestricted line community this year), the selection boards need all the help they can get in the monumental task of selecting those best qualified for promotion. Permit a reporting senior the option of identifying the top performers and devise a control mechanism which prevents them from ranking more than one officer as "my number one officer."
  • It's often been said that a single good evaluation report can't ensure that you will get promoted—but a poor report can ensure you won't be selected. We should view the 1996 reports as only a half-successful attempt to transition to a new performance evaluation system and should brief them as such in all future selection boards. Leave it to the board members to glean whatever useful information they can from those reports and—over the long run—consider a one-year delay in implementing the new standards a relatively small price to pay for recalibrating a skeptical military corporation. If we consider 1996 to be a neutral year, it won't be until the 1999 selection boards convene that the new evaluation reports will document more than 50% of an individual's current grade performance. We need to get it right before then.

The new performance evaluation system moves us toward the goals of: controlling grade inflation; grading against clear standards; emphasizing teamwork and contributions to command missions; establishing mandatory career counseling; and simplifying the report/counseling forms. We made a giant step forward with its implementation in January, followed by two large half-steps backward when the organization failed to comply completely with the new directives and the Bureau lost its authority to return inflated reports. For those who think that a second chance is unnecessary—or that these recommendations are too difficult to implement—think again. The need to motivate all sailors and officers throughout their careers while simultaneously identifying those best qualified to advance demands the very best performance-evaluation process we can devise.

Captain Donaldson was the senior detailer and community manager for the Meteorology and Oceanography (MetOc) community from 1994 to 1996. He now is attending the National War College.
