No, it isn’t true that the fitness reports of naval officers put 90% of them in the top 10%, but virtually everyone agrees that there is a tendency to inflate evaluations. The mythical Lieutenant Commander Greatjob, whose fitness report is shown here and on page 52, appears to be the Navy’s finest officer, but four men on board his ship, the equally mythical cruiser Scranton, did even better than he. As those involved in selection and assignment (inset) have learned, part of the problem stems from the form itself.
The fitness report stands at the center of the U.S. Navy’s officer personnel system. Whenever decisions are made about promotion or assignment, they are based on fitness reports accumulated over an officer’s career. Naturally, the quality of such decisions depends on the quality of the reports themselves and on the astuteness with which selection boards and assignment officers interpret them. Fitness reports can be viewed as letters from a series of commanding officers to future selection boards and detailers, letters that are structured so that later readers can easily assimilate reports that span an entire career. There is no question that the Navy must have such a reporting system so that periodic evaluations can be recorded in a relatively systematic way.

But granting the essentiality of periodic reports of fitness is not the same as saying that the current fitness report form has been designed to do the job effectively. In fact, a close look would reveal that the present report is really a poor evaluation instrument because it places unrealistic demands on reporting seniors and fails to serve the needs of its ultimate users. We should probably be suspicious anyway of a one-page form that requires 43 single-spaced pages of instructions.
But before focusing on the report itself, we should establish the essentials of fitness reporting by examining those decision points in the personnel system where the report will be used—selection and assignment. In both cases, the decisions involve comparisons about expected future performance. How would Commander X perform as a captain? How would Lieutenant Y perform in a job he has never had before? And perhaps more importantly, how would their performance compare with that of other officers then being considered?
To be useful, the fitness report must therefore be contrived to provide reliable information on which to base such comparative and forward-looking decisions. In the present report form, there are attempts to grade an officer in comparison with officers of the same rank in the same competitive category with the same time in grade. The report requires marks on at least nine specific aspects of performance (41 when the worksheet is used), plus assessments of six personal traits. Only once does the reporting senior provide any clue about his standards, by disclosing a small current sample of the officer’s competitive population. Readers of fitness reports doubtless consider this “mission contribution” comparison as the most useful information in the fitness report.
As for predictions about the future, the current report asks for recommendations for promotion and for desirability in certain types of important billets. With all this information, together with lengthy comments, one should expect the fitness report to do an admirable job of supporting important personnel decisions. Why, then, is the fitness report form so unsatisfactory? Why does it result in a highly skewed distribution?
There are three reasons:
► The report asks too many minor questions instead of a few major ones.
► The report requires reporting seniors to make extraneous judgments that add uncertainties to the grades they assign.
► The fitness report form offers reporting seniors no incentive to resist the many pressures to inflate such grades.

[Facsimile of Lieutenant Commander Greatjob’s fitness report form, with graded blocks for performance factors, personal traits, the mission contribution summary, promotion and desirability recommendations, and the reporting senior’s signature.]
The report is a safe haven for anyone inclined to believe that a large number of minor decisions is an effective substitute for a few major decisions. The report requires judgments on a long series of performance factors and personal traits, with the implication that each such factor and trait has equal weight. This is nonsense, of course. After all, it is a human being with particular traits in a unique combination that is being selected or passed over, assigned to this billet or that. Happily, the report’s fine-grain evaluation is usually ignored by both writers and readers of reports. Reporting seniors are generally under the influence of the “halo effect” so that marks on individual factors and traits reflect a general assessment of performance and character. For boards and assignment officers, the marks for factors and traits are of less significance than information from other parts of the report. Yet some damage may be done by conscientious reporting seniors when they mark one or two factors or traits lower than the rest without being able to indicate whether such a shortcoming is outweighed by an excess of some other quality that may or may not appear on the fitness report form. The mischief here is caused by confusing what reporting seniors ought to consider with what they ought to report.
By asking for marks on 47 specific factors and traits, the report serves notice to reporting seniors that they are not being trusted to consider and to weigh all the elements that contribute to an officer’s character or performance. Well, the committee that designed the report has not considered all the elements either. And whenever the list is lengthened by well-intentioned crusaders to require a mark about the latest “front burner” aspect of officer performance, the utility of the report is corrupted even more. For better or for worse, the officer being reported on is a whole, indivisible person. The fitness report provides the reporting senior an opportunity to cast his ballot for or against an officer, and he should be allowed to do so clearly and simply.
The second basic criticism—that the report creates uncertainty by requiring reporting seniors to make extraneous judgments—is even more serious. To say that extraneous judgments are “required” needs some explanation. Most of the marks in the fitness report are intended to place an officer in a distribution pattern with his peers. To do this, a reporting senior must make three judgments, two of which are obvious, the third less so. The two obvious judgments are these: How are these 47 qualities distributed among lieutenants in the Navy? Where in these distributions does Lieutenant X fit?
Reporting seniors are presumed competent to judge an officer’s performance, but it is questionable how well they can judge the performance of all the officer’s peers or how well they can compare performances and traits observed at different times. It is perfectly true that selection boards and assignment officers must make such judgments, and perhaps it seems only sporting to allow reporting seniors to have a try as well. But if a reporting senior really does not know how all lieutenants in the Navy perform a certain function, what can he do? He can guess. But because such guesses will vary, a hidden uncertainty is created, not necessarily about Lieutenant X’s performance but about his reporting senior’s perception (guess) about how all these factors and traits are distributed in lieutenants. But the most damaging uncertainty is created when the reporting senior tries to make a third judgment—his attempt to “second-guess” the corresponding distributions formed in the minds of all the other reporting seniors in the Navy, knowing that the report he signs will be considered against the background of fitness reports from a host of others. It is this guessing about the way that the other reporting seniors are guessing that really invalidates the well-intentioned attempt to base fitness reporting on general comparisons.
To reduce some of the uncertainty, it would not be difficult to provide the readers of fitness reports with a clue about each reporting senior’s standards. Each report on its arrival in the Bureau of Naval Personnel (BuPers) could be given a number corresponding to the percentile of all previous reports signed by that reporting senior on officers of the rank of the officer being reported on. The computer has a more reliable memory than he does.
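The percentile bookkeeping proposed here is simple enough to sketch. The following is a minimal illustration, assuming a numeric grade scale and hypothetical names; nothing in it reflects actual BuPers practice:

```python
from bisect import bisect_left

def report_percentile(previous_grades, new_grade):
    """Percentile standing of new_grade among all grades this reporting
    senior has previously assigned to officers of the same rank
    (0 = lowest he has ever given, approaching 100 = highest)."""
    if not previous_grades:
        return None  # no history yet for this senior and rank
    ranked = sorted(previous_grades)
    strictly_below = bisect_left(ranked, new_grade)  # count of lower grades
    return round(100 * strictly_below / len(ranked))

# A reporting senior who grades everyone near the top of the scale:
history = [3.8, 3.9, 3.9, 4.0, 4.0, 4.0]
print(report_percentile(history, 4.0))  # 50 -- a "perfect" 4.0 is only his median mark
```

The point of such a number is exactly the one the author makes: from a chronic inflater, a top mark reads as merely average, so readers learn the senior’s standards rather than his generosity.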
In addition to the trivia and the guesswork involved in writing fitness reports, there are pressures to inflate the grades and no incentive for reporting seniors to resist. The pressures arise from the understandable desire to “protect” his people and indeed to “reward” them for their efforts under his command. In any commercial enterprise, he would have at least an annual opportunity to influence a salary review. In the Navy, the fitness report is the nearest equivalent opportunity. But when a reporting senior uses a fitness report as a reward, it may lose its validity as an evaluation. Unfortunately, there is no incentive for him to follow the instructions and guidelines strictly, to give an average officer an average grade, for example. To do so, he assumes, would deny the officer any opportunity for promotion. It is even possible to “fudge” the relatively useful mission contribution summary where, in effect, the reporting senior is asked to sketch a current sample of his distribution curve. According to the instructions, the evaluation is supposed to be made in comparison with all of the officer’s peers that the reporting senior has known, but the summary tabulates only those reports submitted at the same time as the one being written. Unless reports are being checked to ensure compliance with the instructions on the summary, and reporting seniors know this, they could report a population that would be unduly favorable to the officer being reported on.
What might be done to overcome these three shortcomings? Should BuPers issue even more instructions anyway? Two examples of obvious error have persisted for years. On the basic yardstick that is to be used almost throughout the report, there are two marks, “E” and “F,” that enjoy the same meaning and the identical percentile value, but nowhere is there any explanation why there are two opportunities to assign the same mark. And the next point on the scale to the right of the identical marks is valued as the 30th percentile when the 70th percentile is obviously intended. But reporting seniors hardly ever use these marks, so the errors go undetected and uncorrected.
Earlier, it was noted that the report asks too many minor questions, not enough major ones. What questions should be asked? Indeed, there is really only one question that must be asked: How well is this officer performing in his present job? While this question does not exhaust the information that selection boards and assignment officers would like to receive from reporting seniors, the answer to the single question on present performance can at least provide a basis for judgments by both selection boards and assignment officers.
Selection boards might additionally entertain the answer to the question: How well would this officer perform at the next grade? We could expect the answer to correlate closely with his present performance, but the “next grade” question requires a separate judgment. (Perhaps honest answers would help validate the “Peter Principle” in the Navy.)

The real task of assignment officers is to fill billets, but they also strive to keep as many officers as possible competitive for promotion and to assign officers of the highest potential to billets that will provide them maximum useful experience. To assist assignment officers in identifying officers of the highest potential, reporting seniors might be asked a third question: What is the highest grade in which this officer is likely to be effective? There is no such question now; the answer must currently be inferred from the general tone of the comments in the report.
Finally, assignments of any importance are usually to positions that the officer has never before served in. There is no past track record, so assignment officers might appreciate some estimate of how an officer could be expected to perform in a few important billets. Yet this is probably less useful than it would appear, because assignment decisions are driven more by the need to fill the billet or to provide the officer with essential experience than by the fact that he might perform well. But assignment officers must be alert for any indications that an officer is thought to be clearly unsuitable for a certain category of positions—command at sea, for example. Therefore, it is reasonable to ask additional questions about how an officer would probably perform in a few critical billet types, or better still, to identify only those billets in which he could be expected to perform poorly. (Yet if the grades in the report are being combined to form any sort of simple score, it may not be fair to include such estimates in the averaging.)
Although it is a simple, relatively straightforward problem to identify the few questions that ought to be asked of reporting seniors, a harder problem remains—what standards and what scales should be used for marking? The framers of the present report form have not really solved this crucial problem.
The officer being graded must be evaluated against some reasonable, fair standard, but which one? Without perhaps exhausting all the possibilities, at least the following standards are available:
► All the officer’s peers in the Navy
► All the officer’s peers that the reporting senior has known (the present standard)
► All the officer’s peers currently being graded (the standard for the comparison summary)
► All the officers (of any rank) concurrently being graded
► The reporting senior’s own performance at the rank of the officer being graded
The first two standards listed share the drawbacks mentioned earlier. They are fair but uncertain. The third standard (peers then being reported on) is much more certain but could be unfair if the comparison included only one or two other officers, and would be meaningless if there were no other peers at all.
[Facsimile of the narrative comments section of Greatjob’s fitness report.]

The fourth possible standard (all officers then being reported on) appears at first to be grossly unfair because it usually involves a comparison of officers of different ranks. Certainly it is unfair to expect an ensign to perform as effectively as a lieutenant or a lieutenant as well as a commander. How then is it possible to compare the performance of officers of different ranks? The answer lies in the presumptions embedded in the question. We expect the commander to perform better than the lieutenant, and we can design a grading system that takes this expectation into account. In effect, it is possible to compare officers of different ranks by observing how their relative performance differs from their relative seniority. If we were to compare the performance of five officers whose ranking in performance matched their seniority, we would be justified in presuming that within this particular sample the performance of each could be called average. But if their relative performance was observed to be just the opposite of their seniority, then we might say that the performance of the senior officer was well below average, that of the second senior was below average, that of the third senior was average, that of the fourth senior was above average, and that of the junior officer was well above average. Mechanics of such a system are simple: all the officers being graded can be divided into a number of seniority groups, probably four or five. Then the reporting senior would rank all officers according to the characteristic being graded (present performance, expected performance in the next grade) and divide them into performance groups of the same size as the seniority groups. Officers whose performance group corresponds to their seniority group are marked average, but when they are in a performance group that is higher or lower than their seniority group, they would be marked above average or below average. (The use of four groups provides a seven-point scale, the use of five groups, a nine-point scale.) Thus, without regard to an uncertain general standard, a reporting senior could focus on how well his officers performed with respect to each other. And whenever he graded someone “up,” he would have to grade someone “down.” This might be hard to do, but it would avoid the chronic inflation that exists now.

The real objection to a grading system in which the standards for an officer’s performance are established entirely by the performance of his shipmates is that an officer might be penalized if he found himself among a group of particularly high performers. Certainly some wardrooms are better than others, and nuclear submarines may have more top performers than a naval district staff. To weight such possible differences will take the good judgment of selection boards and assignment officers.
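The mechanics of the grouping scheme described above can be sketched in a few lines. This is only an illustration, with an assumed tuple layout, an assumed function name, and tie-free rankings; it is not any form the Navy actually uses:

```python
def group_marks(officers, n_groups=4):
    """Mark each officer by comparing his performance group with his
    seniority group.  `officers` is a list of (name, seniority_rank,
    performance_rank) tuples, rank 1 being most senior / best performer.
    Returns {name: mark}: 0 = average, positive = above, negative = below.
    Four groups give a seven-point scale, five groups a nine-point one."""
    total = len(officers)

    def group_of(rank):
        # Split ranks 1..total into n_groups equal-as-possible bands.
        return (rank - 1) * n_groups // total

    return {
        name: group_of(s_rank) - group_of(p_rank)
        for name, s_rank, p_rank in officers
    }

# Five officers whose performance order exactly reverses their seniority:
wardroom = [("A", 1, 5), ("B", 2, 4), ("C", 3, 3), ("D", 4, 2), ("E", 5, 1)]
print(group_marks(wardroom, n_groups=5))
# {'A': -4, 'B': -2, 'C': 0, 'D': 2, 'E': 4}
```

Because the seniority and performance groupings partition the same set the same way, the marks always sum to zero: every “up” is offset by a “down,” which is precisely the anti-inflation property the author is after.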
It might be found that it is too complicated to compare the probable performances in the next higher ranks of officers of different grades. If so, a simple way to ask reporting seniors about the relative promotability of officers of different grades is to ask them to list their officers in the order in which they should be promoted to the next rank.
The final standard listed earlier (the reporting senior’s own performance at the rank of the officer being graded) would give the reporting senior a reason to begin every evaluation with his pencil at the middle of the scale (“About the same”) instead of at the highest end as is done so often now. Such a standard suffers the drawback of being invalid when the reporting senior and the subject of the report are from different competitive categories, but generally the perspective of one’s own performance at a comparable grade is likely to produce more of a bell-shaped curve than we see now.
Assessing the performance and potential of his officers is one of a commanding officer’s most demanding responsibilities. The instrument that is put at his disposal should be both effective and fair, and when used to evaluate so many officers who can walk on water, it probably ought to be able to detect which of them sometimes get their feet wet.
The present rubber yardstick with its uncertain scale produces a skewed distribution and ought to be improved, but it is probably unrealistic to expect senior officers to find much fault with a report form whose use in their own case has resulted in selections for promotion and for positions of great responsibility. Nevertheless, such senior officers should carefully reexamine this reporting form because, for better or worse, it is being relied on to support key personnel decisions and to shape the future Navy.
A member of the Naval Academy Class of 1948, Captain Snyder served in nine ships ranging from minesweepers to aircraft carriers. At sea he was twice in command, twice an executive officer, and was five times a shipboard communication officer. He also served for ten years in naval communications planning and operations assignments on major staffs ashore, including the Office of the Secretary of Defense, where he was military assistant to the Assistant Secretary of Defense (Telecommunications). He earned a master’s degree in personnel administration from Stanford University and served as a placement officer in the Bureau of Naval Personnel and as a manpower planner on the staff of the Chief of Naval Operations. Captain Snyder retired from active duty in July 1976, and since October of that year he has been Deputy Executive Director of the National Research Council’s Committee on Telecommunications-Computer Applications. He lives in McLean, Virginia.