Building in layers of safety and sharpening the warfighting edge does not necessarily mean using technology more, but rather using it more effectively. Deftly applied automation can buy back time and cognitive resources for operators, decreasing the chances of human error, but technology also has the potential to become less of a tool and more of a crutch if operational fundamentals and basic seafaring skills are forsaken to automation. Operators must be able to rely on their own “sea sense,” developed through experience and mentoring, and use technology to accomplish specific objectives rather than defer to automation as the default decision-maker. Maintaining the competitive warfighting edge requires cultivating skilled mariners who know how to fight a well-equipped ship. Technology alone cannot make the ship safe, but when the operator lacks fundamental knowledge and experience, it can make the ship unsafe.
The Navy's mission is complex because of the nature of its operations and the extreme environments and conditions in which work is performed. High-stakes, high-demand, high-tempo operations in a challenging maritime environment create ample opportunities for errors and undesirable outcomes, which can manifest in many different ways, but the driving forces behind these incidents are rarely unique. We can think of error as a consequence, not a cause. This is a major shift in how human error is conceptualized and treated in naval operations. Error is the result of one or, more likely, several causal factors that impede human performance. These could be environmental, such as fog that degrades visual perception or a noisy workspace that muddles communication; cultural, such as informal watchstanding standards or a lack of regard for crew fatigue; or systems related, such as poor ergonomics or ineffective implementation of technology. A confluence of factors produces an error, which in turn may have consequences in the form of a near-miss event or major mishap.
In a well-defined and well-guarded system, most errors are likely to have a short-lived impact, though the outcome may still be of some consequence. However, as the margin of safety degrades because of ineffective operational fundamentals, poor implementation of technology, or similar factors, the results can be disastrous. This degradation may not be evident immediately, which can create a false sense of security while unknowingly adding risk.
The fundamentals of seafaring have not changed per se, but they have become blanketed under layers of technology. Lost experiential knowledge in operational fundamentals and basic seafaring is compensated for with more technology. This adds risk, however, as junior personnel cannot engage in recognition-primed decision making and lack the ability to act intuitively. There is not enough time in each job to “experience experience.” This contributes to an over-reliance on automation or complex technology and a disregard for one’s own instincts and decision-making brain. It has been said that “Machines have many qualities, but common sense isn’t one of them.” The inherent limits on junior personnel developing their own sea sense bound their ability to operate independently of automation and foster a dependence on technology in the absence of basic seafaring fundamentals.
Contributing to this is limited mentoring. Managerial functions place ever-greater demands on senior leaders’ time and energy, limiting opportunities to transfer experiential knowledge through mentoring outside formal training settings. Restoring knowledge and empowerment to junior sailors will require restoring discretionary time to senior leaders for experience sharing. Without being shepherded into a community of practice by senior leaders, junior personnel are restricted in developing a questioning attitude and the ability to provide forceful backup to maintain a margin of safety.
In hindsight, the limited development of experiential knowledge and sea sense was evident in watchstander actions leading up to the collision between the USS John S. McCain (DDG-56) and Motor Vessel Alnic MC. When technology failed to conform to the expectations of the bridge team, watchstanders lacked the fundamental knowledge to understand the forces acting on the ship. Moreover, gaps in training and procedures for bridge watchstanders operating new consoles left operators struggling with navigation and ship control, contending with what they perceived to be unreliable, unfamiliar automation.
There is a general inclination to automate anything that creates an economic benefit or a technological edge and to leave the operator to manage the resulting system. However, for automation to benefit the operator and the macro-organizational system as a whole, it must be applied with discretion. In many cases, human operators remain better than automation at responding to changing or unforeseen conditions. It is important that automation be designed and applied in ways that prevent the human operator from losing these capabilities to technology overload. In addition, unreliable automation can add negative value: it not only adds “cognitive overhead” in distinguishing false alarms, but can also lead to automation disuse born of distrust. Operators then default to manual modes or overrides in a system that designers optimized for automated operation, leaving them at a greater deficit than if the automation had never been implemented.
The best-intentioned technological and automated systems provide flexibility to operators by increasing the number of functions and options for carrying out a given task under various conditions. However, this flexibility comes at a cost. Because the operator must decide which mode is best suited to particular circumstances, he or she must have more knowledge and awareness of the intricacies of the system and how to interface with it. In addition, the operator must allocate attention and monitoring resources to track what mode the automation is in at any given time and maintain an understanding of what underlying processes and capabilities that particular mode entails. Put more simply, this involves tracking what the automation is doing, why it is doing it, and what it will do next.
The inherent flexibility touted as a benefit of automation is what drives the demand for mode awareness: the operator’s ability to notice, perceive, track, and anticipate the behavior of the automated system. Mode awareness can be impeded by other design changes that leave the operator increasingly removed from the mechanical aspects of the system, as previously available cues about system behavior, such as moving throttles, vibration, or engine noise, may have been reduced or removed in the design process. Limiting the auditory, visual, and kinesthetic cues that indicate system status can aggravate the already difficult problem of maintaining mode awareness and may result in automation surprise: the impression that the system is acting independently of operator intent. Automation surprise is tied to gaps or misconceptions in the operator’s mental model that may prevent him or her from tracking the current mode, understanding when each mode is appropriate for given conditions, and verifying that the system is operating as designed and intended. A well-developed and informed mental model, on the other hand, will make system behavior appear deterministic and transparent with regard to automation capabilities, behavior, and mode state.
Essentially, mode awareness failures have two primary drivers—inadequate mental models and ambiguous indications of the status and behavior of automation. Insufficient mental models result from the failure of designers to anticipate the new knowledge demands associated with automation implementation and to provide mechanisms via training or tactical aids to acquire, maintain, and operationalize the requisite knowledge. Further, training rarely provides opportunities for operators to explore and experiment with the various modes in the process of learning how the systems work and how to work the systems. The challenge of opaque, low-transparency user interfaces reflects a failure of designers to support the operator’s cognitively demanding task of tracking the potentially dynamic state and behavior of the automation.
Operational risk management that fails to address human-machine interaction and other human performance factors yields a myopic view of potential hazards and human error traps. Without a full understanding of risk, gained through holistic assessment of technical, tactical, operational, and behavioral fundamentals, luck becomes as important as skill in determining outcomes.
Given the insights derived from recent mishaps and evolving perspectives rooted in science, it is critical that automation be applied with discretion and with consideration of second- and third-order effects. In some cases, an insular approach to deciding which systems to automate and poorly executed implementations of technology are creating avoidable vulnerabilities. The Navy needs to take advantage of its technological edge and harness the art of the possible, but not at the expense of perishable skills or unwarranted cognitive overhead for operators. A thoughtful implementation of technology can buy down risk and increase the margin of safety, a lesson learned at a high cost that should be neither forgotten nor repeated.
Dr. Culley is the force improvement/operational safety engineer at Submarine Force Headquarters, Atlantic. She has participated in major mishap investigations across Navy communities, including two comprehensive reviews.
Captain Harkins is the director of operational safety at Submarine Force Headquarters, Atlantic. Previously, he commanded the USS Montpelier (SSN-765) and USS Hartford (SSN-768), and most recently Submarine Squadron 20.