2.1 Introduction

The whole gamut of factors that contribute to the success of an interface is difficult to describe within a single book, but the operator gives us a central focus. Just like any other component (e.g., electrical systems, communications networks), the operator has safe operating conditions, expected error rates, and predictable performance, albeit with a more variable range for the associated metrics. However, analyzing the operator's capabilities, as one would for any other component in a large system, helps developers create reliable, effective systems that mitigate the risk of system failure due to human error in integrated human–machine systems (e.g., air traffic control). We identify some of the most significant factors that can affect operator performance and show how engineers can use them when designing an interface. For a more comprehensive review, we recommend (a) Foundations for Designing User-Centered Systems: What System Designers Need to Know about People (Ritter et al. 2014) and (b) Designing for Situation Awareness: An Approach to User-Centered Design (Endsley et al. 2003b).

This book offers design guidelines for optimizing the performance of the human component of operation centers for asynchronous, autonomous systems. Figure 1.1 shows examples of the systems under discussion, such as UAVs and satellites. User-centered design (UCD) provides the foundation for this task through the basic tenets of its design philosophy. Designers can achieve UCD by designing for situation awareness (SA, explained below) in operators. The guidelines developed in these chapters provide concise takeaways, while selected information on related cognitive mechanisms provides context.

Thus, this chapter follows that logic. First, we describe the tenets of UCD. These provide high-level questions that engineers can apply to their system at any point in the design process. Next, the connection between operator performance and SA is explained. Levels of SA performance correspond to the cognitive mechanisms used to perform a task. The final section describes the cognitive mechanisms and their influences and offers design guidelines for ensuring compatibility between user capabilities and the system interface.

2.2 User-Centered Design

The operator is a component of the system just like the sensors or underlying code. High-performance systems will incorporate operator capabilities into their design. This requires creating a system that follows principles of user-centered design. Though UCD is often associated with user experience, Endsley et al. (2003b, p. 5) explain the difference between UCD and UX in underlying philosophy as follows:

User-centered design challenges designers to mold the interface around the capabilities and needs of the operators. Rather than displaying information that is centered around the sensors and technologies that produce it, a user-centered design integrates this information in ways that fit the goals, tasks, and needs of the users. This philosophy is not borne primarily from a humanistic or altruistic desire, but rather from a desire to obtain optimal functioning of the overall human-machine system.

The three primary tenets of UCD, shown in Table 2.1, describe its high-level goals. Each tenet is expanded on over the next few pages with explanation and examples.

Table 2.1 The central tenets of user-centered design as summarized by Endsley et al. (2003b, pp. 8–9)

To illustrate these tenets, consider driving as an example. Figure 2.1 shows a car's dashboard. With respect to Tenet 1, what are the primary and secondary goals of the user when using this interface? The design should reflect the importance of each goal. While operating a vehicle, the primary goal is to arrive safely at the destination; however, minimizing travel time is a salient secondary goal. Consider how the dashboard shown in Fig. 2.1 matches the goals, tasks, and abilities of a typical operator (or driver). The speedometer is large, detailed, and centrally located, which supports the operator's ability to quickly check vehicle speed, even during highway driving. This is the primary gauge used while in motion, and thus it is the most prominent feature in the display. The large tachometer provides instant feedback on operator input to the system, but with less detail than the speedometer. Broad markings and the red line provide simple indicators of system state. Engine temperature and fuel gauges are small and minimally detailed, with red lines indicating when direct action needs to be taken. The simple design suits their relatively infrequent use and their information complexity needs.

Fig. 2.1

Image of a basic automobile dashboard. The full dashboard shows four gauges from left to right: tachometer, speedometer, fuel level, and temperature. From www.freeimages.com

What are the primary and secondary tasks that a user will perform on this interface? The design should reflect the importance of each task. While driving, the primary task for this interface is checking the speed. The secondary task is monitoring the overall state of the vehicle. The speedometer has detailed markings that approximately match speed limits (10 km/h increments). The tachometer provides only broad details and a red line indicating an “unsafe state,” matching the detail that a user requires for monitoring the state.

With respect to the second tenet of UCD, the information in Fig. 2.1 makes the vehicle speed easy to perceive, interpret, and act upon. Other information, for less important tasks, is given less room. Where exact numbers are needed, such as miles traveled, the value is provided as a number.

Would a typical user be able to understand this system? Users and designers often have different skill levels and familiarity with the system. In the case of a car, the average driver is not a mechanic, so they often do not need detailed information on most subsystems. An indicator light to check your engine may provide sufficient detail for a layperson who gets minimal value from additional details. Thus, Fig. 2.1 shows Tenet 2 in practice for the dashboard of a car. For the average driver, the check engine light provides only the necessary information to solve further problems and nothing more.

With respect to Tenet 3, relevant information is provided to control the system. In this case, a user working through sequential information on a display expects the next area of focus to be on a path from left to right, top to bottom (as when reading). For the state of a car, the water temperature and gas tank level are suitably ordered. More complex interfaces may require a different order, and power plant control rooms often order the displays based on their location in the plant.

In Fig. 2.1, if other information unrelated to driving the car were presented, such as distance from home, type of fuel in the tank, or brand of tire, the driver's ability to drive would be less well supported. If the prominence and organization did not match the driver's visual ability, for example, a less clear (or smaller) font, or dials presented in a different order, then the driver's performance could suffer. Finally, if the state of the car were less visible, or less appropriately matched to the frequency and importance of goals, performance would suffer.

These tenets are not perfect, however, and do not always give clear guidance. Consider the display in Fig. 2.2, where the tenets do not point to a single best design. The choice between these two designs must be based on the details of the goals and task priorities. If these are not known, they must be obtained from stakeholders (in the best case) or guessed or inferred (in the worst case).

Fig. 2.2

Two ways to present the display of an automated target identifier. Each design has trade-offs in operator performance that must be weighed based on the goals and priorities of the system. Image redrawn and modified by the authors. Based on a figure from Banbury et al. (1998, p. 37)

Together, the three tenets of UCD provide a foundation for how to frame the system design process around the goals, tasks, and abilities of the operators. The various other elements within a complex system have their own design philosophies or guidelines (e.g., modular design, minimal complexity, easy replacement of components). The human–system interface is no different. The tenets of UCD provide an underlying set of principles that should shape the design process for creating complex systems.

Implementing UCD within complex systems requires a method for understanding and assessing operator performance during complex work. Endsley’s (1995) theory of situation awareness fills this need by providing a framework for understanding performance and decision making. Describing the SA of an operator means describing the product of relevant cognitive mechanisms that are necessary to perform complex work like decision making and troubleshooting within an operation center.

2.3 Situation Awareness: The Key to UCD

Human operators using complex systems must be able to correctly perceive useful information while ignoring or disregarding other stimuli. Situation awareness (SA) provides a framework for describing human performance on tasks ranging from driving an automobile to monitoring incoming cyberattacks. At a basic level, an operator demonstrating perfect SA knows which information around them is task-relevant, what this information means for the present, and what this information will mean for the future. With these types of knowledge, the operator understands the current state and can effectively project their understanding into possible future states of the system.

An operator's SA performance is described using three iterative stages. Though the specific performance benchmarks denoting each stage are derived from the tasks, the three stages of SA are typically known as (a) perception, (b) comprehension, and (c) projection. These are illustrated in Fig. 2.3. First, an operator must perceive the useful information in the task environment. Second, they integrate individual cues into a useful mental model of the current situation. Third, they use their model of the situation to predict likely outcomes based on their comprehension of the scenario. Figure 2.3 uses the operation of an automobile to explain the types of information associated with each stage.

Fig. 2.3

The three stages of SA applied to the task of operating a car. Figure redrawn and modified by the authors. Based on a figure from Bolstad et al. (2010, p. 4)

Thus, operator performance can be improved by incorporating the tenets of UCD in system design, and improving the UCD of a system requires improving the SA of operators using the human–system interface. The system design will impact how well operators can develop and maintain SA during work. Interface design will affect how quickly and easily operators can advance to each subsequent stage of SA performance and how accurate and complete the operator's understanding is at each stage. Similar to shifting gears in a manual car to increase speed, the stages of SA progress on a continuous scale where competency at lower stages of SA is required to advance to the next stage.

The stages of SA provide a framework for assessing performance and identifying task and interface factors that can moderate SA performance. Progression through the stages of SA will be impacted by operator characteristics (e.g., fatigue, personal capabilities), environmental effects (e.g., distractions), and task-related factors (e.g., cognitive resources required, task types, complexity; Boff and Lincoln 1988). Each stage requires significantly more resources (e.g., knowledge, information, time) than the previous one. Stage 3 SA should not be expected as the norm for every operator or every task; however, it is the most useful.

Next, we describe the stages of SA in more detail and provide design principles based on using SA as a metaphor for work in op centers. These principles are derived from Endsley et al. (2003b) and adapted here for the design of op centers. We include motivating examples for each stage. Tasks surrounding aviation were the original focus of SA research before it expanded to cover a variety of complex tasks. During this discussion, we will describe the frequency of aviation disasters caused by critical errors at each stage of SA. These error rates refer to errors in common aviation tasks for pilots, air traffic controllers, and other aviation-related jobs, but it is reasonable to assume that similar results would be found across a variety of op centers.

2.3.1 Stage 1: Perception

Perception is the most fundamental aspect of SA. During common tasks within an op center, operators are likely bombarded with information. In most cases, space and cost in op centers will be at a premium, leading to operators performing varied tasks across multiple displays. Each of these displays could present tens or hundreds of data points, graphs, or other useful features, meaning that a major component of skilled performance could be simply knowing where to look and when.

The situation and signal content can determine the best course of action regarding how and when to respond to a signal (if at all). Operators with Stage 1 SA will demonstrate the ability to detect important signals while discarding irrelevant ones. Given perception's fundamental role in an operator's work, it is unsurprising that perceptual issues account for about 75% of errors in common SA work (Jones and Endsley 1996). Stage 1 errors may be attributed primarily to human failures (e.g., attentional failure, misinterpretation of a signal), system failures (e.g., unclear or missing information), or some combination of the two.

Some design principles related to Stage 1 SA are shown in Table 2.2. The principles can be summed up as follows: task-relevant information should be readily available, easily interpretable, appropriately prominent, and simple enough for the typical user.

Table 2.2 Design principles related to Stage 1 SA

For example, in the WDS (introduced in Chap. 1 and explained in detail in Appendix 1), a display can indicate that the battery will be unable to charge at the rover's current position and that the rover will need to relocate. The interface must clearly convey this information so the operator can issue a “move” command before the battery is too low. The interface should provide clear signals of the system state, such as a commonly used alarm icon (available) with a text description (interpretable) that flashes (appropriately salient) until the operator schedules the appropriate command (simple). While it is somewhat common practice to rely on unlabeled “self-explanatory” icons (e.g., for alarms), designers concerned about reducing the risks of confusion and error will support the visual design with liberal use of textual labels. Words in interfaces are often underused but are more easily interpreted than symbols used alone (Chilton 1996).
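To make these four properties concrete, here is a minimal sketch in Python of how an alarm signal might carry them explicitly. The WDS has no API defined in this book, so all names here are hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    """One alarm signal, modeling the four properties named above."""
    icon: str              # commonly used alarm symbol (available)
    label: str             # plain-text description (interpretable)
    flashing: bool = True  # flashes until handled (appropriately salient)
    action: str = ""       # the single command that resolves it (simple)

def on_command_scheduled(alarm: Alarm) -> None:
    """Stop competing for attention once the operator schedules the fix."""
    alarm.flashing = False

low_charge = Alarm(
    icon="ALARM",
    label="Battery cannot charge at current position; rover must relocate.",
    action="Schedule 'move' command",
)
on_command_scheduled(low_charge)  # stays visible, stops flashing
```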

The principles in Table 2.2 provide a framework for ensuring that the interface conveys information in a manner that is useful to the operator. This means ensuring that the value and salience of each piece of information are appropriate, actively drawing attention to important signals, and minimizing the quantity and salience of extraneous stimuli. The second principle in this area is to make the information interpretable by using intuitive, sensible designs. The third principle extends the first two by promoting a hierarchy of signal importance to ensure that the signals perceived by the operator are the most useful at any given time (or at least that non-useful signals are relatively muted). The fourth principle deals with the inherent limits of human cognition. While these limits tend to be loosely calculated, designers can follow this guideline by working broadly to reduce complexity across the system whenever possible.

As an example, reconsider the car dashboard shown in Fig. 2.1. Several design features facilitate Stage 1 SA during typical operation of the vehicle. Compare the prominence of the speedometer and tachometer to the temperature and gas gauges (Principles 2.1, 2.2). Operators likely update their mental model of speed and engine performance every few seconds, but only check the temperature and fuel levels if something is going wrong (Principle 2.3). Taken together, this design takes steps to limit or reduce the availability of unnecessary or distracting information (Principle 2.4). While the design of the dashboard could likely be improved, this example shows how simple design changes like changing size proportions can support Stage 1 SA.

The dashboard design also supports monitoring for infrequent, but critical, alerts like low fuel levels. The fuel level indicator provides two different signals when fuel reaches dangerously low levels. First, the fuel level gauge displays the current fuel level compared to a warning level. This allows the operator to quickly assess the current fuel level and determine whether action is needed (i.e., adding fuel). Even outside of warning situations, the operator can maintain suitable awareness of the fuel level and plan accordingly. If the operator fails to add fuel before reaching the warning level, the second alarm signal will trigger: the fuel level icon of a gas pump will glow yellow. This provides a second chance for the operator to respond to the situation if the first chance (the fuel level indicator) fails, and it appears only when fuel is dangerously low. Newer cars will even sound an alarm or, better yet, vocalize the alarm information. Altogether, the fuel level gauge supports Stage 1 SA by making the information available, salient, and appropriately designed to mitigate the risk of system failure (i.e., running out of gas in the middle of nowhere).

For another example, consider the WDS introduced in Chap. 1. When below a certain power threshold, the dashboard interface displaying the battery information will continually flash a red symbol, indicating the risk of total power failure for the system. If this alert continues until the battery is charged, the signal will waste the operator's attention and cause unnecessary distraction. Why should the signal remain prominent even after the solution has been implemented? Once the solution process begins, there is no need to draw attention to the signal until additional information is received. It should be possible to mute the signal's visual appearance until another update is needed.
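One hedged way to realize this mute-until-update behavior is a small alarm state machine. Again, the class and threshold below are illustrative assumptions, not WDS specifications.

```python
from enum import Enum, auto

class AlarmState(Enum):
    ACTIVE = auto()        # flashing; demands operator attention
    ACKNOWLEDGED = auto()  # muted; a fix is already in progress
    CLEARED = auto()       # condition resolved

class BatteryAlarm:
    def __init__(self) -> None:
        self.state = AlarmState.ACTIVE

    def acknowledge(self) -> None:
        # The operator has begun the fix: stop flashing, stay visible.
        if self.state is AlarmState.ACTIVE:
            self.state = AlarmState.ACKNOWLEDGED

    def update(self, charge_pct: float, threshold: float = 20.0) -> None:
        # New telemetry arrives: clear the alarm, or re-raise it at
        # full salience if the condition returns after clearing.
        if charge_pct >= threshold:
            self.state = AlarmState.CLEARED
        elif self.state is AlarmState.CLEARED:
            self.state = AlarmState.ACTIVE
```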

This principle has further implications for the details of displays. It suggests eliminating or suppressing unnecessary signals, merging compatible signals, and simplifying complex ones. For example, an interface showing the overall WDS status may include orientation, geographic information, battery level, and other information. Operators monitor these parameters for unexpected changes; however, excessive detail increases workload by increasing visual clutter. Designers should strive to optimize complexity and detail where possible, which in many cases means reducing both. If you know operators only check the approximate orientation (e.g., NW, S), then that is how orientation should primarily be displayed, as in the sketch below. And if detailed heading information must still be shown for occasional use, its salience can be reduced (e.g., smaller text, muted font colors).
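A minimal sketch of that orientation example, assuming headings arrive as degrees clockwise from north:

```python
def heading_to_cardinal(degrees: float) -> str:
    """Collapse a precise heading into the coarse value operators check."""
    points = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    return points[round((degrees % 360) / 45) % 8]

# The coarse value is displayed prominently; the precise heading can be
# rendered in smaller, muted text for occasional use.
print(heading_to_cardinal(312.7))  # "NW"
```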

The fourth principle in this area is to work within the limits of human cognition and perception. Human cognition has natural limits on how much it can process at once. Work around these limitations by reducing the complexity and workload of the task.

For example, a status update for the WDS may include hundreds or thousands of events in a data log that accompanies the basic system status report. Reserving a space on the interface to indicate critical or alarming events (e.g., imminent power failure) while hiding data related to non-important (or typically non-important) updates will reduce the amount of information necessary for the operator to perform the most useful tasks.
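A sketch of that triage idea follows. The event types and log format are invented for illustration; the point is only that the default view surfaces the critical few while the routine many stay behind a details control.

```python
CRITICAL = {"POWER_FAILURE_IMMINENT", "COMM_LOSS", "THERMAL_LIMIT"}

def triage(events):
    """Split a status report's event log into what must be shown now
    and what can wait behind a 'show details' control."""
    alerts = [e for e in events if e["type"] in CRITICAL]
    routine = [e for e in events if e["type"] not in CRITICAL]
    return alerts, routine

events = [
    {"type": "HEARTBEAT", "t": 1001},
    {"type": "POWER_FAILURE_IMMINENT", "t": 1002},
    {"type": "HEARTBEAT", "t": 1003},
]
alerts, routine = triage(events)
print(len(alerts), len(routine))  # 1 alert surfaced, 2 routine entries hidden
```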

As another example, consider a system that is rarely interacted with during normal operations. The interface simply provides a status that an operator checks hourly. This interface was initially expected to be part of a multiple-monitor display for a seated operator, but now it is checked while standing several feet back, so the operator must lean in or squint to read and understand the information.

Consider the physical aspects of how the operator uses the system. An operator sitting at a desk in front of the screen can effectively monitor denser signals than someone five feet away. Ideally, the perceived details of an interface will transition smoothly as an operator views it from different distances.
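Viewing distance can be budgeted for explicitly using the standard visual-angle relation. The sketch below assumes a 20-arcminute legibility target, a commonly cited rule of thumb rather than a figure from this book.

```python
import math

def min_char_height_mm(viewing_distance_mm: float,
                       visual_angle_arcmin: float = 20.0) -> float:
    """Smallest character height that subtends the target visual angle,
    via h = 2 * d * tan(theta / 2)."""
    theta = math.radians(visual_angle_arcmin / 60.0)
    return 2.0 * viewing_distance_mm * math.tan(theta / 2.0)

# A seated operator at ~600 mm needs roughly 3.5 mm characters; the same
# display read from ~1.5 m (about 5 feet) needs roughly 8.7 mm.
print(round(min_char_height_mm(600), 1))   # 3.5
print(round(min_char_height_mm(1500), 1))  # 8.7
```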

While the people building these types of systems should typically avoid overly bold designs, there are still useful lessons to be learned regarding how aesthetics can affect operator performance. Books on visual design of interfaces can provide more information in this area (e.g., Kosslyn 2007; Tufte 2001, 2006).

2.3.2 Stage 2: Comprehension

The second stage of SA involves synthesizing Stage 1 cues into a useful mental model of the situation. A practiced operator will purposefully seek out patterns from various stimuli and form a holistic view of the situation based on their experience with the task and the information presented. Errors arising from comprehension failure account for about 20% of errors (Jones and Endsley 1996). Stage 2 errors are often attributed to misinterpretation of an information set, failure to maintain all the necessary information in working memory, misuse of a mental model, or overreliance on default settings (e.g., failing to check a status hidden behind a submenu). Some design principles related to Stage 2 SA are shown in Table 2.3.

Table 2.3 Design principles related to Stage 2 SA (Principles 2.5–2.6)

As an example of the first principle, the interface that provides the WDS status information may present a variety of information using textual and visual signals. Icons can help reduce text or support a more grid-like design, but they should only be used when the operator understands their meaning, whether established through culture, training, pop-up names, or other means.

Similarly, familiar symbols should have familiar meanings. Using an “X”—particularly a red “X”—should typically indicate that something will “close,” “exit,” or “cancel.” Red and green follow cultural norms of stop/exit/bad and go/continue/good, respectively. The Apple Design Guidelines give an example set of such guidelines.

The second principle is to consider how the actual tasks will be done by the operators. Interruptions and task-switching are major sources of error. If task interruptions are common, designers should account for their effects in their task analyses for the system and seek to mitigate their negative effects on task performance. Supporting design features can include the ability to postpone the next task so that the current task can be completed, or to remember the state of the suspended task until it can be returned to. Sometimes even non-digital solutions can work; in a control room, one solution could be simply to include a pad of paper for note-taking (Trafton et al. 2003).

As an example, operators may have to multitask while monitoring the WDS. The WDS status interface provides many different pieces of information, but the operator will typically have no trouble responding to routine events. However, once they need to respond to some new situation, they must split their attention between normal monitoring and the new task. This could lead to the operator missing an important warning.

The system could support this task requirement and reduce risk by providing a simplified view of critical information during times when the operator may be splitting attention across multiple tasks. When an operator pulls up a subsystem view alongside an overall status view, the overall status could become less detailed while the salience of signals indicating new changes increases. Alternatively, operators could be prompted to use simpler methods for tracking system state, such as a pad of paper or a sticky note on the screen, which would allow the operator to “save” partial state information prior to dealing with an interruption.
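A hedged sketch of that save-the-partial-state idea: before the interface switches the operator to an interrupting task, it snapshots where they were so resumption is cheap. All names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class TaskState:
    name: str
    step: int        # how far through the procedure the operator is
    notes: str = ""  # operator's own partial-state notes

@dataclass
class Workstation:
    suspended: list = field(default_factory=list)

    def interrupt(self, current: TaskState) -> None:
        # Snapshot the task so its state does not live only in working memory.
        self.suspended.append(current)

    def resume(self) -> TaskState:
        # Restore the most recently suspended task, with its context intact.
        return self.suspended.pop()

ws = Workstation()
ws.interrupt(TaskState("battery relocation plan", step=3, notes="site B next"))
restored = ws.resume()
print(restored.step, restored.notes)  # 3 site B next
```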

Further information on how cognition is used to comprehend a situation is available in Endsley’s work (Endsley et al. 2003a, b) and other books on human–computer interaction (Krug 2005; Ritter et al. 2014).

2.3.3 Stage 3: Projection

The third stage of SA is achieved by projecting the model of the situation into possible future outcomes. For example, an air traffic controller could anticipate a dangerous situation based on how two aircraft are likely to maneuver while changing course and act to avert the incident. Though difficult, this type of expertise is essential for high performance in some complex tasks (Endsley 2000).

Stage 3 failures account for about 3% of errors in aviation, but the complexity of Stage 3 SA makes generalizable causes of error difficult to isolate. General causes may include overtaxed mental resources, insufficient knowledge of the domain, or overprojecting current trends (Jones and Endsley 1996). This type of expertise is difficult for engineers to plan for during the early design stages, and thus it receives less focus in this book. Obviously, systems that help predict the future state of objects or systems would help operators. For example, supporting Stage 3 SA could be as simple as including trend lines showing system state over time, or as complex as automated calibration of signal strength to predict upcoming alert states (Tufte 2006).
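The trend-line suggestion can be taken literally: fit recent telemetry and tell the operator when a limit will be crossed. Below is a minimal sketch with invented battery data; a fielded system would need a better model than a straight line.

```python
# Hypothetical telemetry: (minutes elapsed, battery charge in percent).
samples = [(0, 80.0), (10, 76.5), (20, 73.1), (30, 69.4)]

def minutes_until_threshold(samples, threshold=20.0):
    """Least-squares line through the samples, extrapolated to the
    threshold crossing; returns None if the value is not falling."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    slope = (sum((t - mean_t) * (v - mean_v) for t, v in samples)
             / sum((t - mean_t) ** 2 for t, _ in samples))
    if slope >= 0:
        return None  # steady or rising: no crossing ahead
    intercept = mean_v - slope * mean_t
    return (threshold - intercept) / slope

print(minutes_until_threshold(samples))  # ~170 min at the current trend
```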

One of the most effective ways to design for Stage 3 SA is to eliminate barriers that prevent Stage 1 and 2 SA from being effectively supported. Thus, designers are advised to focus on solving issues with perception and comprehension before specifically addressing methods for improving an operator's ability to project future states. However, further information about supporting projection can be found in Endsley's work (Endsley et al. 2003a, b) and work on mental models (Besnard et al. 2004; Kieras and Bovair 1984; Moray 1996; Ritter et al. 2014).

2.4 Summary: Cognitive Mechanisms for Situation Awareness

The three stages of SA provide a broad classification for the performance of operators during complex tasks. This chapter only briefly describes SA. This overview gives engineers the tools needed to consider how SA applies to the systems they design. In the next chapter, the cognitive mechanisms that drive operator performance are described and connected to SA.

This chapter briefly covers the significant cognitive mechanisms used in SA as a way to describe and summarize them. These mechanisms and their role in SA receive more comprehensive coverage in Chap. 3. We introduce them here because these cognitive mechanisms can be simulated in a computer (Anderson 2007), but can also be simulated in the designer's head to make predictions about how operators use the system. Figure 2.4 shows these mechanisms as they are implemented in the ACT-R cognitive architecture (Ritter et al. 2014, Chap. 1). These components can be seen as distinct subsystems with semi-independent operations. To learn more about ACT-R, see Ritter et al. (2018), who review the state of research using ACT-R and other cognitive models.

Fig. 2.4

A schematic of the components of a computational model (ACT-R) of the human operator. (Figure used with permission from Ritter et al. 2018; Fig. 3)

As shown in Fig. 2.3, the process of achieving situation awareness often starts with perception, the intake and processing of competing sensory cues (or signals) into usable information. In this approach, perception does not necessarily lead to detection of a signal or to understanding, because the perceptual process requires attention from cognition. Attention, in this case, means that select information is targeted by the system. Cognition, the central process, directs focus toward task-relevant information while ignoring or not processing the rest. Attention is a limited resource that must be distributed across the appropriate features. Attention is probably best seen as an active process of directing cognitive resources rather than a single buffer responsible for passing information.

Top-down attention is directed towards some feature(s) based on the current goal while avoiding focus on distracters (e.g., monitoring speed and position but ignoring billboards while driving). Bottom-up attention is driven by common features that indicate activity (bright colors or lights, motion, and others).

Memory is used to perform the task: information is recruited from the declarative memory buffer or activated from long-term memory (in ACT-R, held in the declarative and goal buffers). This store might be called working memory (WM), which operates as the “RAM” of cognition, storing and manipulating chunks of information for short periods. Stored information has to be maintained through use, manipulated, and stored in long-term memory, or it decays and is lost. Human memory is more similar to old drum or plated-wire memory, which needed to be continually refreshed, than to current solid-state RAM, which can sit without use and without decay.
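ACT-R makes this refresh-or-decay behavior precise. As a point of reference from the ACT-R literature (not this chapter's text), the base-level activation of a memory chunk i is commonly written as

```latex
B_i = \ln\left( \sum_{j=1}^{n} t_j^{-d} \right)
```

where t_j is the time elapsed since the j-th use of the chunk and d is a decay parameter (conventionally 0.5). Frequent, recent use keeps activation high; disuse lets it sink toward the retrieval threshold, the formal analogue of the refreshing described above.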

WM is more than just a singular “catchall” for temporary information storage. The current theory of working memory includes at least two major subsystems, the visuospatial sketch pad and the phonological loop, which exclusively hold visual and verbal information, respectively (Baddeley 2012). Each subsystem operates semi-independently to store and maintain information for near-term use. One benefit of these distinct storage types is an improved ability to multitask when cognitive operations are distributed across multiple WM stores. Dual-task activities can be performed well if each task uses only, or mostly only, a single WM store. For example, it is easier to remember a set of numbers while observing a scene in a play than while solving math problems.

The operator's mental model is the operator's internal representation of an external situation. Their mental model provides the framework that they use to process information related to the task. This model is stored in memory, which means it can be learned, or partially forgotten, and might not match the representation the designer used to understand the system and to create the interface.

The operator's mental model of a situation provides the tools needed to handle large amounts of information. Operators use their experience from long-term memory to scaffold the intake of new information, noting what to pay attention to, what to discard, and what to remember in a given situation. Mental models also include what to do in a situation.

Thus, situation awareness, the awareness of the state of the world, what is happening, and what will happen, is based on an operator's mental model and is supported by a set of mechanisms similar to those in Fig. 2.4. This approach, when applied to op center design, suggests that each stage of the operator's processing and response is important for successful system operation. The operator needs to be able to see and process the stimuli. They need the attention and time to understand them, and the ability to recognize that the stimuli are important. They need an appropriate mental model in which to relate new information to previous information and current goals. They need to know what to do and how to respond. And they need the world's state and a good mental model to predict what will happen in the world.

Situation awareness thus provides a way to organize a designer's model of the operator. It makes strong suggestions about design when combined with knowledge of the operator's capabilities, their tasks and task priorities, and their mental model of the world. This model accounts for both the long-term learning and mastery of the system and the ongoing, evolving model of what is happening at any point in time.

The next chapter explains these components in more detail to help a designer understand how an operator might run and apply their mental model.