Abstract
This chapter presents the concept of a confidence horizon for cooperative vehicles. The confidence horizon is designed to let the automation predict its own and the human’s abilities to control the vehicle in the near future. Based on the pattern approach originating from Alexander et al. [1], the confidence horizon concept is instantiated with a pattern framework. In case of a necessary takeover of the driving task by the human, a mode transition pattern is initiated. In order to determine when the takeover is required, which pattern to start, and when to omit the takeover attempt and directly start a minimum risk maneuver, the confidence horizon for both human and co-system is an important parameter. A visual representation of the confidence horizon for the driver in different scenarios prior to a takeover request was explored. Intermediate results of a simulator study are presented that assess the confidence horizon in safety-critical takeover scenarios involving an intersection and a broken-down vehicle on a highway.
1 Cooperation Between Human, Co-system and Environment
Cooperation in automated driving is a bridging paradigm connecting many facets, e.g., cooperation between machines and machines, between humans and humans as well as between humans and machines. Cooperation does not necessarily need similarity among cooperation partners. Compatibility, however, is a crucial requirement. It needs to be sufficiently developed between the outer borders of the cooperating sub-systems (outer compatibility) and between the inner, often cognitive, aspects of the cooperating sub-systems (inner compatibility) [9], leading to outer and inner cooperation. In these complex systems, not only the humans and machines in the directly acting human-machine system should cooperate, but also the people and machines in the meta-system, e.g., in research and development.
The cooperation between multiple vehicles reflects the outer cooperation from the viewpoint of a single automated vehicle and is examined in various details in many other chapters of this book. The following chapter focuses on the cooperation of a single human with a single automation within a highly automated vehicle. Any cooperation with other vehicles, between these vehicles and with the ego vehicle itself are considered as part of the environment.
In general, there are three main entities within the system of the ego automated vehicle: The human, the co-system (including the automation and other technical subsystems), both of which are considered agents within the system, and the environment. As shown in Fig. 1, the human and co-system influence the environment through joint actions. To enable a joint action [25], the human and the co-system have to cooperate either through direct communication or through a mediator, which is represented by the center element of the diagram.
In this system model, it is assumed that both the human and the co-system may share the vehicle control and transition control between one another. The direct communication between the two agents is crucial for the co-system to communicate decisions made by a network of cooperating vehicles as well as possible actions needed by the human if the co-system reaches its limitations.
In order to successfully design human-machine cooperation, it is necessary to align the “mental model” of the co-system with the mental model of the human [9] to include the environment, and to keep it transparent and repeatable. One tool to achieve this is a design metaphor, which has been successfully applied e.g. in the form of the desktop-metaphor (as established by Alan Kay from Xerox PARC in 1970) or the H(orse)-metaphor [8], transferring the mental model of a rider and horse to the domain of highly automated vehicles. A more generalized approach is the pattern approach based on Alexander et al. [1], applied to music by Borchers [5], to software by Gamma et al. [16] and applied to human-machine systems by Baltzer [2], Herzberger et al. [20], López Hernández [22], and others. For more details on patterns see Flemisch et al. [15] and the chapter of Flemisch et al. [7].
2 The Concept of Confidence Horizons
The idea behind the confidence horizon concept is to bring together the prediction of the time points of when and until when the human and the automation are able to control the joint system, in this case an automated vehicle.
In this sense, the confidence horizon is coupled to the prediction of the ability to execute control over the joint system. Combining the predictions for human and automation makes clear when a safe transition of control between human and automation can be expected and how automation and human need to communicate, depending on the severity of the situation. Figure 2 depicts the confidence horizon concept.
As shown on the left, human and automation are more or less involved in the current driving task, depending on the current automation mode (e.g. manual, partially or highly automated) and the resulting distribution of control [10]. As stated by the SAE [24], starting from SAE Level 4 automation, the driver is explicitly allowed to disengage completely from the driving task, which results in a potential loss of situation awareness for the driving task, especially when engaging in a non-driving related task (NDRT) [28]. Even in lower automation levels (automation according to SAE Level 2), despite the driver’s obligation to be ready to intervene and ongoing liability for the vehicle’s actions, the driver may tend to lose awareness, a mechanism described as the unsafe valley of automation [11]. With the confidence horizon concept, we propose to make this unsafe valley visible at least to the automation and its developers, and optionally also to the driver, so that she or he can act accordingly. The control distribution in Fig. 2 (left) shows, on the one hand, who has to control the vehicle in a given automation mode and, on the other hand, the ability of the human (in orange) and the automation (in blue) to actually execute the vehicle control. Projecting the ability distribution for the human and the automation into a real situation (see Fig. 2, right) directly shows the need for a control transition due to a lack of ability of the automation to handle an obstacle in this situation. Furthermore, it shows the available time frame in which this transition to the human has to take place (shown as safety buffer).
In a critical situation (system boundary or system failure), the confidence horizons clearly show a safety gap, i.e., a time frame in which neither automation nor human is able to control the driving related task. The confidence horizon concept enables the automation to detect such cases as early as possible and act accordingly. Depending on the time remaining until the system failure is reached and the current ability of the driver, the automation either triggers a diagnostic take-over request (TOR), in case a safety buffer is present before the system fails, or a minimum risk maneuver (MRM).
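The timing logic sketched above can be illustrated as a small decision function. This is a minimal sketch, not the project’s implementation: the `AbilityHorizon` type, the `min_buffer` threshold, and the returned labels are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class AbilityHorizon:
    """Predicted interval (seconds from now) in which an agent
    is able to control the vehicle."""
    start: float
    end: float

def plan_transition(automation: AbilityHorizon, human: AbilityHorizon,
                    min_buffer: float = 2.0) -> str:
    """Choose between a take-over request (TOR) and a minimum risk
    maneuver (MRM) based on the overlap of the two horizons."""
    # Safety buffer: time between the human becoming able to take over
    # and the automation reaching its limit.
    buffer = automation.end - human.start
    if buffer >= min_buffer:
        return "TOR"       # comfortable hand-over is possible
    if buffer > 0.0:
        return "TOR+MRM"   # warn, but keep the fallback maneuver armed
    return "MRM"           # safety gap: no safe hand-over, fall back
```

With an automation limit predicted at 10 s and a human able to take over from 3 s on, the buffer is 7 s and a normal TOR is chosen; if the horizons do not overlap at all, the sketch falls back to the MRM.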
We propose to use the confidence horizon concept in the design of highly automated human-machine systems to identify the proper transition strategy in case of an upcoming control gap and to predict the future ability of the human and the automation to control the joint system. However, based on our exploration results, we recommend using the confidence horizon as a basis for HMI designs in situations of varying criticality, including the communication strategy of the automation, rather than as a simple visual representation as in Fig. 2 (right).
3 Application of the Pattern Approach to Cooperative Automated Driving
To achieve good cooperation between two agents, both need to understand each other. When designing human-machine cooperation, the challenge is to find a common language. A promising solution is the approach of interaction patterns to find common ground at large scale. Based on Alexander et al. [1], Flemisch et al. [14] describe a pattern as follows:
A pattern describes something that occurs over and over again. An example for this is a problem and/or its solutions. If this can be observed, and its core can be mapped and modelled, you can either observe and match the pattern over and over again, without ever making the identical observation twice. And/or you can instantiate and design with this pattern over and over again, not necessarily doing it the same way twice. Examples for this are designing, engineering and using of artefacts like human-machine systems. Flemisch et al. [14]
Alexander et al. [1], Borchers [5] and Baltzer [3] use patterns to describe a solution to a given problem and propose a pattern language for the design of patterns. Another focus is set by Flemisch [6] and López Hernández [22] on the structure of the solution, describing in detail the sequence of interaction within a pattern. The authors’ proposal also applies this focus, further tailored to matching a given pattern instance in the case of cooperatively interacting vehicles.
When using the pattern approach for active cooperation, the pattern structure is extended by a set of properties to detect which cooperation partner should perform, wants to perform, and is currently performing a given pattern, resulting in a new subset of patterns: cooperation patterns. In the proposed setup, all properties are predicted by the co-system. Each property can be described by a sub-pattern, so that if the sub-pattern matches, the activation value and confidence for the respective property increase as well.
The fundamental properties of a cooperation pattern are utility, ability, intention and execution. Utility describes how useful the activation of the current pattern would be for the respective agent. Ability represents the agent’s ability to execute the pattern now and in the near future. Intention describes the agent’s inner determination to execute the pattern, while the execution property describes the match between the agent’s actual current actions and the actions required to execute the pattern at hand.
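The four properties can be captured in a small data structure. The scalar encoding in [0, 1] and the min-based aggregation below are assumptions made for this sketch; the chapter does not prescribe how the properties are combined into an activation value.

```python
from dataclasses import dataclass

@dataclass
class CooperationPatternProperties:
    """The four fundamental properties of a cooperation pattern for one
    agent, each predicted by the co-system as a value in [0, 1]."""
    utility: float    # how useful activating the pattern would be
    ability: float    # ability to execute the pattern now and soon
    intention: float  # inner determination to execute the pattern
    execution: float  # match of current actions with the pattern's actions

def activation_value(p: CooperationPatternProperties) -> float:
    """A conjunctive reading: the pattern is only as active as its
    weakest property (other aggregations are equally possible)."""
    return min(p.utility, p.ability, p.intention, p.execution)
```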
Derived from the cooperation pattern, the relevant patterns are activity patterns and transition patterns. Applied to cooperative vehicles, there are driving related and non-driving related activities (see Fig. 3).
Both agents, the human as well as the automation, can focus on one of these activities. They can change their own focus and try to change the other’s focus by starting a transition pattern, e.g., a takeover request (TOR).
Figure 4 depicts the pattern network for the application in transition control for highly automated driving. It displays the same process as in Fig. 3, with the patterns as states and for each agent individually. On the most basic level, the activity of human and automation can be considered as driving related or non-driving related. Since activity patterns are derived from cooperation patterns, they contain their properties for the utility, ability, intention and execution of the activity by both agents according to the co-system’s prediction. The same applies to transition patterns.
The detection of the ability of both human and automation to execute the driving related activity directly reflects the current state of the confidence horizon. Transitions are used to switch from one activity to the other. Various transitions are available, depending on the initiator of the transition, the current size of the safety buffer in the confidence horizon, and the predicted ability of human and automation to execute the target activity. It should be noted that, in the case of a transition, both human and automation have to change their activity. As part of the co-system, a mediator arbitrates conflicts between human and machine [4] and provides transparency of the automation’s behavior to maximize the overall utility of the human-machine system. This mediator makes all joint decisions. It is the mediator’s responsibility to let the co-system initiate a certain transition or to prevent the human from using a transition that is not feasible for the system. Figure 4 illustrates the possible transitions between activity and transition patterns for both human and automation, assuming that each agent is focused on a single task at any given time. In this application, the automation can initiate a take-over request (TOR) that, if successful, leads to a change in activity for both agents, or it can be pushed into a minimum risk maneuver (MRM).
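The mediator’s gating role can be illustrated by a simple admissibility check. The function name, the ability threshold, and the rule that the MRM is always admissible are assumptions of this sketch, not specifics from the chapter.

```python
def mediator_permits(transition: str, target_ability: float,
                     safety_buffer: float,
                     min_ability: float = 0.5) -> bool:
    """Let a transition start only if the agent taking over the target
    activity is predicted able to execute it and a positive safety
    buffer remains. The minimum risk maneuver is treated as the
    always-available fallback."""
    if transition == "MRM":
        return True
    return target_ability >= min_ability and safety_buffer > 0.0
```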
A combined representation of both diagrams of Fig. 4 is shown in Fig. 3, highlighting that all activities are considered states with the properties of utility, ability, intention and execution for each agent. Additionally, an agent is not limited to focus on a single activity, but rather uses transition patterns to change focus from one activity to another.
Applied to human-automation cooperation in cooperatively interacting vehicles, this could be implemented as follows (Fig. 5): The co-system detects a safety gap ahead and needs to transition the human activity from the non-driving related to the driving related task. This has to be done before the safety gap comes too close. Otherwise, the co-system has to initiate a minimum risk maneuver, which, however, might involve a higher risk than a successful take-over by the human. Figure 5 depicts this situation at time \(t_{1.1}\). If there is enough time to hand over control to the human, the co-system starts a two-stage take-over pattern (based on e.g. Rhede et al. [23], Winkler et al. [27] or Guo et al. [17]) to let the driver gain situational awareness and take back control safely. Depending on the predicted ability of the driver, the first warning might be sufficient, or the second warning stage has to be triggered, starting at \(t_{1.2}\). If the transition fails because the human is either unwilling or unable to take over in time, according to Herzberger et al. [19], the co-system aborts the take-over transition and starts another transition to the MRM, leading to \(t_{2.1}\). Only if the transition is successful is control transferred to the human, and the automation accordingly loses control over the driving related activity (\(t_{2.2}\)).
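The two-stage take-over sequence just described can be traced as a tiny state sequence, reusing the time labels from Fig. 5. This is a schematic of the sequence, not the project’s state machine; the boolean readiness inputs stand in for the diagnostic driver assessment.

```python
def two_stage_takeover(ready_after_stage1: bool,
                       ready_after_stage2: bool) -> list:
    """Trace of the two-stage take-over pattern: a first warning (t1.1),
    an optional second warning stage (t1.2), and either a successful
    hand-over (t2.2) or an abort into the MRM (t2.1)."""
    trace = ["t1.1: first TOR stage"]
    if ready_after_stage1:
        trace.append("t2.2: control transferred to human")
        return trace
    trace.append("t1.2: second TOR stage")
    if ready_after_stage2:
        trace.append("t2.2: control transferred to human")
    else:
        trace.append("t2.1: take-over aborted, MRM started")
    return trace
```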
4 Exploration of the Confidence Horizon Cooperation Design
To explore the design options for the cooperation between human and co-system and in particular the HMI used in the use case of a breakdown vehicle, a Human Systems Exploration (as described by Flemisch et al. [13]) was conducted at the IAW Exploroscope.
The chosen use case was the appearance of a stopped vehicle in the center lane of a three-lane highway, with traffic in the left lane. To avoid a collision with the vehicle in front, there are two possibilities: Either one brakes and stops before reaching the vehicle, staying vulnerable to traffic from behind, or one changes to the right lane to avoid a collision. It is assumed that the automation is unable or not allowed to perform the evasive maneuver (see Note 1).
The setup consisted of two scenarios representing the safety buffer and safety gap cases in two different severity levels of time to collision (TTC) with \(TTC = 10\,\textrm{s}\) and \(TTC = 3\,\textrm{s}\), indicated by the distance between the ego and the breakdown vehicle.
A total of \(N = 12\) persons (\(41.67 \%\) female, \(58.33 \%\) male) with an average age of 30 years (\(\sigma ^2 = 7.98\)) participated in the exploration. Due to the Covid-19 restrictions in 2020, the exploration was conducted partly on-site (\(n = 5\) participants) and partly online (\(n = 7\) participants). A digital whiteboard tool was used for documentation in both cases.
Participants were shown all four resulting situations on a digital whiteboard with the confidence horizon markings (cf. Fig. 6, right) displayed for reference and asked to share their thoughts on how the co-system should communicate a take-over request to the human. They were given the task of drawing a sketch of their proposed head up display (HUD) concept.
As a first finding, it should be noted that only one in 12 (\(8 \%\)) participants would display the confidence horizon (as in Fig. 6, right) directly to the driver. \(42 \%\) of participants would display the confidence horizon only for the ability of the co-system and only under certain conditions. \(50 \%\) would never display it to the driver: predicting human capability was perceived as confusing or uncanny, whereas displaying information about the area in which the co-system cannot control the vehicle was considered plausible. From these results, it is concluded that the confidence horizon can be a useful tool for cooperation design and for initiating transitions with foresight, but should be used with caution as an overly detailed HMI element.
Participants also noted that the information displayed in the visual HMI should be limited in order to focus attention, and that they prefer not to read text in a critical situation. \(33 \%\) indicated that a general warning message in the corners of the visible area would be useful. \(42 \%\) commented positively on the visualization of a lane change trajectory as well as on the display of the center lane trajectory with changing colors indicating the criticality of the distance to the obstacle ahead. Figure 7 shows the proposal for the safety buffer scenarios, combined from all the results collected. The participants wanted to be shown how much distance they still have before the situation becomes too critical if they do not react. The left lane is shown as blocked, and an arrow indicates the possible lane change to the right lane. An icon in the center of the field of view indicates necessary action. The broken-down vehicle is highlighted with a frame in warning color (red), annotated with the remaining distance in meters. In the corners of the field of view (which might be realized as part of the HUD or as ambient lighting), flashing colors emphasize the possible and impossible directions.
The safety gap scenarios were not fully understood by most of the participants. The main reason was that it is difficult to understand why the co-system would provide information on the situation even though it itself is failing at that very moment. This shows that it was unclear to the participants that situation awareness and the ability to execute the driving task are separate in the case of the co-system. Most importantly, participants wanted transparency of the automation’s actions in both cases. For example, the co-system should inform the driver that a minimum risk maneuver is being executed and that the driver may only take over control after the maneuver is completed.
5 Simulator Study of the Confidence Horizon Cooperation Design
To evaluate the proposed application of the confidence horizons, a study with \(N = 20\) participants was conducted in the static driving simulator at the IAW Exploroscope of RWTH Aachen University. The study produced far more results than can be shown in this last part of the chapter, so only an overview is given here, with more detailed publications to follow. The study tested three different designs in two different use cases. The use cases were:
Use case 1 “Avoidance of broken-down vehicle”, starting on the highway in SAE level 3/4, where drivers engaged in a non-driving related task had to take over control and avoid the obstacle by changing from the center to the right lane, as the left lane is blocked by fast, dense traffic.
Use case 2 “Avoidance of collision at X-intersection”, starting on a rural road in SAE level 3/4, where drivers engaged in a non-driving related task had to take over control and avoid a collision with a vehicle coming from the right.
Since the use cases are already very detailed here, they could be considered as use situations. In order to maintain the conceptual connection to the other chapters, we will nevertheless continue to refer to use cases here.
Each participant experienced both use cases and one of the three cooperation designs:
Design 1 is the baseline: Here, the driver only receives an acoustic takeover request from the automation, combined with an immediate dropout/deactivation of the automated system.
Design 2 is a combination of the first design with an MRM (Minimum Risk Maneuver). If the driver does not intervene after the drop out, emergency braking is automatically initiated.
Design 3 is a more complex, attention-sensitive design that combines the ideas of the confidence horizon: On the one hand, the driver’s ability to take over is determined by her or his orientation reaction, as proposed in the diagnostic TOR approach [19]. On the other hand, the capabilities of the automation are derived from the tested use cases. If the driver is classified as not ready to take over, a second warning stage is initiated. Here, depending on the human’s reaction to the TOR, her or his ability to execute the driving task, and the time remaining before the accident, the interaction mediator decided to either immediately return control to the automation, wait until the human was ready to take over, or immediately transfer control to the human. Thus, the time advantage resulting from the detection of the readiness to take over (see chapter by Herzberger et al. [18]) is used to trigger either a second warning, with a strong MRM still possible, or an early and comfortable MRM. As in designs 1 and 2, the driver in design 3 receives a TOR that is combined with visual warnings in the HUD, based on the results from the exploration (see Fig. 8).
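The design-3 logic can be sketched as a small decision rule. All thresholds are invented for illustration; the study does not publish concrete timing values, and the function name is hypothetical.

```python
def design3_mediator(driver_ready: bool, time_to_accident: float,
                     mrm_duration: float = 3.0,
                     second_warning_time: float = 2.0) -> str:
    """Sketch of the design-3 mediator: transfer control to a ready
    driver; otherwise trigger a second warning stage as long as a
    strong MRM would still be possible afterwards, else start an
    early and comfortable MRM right away."""
    if driver_ready:
        return "transfer control to human"
    if time_to_accident > second_warning_time + mrm_duration:
        return "second warning stage"
    return "early MRM"
```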
The photo at the bottom of Fig. 8 shows the HMI from design 3 in the highway use case with the broken-down vehicle. Here, the left lane, which is occupied by fast moving traffic, is covered by a semi-transparent red wall. In addition, a hands-on symbol is displayed above the road, along with the text “please take over” (in German). Starting from the ego-vehicle, a possible safe trajectory to the right lane is suggested by a green turn arrow. The clear right lane is also indicated by a green check mark at the bottom right of the windshield. In both designs with MRM (design 2 and design 3), the emergency braking can be overridden and it does not start until it is detected that the driver is not responding to the TOR. Figure 9 shows a tree or state-transition diagram of the three designs.
\(N = 20\) subjects participated in the study (\(45 \%\) female). The age of the participants ranged from 18 to 54 years (\(M = 28.90\) years, \(SD = 12.57\) years). The results of the Karolinska Sleepiness Scale (KSS) as well as the SOFI scale, which measure the fatigue of test subjects, did not differ significantly between the takeover design groups. Subjects were randomly assigned to the use cases intersection and highway and to the designs, so that each subject experienced one design and both use cases. The distribution of subjects was carefully balanced so that, as far as possible, there were an equal number of subjects in each design and in each possible use case sequence combination: \(n = 6\) were assigned to design 1, \(n = 7\) to design 2 and \(n = 7\) to design 3. All subjects experienced each use case twice. The first use case trial is referred to as \(t_1\) and the second trial as \(t_2\).
6 Results and Discussion
The evaluation was carried out in accordance with the principle of balanced analysis, which combines and balances subjective with objective, quantitative with qualitative, individual with averaged, and time-longitudinal with time-lateral perspectives (see Fig. 10, e.g. Flemisch et al. [12]).
The subjective data are further subdivided into results from the closed and open questions (quantitative vs. qualitative). An extraction of the objective results is shown in Table 1. Here, the takeover success by design and use case is presented.
Not surprisingly, the results reveal that across all designs and situations, subjects took over more successfully at \(t_2\) than at \(t_1\). Contrary to the hypothesis that subjects in design 3 would be fundamentally more successful in taking over the driving task than in designs 1 and 2, it appeared that design 3 performed better than design 1 only in the intersection use case. In the highway use case, however, the results were reversed, indicating an effect of the cooperation design or of the experimental design. These influencing effects need to be investigated in more detail in order to avoid potential side effects of the more complex attention-sensitive design and to realize, in the future and for all use cases, the potential of the concept that is already visible in the results of one of the two use cases.
The analysis of data related to driver ability in both use cases and all designs was conducted on aggregated data sets, as shown exemplarily in Fig. 11. The data set consists of gaze AOI (area of interest) data, grip force on the steering wheel, steering angle, pedal activation, and seat and seat back pressure. The data sets were evaluated to find a maximally universal pattern that describes the ability or inability of the human driver to take over control after the TOR was issued.
Regarding the ability of the driver, the results indicate a possible detection of the inability to take over. Gaze behavior shows that only \(11.7\%\) of successful drivers looked at any mirror more than once; successful drivers rather tend to keep a stable gaze on the road, which tends to lead to a successful takeover but does not guarantee it.
While the initial driver gaze gives a hint at the early orientation behavior of drivers, its analysis also leads to the conclusion that a successful takeover cannot be described by driver gaze alone; hence, more data points (cf. Fig. 11) were added to the analysis.
The combination of gaze, grip force and driver input (pedals and/or steering wheel) leads to a first model of a pattern for the successful (see Note 2) control transition to the driver after the TOR was issued by the automation. Figure 12 displays the successful (Fig. 12, top) and unsuccessful (Fig. 12, bottom) patterns found. \(87\%\) of all successful drivers followed the successful transition pattern, while \(95\%\) of all unsuccessful drivers followed the unsuccessful pattern, which hints at a better detection performance of the unsuccessful pattern. Focusing on the orientation and preparation stages of the pattern alone, still \(82\%\) of both successful and unsuccessful transitions are detected.
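A staged pattern of this kind can be approximated offline by checking that the logged signals appear in the expected order. The event names and the strict three-stage ordering below are simplifications invented for this sketch; the pattern to be published [26] is richer than this toy detector.

```python
def follows_successful_pattern(events: list) -> bool:
    """Toy detector for the successful transition pattern: after the TOR,
    orientation (gaze on road), preparation (grip on wheel) and action
    (pedal or steering input) must occur in this order; other logged
    events in between are ignored."""
    stages = ["gaze_on_road", "grip_on_wheel", "driver_input"]
    idx = 0
    for event in events:
        if idx < len(stages) and event == stages[idx]:
            idx += 1
    return idx == len(stages)
```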
This analysis and first pattern model give an orientation on how to implement the human part of the confidence horizon; however, the transfer from post-processing to an online detection of the confidence horizon still has to be made. A more detailed report on the analysis and the pattern found will be published in the near future [26].
The subjective, qualitative results from the balanced analysis provided a variety of indications for possible causes as well as further adaptation options for the HMI. For example, several subjects from all designs (\(n = 6\)) stated that they would like to see a TOR notice on the tablet. Furthermore, a clearer description of the hazard situation via a voice output instead of just a sound was desired (\(n = 4\)). The participants’ statements on perceived criticality, subjectively perceived takeover quality, and stress did not differ significantly between the designs, which is probably due to a small sample size. A detailed evaluation of the results and recommendations for the further development of the confidence horizon concept will be published in the near future.
7 Conclusion and Outlook
The initial concept of the confidence horizon, in conjunction with new ideas of diagnostic take-over requests (described in more detail in the chapter by Herzberger et al.), helped us to open up a new direction of attention- and ability-sensitive design of automated and cooperative systems. The concept can support design and development teams in cooperative vehicle automation, but also in other domains where machines and humans cooperate, to dynamically balance the abilities of agents, and to design and engineer transitions of control in a more transparent way compared to the traditional “on/off” thinking. With design explorations and experiments, some of which were described here, we were able to cut through a vast design and use space, at least in the driving simulator, and to identify the most prominent dimensions of this space of possibilities. Even if we are far from really mastering this new space of attention- and ability-based transitions, the chances are good that, in close cooperation with other research projects, e.g., from the DFG priority program CoInCar, the first design patterns can already be transferred to real vehicles and products. Equally important, we have paved the ground for further research, which will be necessary to fully master this design and use space of transitions as an important aspect of cooperatively interacting vehicles and human-machine cooperation.
Notes
- 1.
In Germany and other European countries, traffic rules do not allow overtaking other road users on the right outside built-up areas, e.g. on highways.
- 2.
Successful means in this context that the driver took over and resolved the situation without causing a crash of the ego vehicle or other vehicles.
References
Alexander, C., Ishikawa, S., Silverstein, M.: A Pattern Language: Towns, Buildings, Construction, Center for Environmental Structure Series, vol. 2. Oxford University Press, New York, NY (1977)
Baltzer, M.C.A.: Interaktionsmuster der kooperativen Bewegungsführung von Fahrzeugen. Dissertation, Shaker Verlag and Dissertation, RWTH Aachen University, 2020, Aachen, DOI 40345 (2021). https://publications.rwth-aachen.de/record/818952
Baltzer, M.C.A.: Interaktionsmuster der kooperativen Bewegungsführung von Fahrzeugen: Lehr- und Forschungsgebiet Systemergonomie/Lehrstuhl und Institut für Arbeitswissenschaft. Dissertation, Shaker Verlag and Dissertation, RWTH Aachen University, 2020, Aachen, DOI 40345 (2021). https://publications.rwth-aachen.de/record/818952
Baltzer, M.C.A., Altendorf, E., Meier, S., Flemisch, F.: Mediating Interaction between Human and automation during the arbitration processes in cooperative guidance and control of highly automated vehicles: base concept and first study. In: Ahram, T., Karwowski, W., Marek, T. (eds.) Proceedings of the 5th International Conference on Applied Human Factors and Ergonomics AHFE 2014, AHFE International, Kraków, Poland, pp. 2107–2118 (2014)
Borchers, J.O.: A pattern approach to interaction design. In: Boyarski, D., Kellogg, W.A. (eds.) Proceedings of the Conference on Designing Interactive Systems Processes, Practices, Methods, and Techniques—DIS ’00, pp. 369–378. ACM Press, New York, New York, USA (2000). https://doi.org/10.1145/347642.347795
Flemisch, F.: Pointillistische Analyse der visuellen und nicht-visuellen Interaktionsressourcen am Beispiel Pilot-Assistentensystem. Ph.D. thesis, Universität der Bundeswehr München (2001)
Flemisch, F., Herzberger, N., Usai, M., Baltzer, M., Schwalm, M., Voß, G., Krems, J., Quante, L., Trommler, D., Strelau, N., Burger, C., Stiller, C.: Cooperative Hub for Cooperative Research on Cooperatively Interacting Vehicles: Use Cases, Design and Interaction Patterns. Springer (in Press)
Flemisch, F., Adams, C.A., Conway, S.R., Goodrich, K.H., Palmer, M.T., Schutte, P.C.: The H-Metaphor as a guideline for vehicle automation and interaction (2003). https://ntrs.nasa.gov/citations/20040031835
Flemisch, F., Schieben, A., Kelsch, J., Löper, C.: Automation spectrum, inner/outer compatibility and other potentially useful human factors concepts for assistance and automation. In: de Waard, D., Flemischm, F., Lorenz, B., Oberheid, H., Brookhuis, K.A. (eds.) Human Factors for assistance and automation. Shaker Publishing (2008). https://elib.dlr.de/57625
Flemisch, F., Heesen, M., Hesse, T., Kelsch, J., Schieben, A., Beller, J.: Towards a dynamic balance between humans and automation: authority, ability, responsibility and control in shared and cooperative control situations. Cogn. Technol. Work 14(1), 3–18 (2012). https://doi.org/10.1007/s10111-011-0191-6
Flemisch, F., Altendorf, E., Canpolat, Y., Weßel, G., Baltzer, M.C.A., López Hernández, D., Herzberger, N.D., Voß, G., Schwalm, M., Schutte, P.: Uncanny and unsafe valley of assistance and automation: first sketch and application to vehicle automation. In: Advances in Ergonomic Design of Systems, Products and Processes, pp. 319–334. Springer, Berlin, Heidelberg (2017). https://doi.org/10.1007/978-3-662-53305-5_23
Flemisch, F., Preutenborbeck, M., Baltzer, M., Wasser, J., Meyer, R., Herzberger, N., Bloch, M., Usai, M., Lopez, D.: Towards a balanced analysis for a more intelligent human systems integration. In: Advances in Intelligent Systems and Computing, pp. 31–37. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-68017-6_5
Flemisch, F., Preutenborbeck, M., Baltzer, M.C.A., Wasser, J., Kehl, C., Grünwald, R., Pastuszka, H.M., Dahlmann, A.: Human systems exploration for ideation and innovation in potentially disruptive defense and security systems. In: Advanced Sciences and Technologies for Security Applications, pp. 79–117. Springer International Publishing, Cham (2022). https://doi.org/10.1007/978-3-031-06636-8_5
Flemisch, F., Usai, M., Herzberger, N.D., Baltzer, M.C.A., Hernandez, D.L., Pacaux-Lemoine, M.P.: Human-machine patterns for system design, cooperation and interaction in socio-cyber-physical systems: introduction and general overview. In: 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1278–1283. IEEE (2022). https://doi.org/10.1109/SMC53654.2022.9945181
Flemisch, F., Usai, M., Wessel, G., Herzberger, N.: Human system patterns for interaction and cooperation of automated vehicles and humans. at - Automatisierungstechnik 71(4), 278–287 (2023). https://doi.org/10.1515/auto-2022-0160
Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design patterns: abstraction and reuse of object-oriented design. In: European Conference on Object-Oriented Programming, pp. 406–431. Springer, Berlin, Heidelberg (1993). https://doi.org/10.1007/3-540-47910-4_21
Guo, H., Zhang, Y., Cai, S., Chen, X.: Effects of level 3 automated vehicle drivers’ fatigue on their take-over behaviour: a literature review. J. Adv. Transp. 2021, 1–12 (2021). https://doi.org/10.1155/2021/8632685
Herzberger, N., Usai, M., Schwalm, M., Flemisch, F.: Cooperation Between Vehicle and Driver: Predicting the Driver’s Takeover Capability in Cooperative Automated Driving Based on Orientation Patterns (in Press)
Herzberger, N.D., Eckstein, L., Schwalm, M.: Detection of missing takeover capability by the orientation reaction to a takeover request. In: 27th Aachen Colloquium Automobile and Engine Technology 2018, pp. 1231–1240 (2018)
Herzberger, N.D., Usai, M., Flemisch, F.: Confidence horizon for a dynamic balance between drivers and vehicle automation: first sketch and application. Hum. Factors Transp. (2022). AHFE International. https://doi.org/10.54941/ahfe1002431
Löper, C., Kelsch, J., Flemisch, F.: Kooperative, manöverbasierte Automation und Arbitrierung als Bausteine für hochautomatisiertes Fahren. In: AAET—Automatisierungs-, Assistenzsysteme und eingebettete Systeme für Transportmittel, Network, GZVB, Braunschweig (2008)
López Hernández, D., Vorst, D., Baltzer, M.C.A., Bielecki, K., Flemisch, F.: Parts of a whole: first sketch of a block approach for interaction pattern elements in cooperative systems. In: Mařík, V. (ed.) International Conference on Systems, Man, and Cybernetics. IEEE (2022)
Rhede, J., Wäller, C., Oel, P.: Der FAS Warnbaukasten. Strategie für die systematische Entwicklung und Ausgabe von HMI-Warnungen. In: 6. VDI-Tagung Der Fahrer im 21. Jahrhundert, VDI Verlag, Düsseldorf, VDI-Berichte (2011). https://trid.trb.org/view/1217567
SAE: SAE International Standard J3016: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (2021)
Sebanz, N., Bekkering, H., Knoblich, G.: Joint action: bodies and minds moving together. Trends Cogn. Sci. 10(2), 70–76 (2006). https://doi.org/10.1016/j.tics.2005.12.009
Usai, M., Herzberger, N., Flemisch, F.: Understanding human ability and intention to improve cooperative automated driving takeovers following a pattern approach, submitted to IEEE SMC 2023 (2023)
Winkler, S., Werneke, J., Vollrath, M.: Timing of early warning stages in a multi stage collision warning system: drivers’ evaluation depending on situational influences. Transport. Res. F: Traffic Psychol. Behav. 36, 57–68 (2016). https://doi.org/10.1016/j.trf.2015.11.001
de Winter, J.C.F., Happee, R., Martens, M.H., Stanton, N.A.: Effects of adaptive cruise control and highly automated driving on workload and situation awareness: a review of the empirical evidence. Transport. Res. F: Traffic Psychol. Behav. 27, 196–217 (2014). https://doi.org/10.1016/j.trf.2014.06.016
Acknowledgements
This publication was funded within the Priority Programme 1835 “Cooperative Interacting Automobiles (CoInCar)” of the German Science Foundation (DFG).
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2024 The Author(s)
About this chapter
Cite this chapter
Usai, M., Herzberger, N., Yu, Y., Flemisch, F. (2024). Confidence Horizons: Dynamic Balance of Human and Automation Control Ability in Cooperative Automated Driving. In: Stiller, C., Althoff, M., Burger, C., Deml, B., Eckstein, L., Flemisch, F. (eds) Cooperatively Interacting Vehicles. Springer, Cham. https://doi.org/10.1007/978-3-031-60494-2_18
DOI: https://doi.org/10.1007/978-3-031-60494-2_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-60493-5
Online ISBN: 978-3-031-60494-2