Abstract
This chapter describes central cooperative activities in the research priority program Cooperatively Interacting Vehicles (CoInCar). If the whole research program CoInCar can be seen as a wheel that turns research questions into answers, knowledge and, hopefully, progress for society, the individual research projects described in the other chapters can be seen as the spokes of the wheel, and the aspects described in this chapter as its informal cooperative hub. Starting with common essential definitions, a use case catalogue was derived and documented. Based on that, cooperation and interaction patterns were sketched and documented in a pattern database. While the details of the research hub described here are specific to this DFG priority program, the general principles of a research hub can be transferred to any other research and development activity.
1 Introduction: The Big Picture—From Cooperative Homo Heidelbergensis to Cooperative Human Machine Systems
In general, movement through space and time is a vital feature of life. Movement in the form of mobility, by foot, bicycle, car, train or airplane, is an important part of our lives as individuals, organizations and societies. Cooperation, in contrast to competition, has long been a central aspect of mobility, and it was already important for our development as Homo sapiens, as the following example shows.
Figure 1 shows one of ten wooden spears which were excavated from 1994 onwards at Schoeningen near Braunschweig in the north of Germany. These throwing spears, dated to between 380,000 and 400,000 years of age, represent the oldest preserved hunting weapons of prehistoric Europe yet discovered [34]. These spears are not only an early example of weapon technology, but also of an art which was much later called Human Factors (Engineering): Homo heidelbergensis, a forerunner of Homo sapiens, was already able to combine different techniques, like cutting to carve and fire to harden, to create an effective tool and adapt it to the individual bearer. These spears are also an early example of Human Systems Integration, which in the 21st century is understood as the integration of humans, technology, organization, and environment, and which was already an important factor for Homo heidelbergensis: Close to the location of the spears, many horse bones were found. Anthropologists reconstructed that a tribe of Homo heidelbergensis hunted, rounded up, speared, and ate these horses. Especially the production of the spears, which can be seen as a clever use of or integration with the environment, and the cooperative hunting required a degree of organization which was not available to rival species.
It is obvious that movement and mobility in combination with these tools was one of the key factors for the success of these early human tribes. But how could this relatively slow species round up and eat other species which were physically much faster and stronger? The key can be found not so much on the physical layer, but on the cognitive layer of evolution: Tomasello [35] describes how human cognition evolved together with the ability to create and handle such tools, and especially how cooperation and shared intentionality, e.g. of hunting AND tool manufacturing, fostered the evolution of the genus Homo towards Homo sapiens as one of the most dominant species on this planet. Cooperation and teaming were obviously essential for early hunting. Cooperation and teaming might also be essential for cooperatively interacting vehicles, and for the researchers and developers addressing these complex cooperative systems. It is not by accident that communities of research institutions applying for a research grant nickname themselves “hunting communities” or “hunting tribes”, but from the insight that the ability to cooperate might be similarly important in hunting and in research.
What Tomasello [35] describes as shared intentionality, other researchers like Norman [28] or Gentner [16] describe as shared mental models (cf. Fig. 2), where mismatches between the mental models of system designers and system users might lead to dangerous errors in the design or use of sociotechnical systems. Mental models are also at the very core of cooperatively interacting vehicles, and of the cooperative research on cooperatively interacting vehicles.
Applied to cooperatively interacting vehicles, their setup might use similar cognitive capacities to those which already helped Homo heidelbergensis to move cooperatively, but it includes a new complexity: Here, not only individuals and groups of Homo sapiens are involved, but also a new player in cognitive evolution: the computer. In less than a century since its first invention by Konrad Zuse in 1941, the computer and later Artificial Intelligence (AI) has become a central player in sociotechnical systems. The teaming of computers and humans is already hinted at in Wiener's famous 1948 book on cybernetics, where he describes feedback loops as the central mechanism of intelligence both in the animal and the machine. Later, Licklider [24] described symbiotic human-computer systems. Rasmussen [31] proposed the term cooperation, Hollnagel and Woods [22] and Sheridan [33] defined initial principles, and Hoc and Lemoine [20] and Hoc [19] described common ground and know-how-to-cooperate as important parts of developing human-computer cooperation.
A major breakthrough was to think of cognition not only as something separate, assigned to individual agents, but also as something which is combined or joined between the different players, i.e. Joint Cognition or Joint Cognitive Systems. Hollnagel sketched how these Joint Cognitive Systems can be nested, from the small to the big, and thereby prepared the ground for a system-of-systems approach. A system of systems can be understood as the joining of individual systems which “deliver important emergent properties, which have an evolving nature that stakeholders must recognize, analyze and understand” (e.g. Maier [27]). Flemisch et al. [9] described how human and machine cognition in systems of systems can cooperate on levels with different time frequencies, yet still work together like the blunt end and the sharp end of a spear. Flemisch et al. [13] extended this view to conflicts that can happen between agents in cognitive systems, and how these can be mitigated. Examples of conflicts are differing intentions of humans and machines, e.g. vehicle automation, about where to go and how fast. Flemisch et al. [11] describe a holistic bowtie model of meaningful and effective control, which brings together the individual human-machine system with a system-of-systems, organizational, societal and environmental perspective. Cooperation between these layers is, once again, a central key to the failure or success of these systems.
As already hinted at by Hoc [19], the key to any cooperative activity is to have sufficient common ground and a common work space between humans and computers in the form of shared mental models. This proved to apply even more to cooperatively interacting vehicles. Related to common ground is the concept of inner and outer compatibility (e.g. Flemisch et al. [8]), which describes the ability of interfaces on the outer system border between humans and machines to play together, and the ability of the inner mental models to interact in a cooperative way.
In general, the development of shared mental models does not start from scratch but is always a development and migration. Starting points range from basic image schemas which we inherited from our ancestors (e.g. Lakoff [23], Baltzer [3]), through patterns we learned during our lives, to deliberate discussions in our research and development community on how sociotechnical systems, here cooperatively interacting vehicles, should work together amongst themselves and with the humans involved. In this ongoing effort to shape the mental models, it is important that mental models evolve cooperatively. They are never all up to date at the same time, as Fig. 3 shows.
Applied to cooperatively interacting vehicles, Fig. 3 shows an example of inconsistent mental models, where the human on board of an automated vehicle assumes that the vehicle automation is in control, while another user assumes that the human is in control. Such a misunderstanding already led to a deadly accident in 2018 with a highly automated vehicle operated by Uber [29].
Figure 4 depicts a very simplified model of how shared mental models in the research community of automated and interconnected driving might have evolved: starting with a simple “black and white” model of manual or fully automated driving, then the intense discussions on different levels of automation, sparked by the theoretical work of Parasuraman et al. [30], boosted by insights like the H(orse)-Metaphor [7], the first formulation of highly automated driving and its practical solutions like H-Mode [2] or Conduct-by-Wire [37], leading to the BASt and SAE levels of driving automation [15, 32]. With CoInCar, we entered a new stage of automation, which still uses levels of driving automation, but connects the differently automated users and automations with cooperative driving patterns.
2 Bringing Researchers Together: Concepts and Definitions Wiki, Ph.D. Workshops
In general, common ground and common mental models for researchers usually do not start with definitions, but with common inspiration, ideas and visions, as vague or fuzzy as they might initially be. Only if this inspirational common ground is assured first does the more tedious work on common concepts and definitions have a chance to succeed. Even then, with complex systems and interdisciplinary teams, it is often impossible to achieve the crispness of definitions that scientists were able to achieve in the physical sciences. Especially in the integration of humans, technology, organization and the environment, so many disciplines are involved that crisp definitions like those in physics are highly impractical; a higher plasticity of concepts and definitions has to be tolerated from the very beginning if the definitions are really to open a chance to converge between disciplines.
Applied to CoInCar, concepts and definitions were worked out in a series of workshops in mixed subgroups, documented in a Wiki and deconflicted over the duration of the project (see Fig. 5). It is important to note that the approach was not to deconflict all differences in the definitions—this alone would have consumed most of the research budgets—but to find a common ground just big enough to start cooperation, and to further consolidate the Wiki “on the job”.
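The idea of keeping per-discipline definitions side by side while extracting a "just big enough" common core can be sketched in code. The following Python fragment is purely illustrative: the class, the field names and the word-overlap heuristic are assumptions for demonstration, not the structure of the actual CoInCar Wiki.

```python
from dataclasses import dataclass, field

@dataclass
class WikiEntry:
    """One concept in a shared definitions wiki (illustrative sketch)."""
    term: str
    # One working definition per discipline; they need not agree fully.
    definitions: dict = field(default_factory=dict)

    def common_ground(self) -> set:
        """Words shared by all disciplinary definitions: a rough proxy for
        the 'just big enough' common core described in the text."""
        word_sets = [set(d.lower().split()) for d in self.definitions.values()]
        return set.intersection(*word_sets) if word_sets else set()

# Hypothetical entry with two deliberately different disciplinary definitions.
entry = WikiEntry("cooperation")
entry.definitions["human factors"] = "agents pursue a shared goal together"
entry.definitions["control engineering"] = "controllers pursue a shared goal jointly"
print(entry.common_ground())  # the vocabulary all disciplines already share
```

The point of the sketch is that deconfliction does not delete the disciplinary definitions; it only computes and maintains their overlap.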
For further networking within the priority program, a series of structured activities took place for the Ph.D. researchers. For example, every two years there was a cross-project Ph.D. workshop in which the researchers presented their research topics, discussed them, and identified links between the subprojects. Furthermore, individual disciplines held regular Ph.D. workshops. One example is the regular meeting of the human factors researchers, who met once a month for one hour in a hybrid format. Here, short presentations were given in a rotating process, and the researchers' own progress and difficulties were exchanged and discussed within the group in order to benefit from the experiences of the other research groups.
3 Bringing Researchers and Developers Together: Use Case Catalogues
In general, to really understand and master complex sociotechnical systems with all their possible combinations, a system analysis can lay the structure for interdisciplinary teams. Over the years of Systems Engineering and Human Systems Integration, structuring systems into a problem and a solution space (e.g. Haberfellner et al. [18]), and into a design space, use space and value space, has shown good results in mastering the complexity (for an overview and history of these concepts and their application to the exploration of human-machine systems, see e.g. Flemisch et al. 2022a).
Applied to CoInCar, the alignment of the mental models of the researchers started with the use space, i.e. the dimensions of use and their combination into use cases and use situations. Based on the positive experience of working with use cases in EU projects on highly automated driving (e.g. Hoeger et al. [21]), and on research efforts to define a unified ontology for test and use case catalogues in DFG projects before CoInCar (e.g. Geyer et al. [17]), CoInCar started with the discussion, selection and definition of initial use cases of cooperatively interacting vehicles.
Figure 6 shows the use case tree of CoInCar as an overview of the use cases addressed in the consortium. Starting with the use case family of obstacle avoidance, the use case families of lane change, intersection, parking and roundabout were identified, and individual use cases were documented. In deconflicting sessions the use cases were discussed and, where possible, aligned. The use case catalogue also served as a map to explain the priority program and to onboard new researchers.
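A use case tree of this kind lends itself to a simple two-level data structure: families at the top, individual use cases below. The family names below follow Fig. 6; the individual use cases, the class layout and the lookup helper are illustrative assumptions, not the actual catalogue contents.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UseCase:
    name: str
    description: str = ""

@dataclass
class UseCaseFamily:
    name: str
    cases: List[UseCase] = field(default_factory=list)

# Family names as in the CoInCar use case tree; example cases are hypothetical.
catalogue = [
    UseCaseFamily("obstacle avoidance", [UseCase("evade static obstacle")]),
    UseCaseFamily("lane change", [UseCase("cooperative merge on highway")]),
    UseCaseFamily("intersection", [UseCase("unsignalized left turn")]),
    UseCaseFamily("parking", []),
    UseCaseFamily("roundabout", []),
]

def find_family(catalogue: List[UseCaseFamily], use_case_name: str) -> Optional[str]:
    """Locate which family a given use case belongs to, if any."""
    for family in catalogue:
        if any(c.name == use_case_name for c in family.cases):
            return family.name
    return None

print(find_family(catalogue, "cooperative merge on highway"))  # lane change
```

Such a structure supports the two roles the text names for the catalogue: a map for explaining the program, and an index for onboarding new researchers.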
4 Bringing Researchers, Developers and Users Together: Pattern Catalogue of Cooperatively Interacting Vehicles
In general, complex systems can be decomposed into system models, use space, design space and value space. This helps with understanding the individual components of the system, but not yet with understanding their relations, and only partially with recomposing and designing system variants. Seeing the design, use and value spaces as systems themselves, and taking Luhmann's argument that “contact happens at the border” [26] of these systems seriously, it is crucial to find a way to describe the interplay of these dimensions as a combination with a combined effect: how a certain system design is used by the user, and what effects this has on users and the surrounding system. The challenge is to find a representation that really grasps the essence of a specific design, use and value in a way that is general enough to be reused, and understandable enough to connect researchers, developers and users.
An essential concept for achieving this is that of patterns, here design and interaction patterns. Patterns can be traced back to the philosophical theory of Forms (e.g. Plato, ca. 427 B.C.). Setting aside the long philosophical dispute about whether forms are something outside of the physical world or just mental models in the brain of the analysts, Christopher Alexander described architecture as a language of design patterns [1]. This concept was transferred to software design patterns by Gamma et al. [14], to human-computer interaction by Borchers [4], and to human-machine systems e.g. by Flemisch [6] and Baltzer [3]. Based on Alexander's initial definition, Flemisch et al. [10] understand patterns as follows:
A pattern describes something that occurs over and over again. An example for this is a problem and/or its solutions. If this can be observed, and its core can be mapped and modelled, you can either observe and match the pattern over and over again, without ever making the identical observation twice. And/or you can instantiate and design with this pattern over and over again, not necessarily doing it the same way twice. Examples for this are designing, engineering and using of artefacts like human-machine systems. (Flemisch et al. [10] based on Alexander et al. [1])
Patterns bridge the more concrete world of applications with the more abstract world of concepts, and can provide a common mental model of the sociotechnical system and its principal understanding, design and usage. With that, patterns can be a crucial technique to bring designers, engineers, users and other stakeholders together (see Fig. 7).
Patterns can be based on use cases or use situations, and then describe how the use usually happens and with which results. This can be done on different levels of detail, from a very general description of usage (e.g. Baltzer [3]) up to a more detailed description of the interaction happening in a certain use situation (e.g. Flemisch [6], López Hernández et al. [25]).
Patterns can be freely formed, or transferred between domains, e.g. from the biosphere to the technosphere. A striking example of the potential of transferring design and interaction patterns, shown in Fig. 8, is flying, where Otto Lilienthal systematically evaluated the flight of birds and transferred the most important principles into design patterns, e.g. of wings and airfoils, which still form the basis of flying today.
Applied to CoInCar, patterns influenced the scientific undertaking from the very beginning, e.g. in the form of the H-Metaphor as an inspiration for the cooperation between automation and driver, but also for the cooperation between vehicles as a herd or flock. More inspiration came from other domains like dancing, where a common understanding of figures and movements, i.e. patterns, allows dancers to move together and enjoy it (see Fig. 9).
More concretely, the pattern concept was introduced in the second half of the CoInCar priority program, and discussed and refined in a series of workshops.
Based on this conceptual work between the individual research groups, a first database was built up and filled for a first test. Figure 10 shows a fundamental pattern, “inform, warn, intervene”, as an example of a cooperation pattern which was used in a couple of use cases of CoInCar. Figure 11 shows another fundamental family of patterns, “transition of control”, which was investigated in a couple of explorations and experiments in CoInCar.
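A pattern database of this kind can be sketched with Alexander-style records (context, problem, solution core) linked to the use cases in which each pattern applies. The field layout, the wording of the entries and the query helper below are assumptions for illustration, not the schema or content of the actual CoInCar database; only the two pattern names come from the text.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CooperationPattern:
    """Alexander-style pattern record: context, problem, solution core.
    This layout is an illustrative sketch, not the CoInCar schema."""
    name: str
    context: str
    problem: str
    solution: str
    use_cases: List[str] = field(default_factory=list)

database = [
    CooperationPattern(
        name="inform, warn, intervene",
        context="automation detects a developing hazard",
        problem="when and how strongly should the machine act?",
        solution="escalate from information via warning to intervention",
        use_cases=["obstacle avoidance", "lane change"],
    ),
    CooperationPattern(
        name="transition of control",
        context="control authority must move between human and automation",
        problem="avoid mismatched mental models about who is in control",
        solution="make handovers explicit and mutually acknowledged",
        use_cases=["lane change", "intersection"],
    ),
]

def patterns_for(use_case: str) -> List[str]:
    """Look up which documented patterns apply to a given use case."""
    return [p.name for p in database if use_case in p.use_cases]

print(patterns_for("lane change"))
```

Linking patterns to use cases in this way is what lets the database act as the bridge between the use case catalogue of Sect. 3 and concrete design work.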
5 Conclusion and Outlook
It is important to note that priority programs are usually not as rigidly organized as e.g. excellence clusters or industrial research and development projects. Organizing a research hub like the one in CoInCar, based on a concept Wiki, a use case catalogue and a first pattern database, was an exploration of ideas, with promising first results, but far from providing complete catalogues or databases that are ready to use in industry. Nevertheless, these results can provide inspiration, or a concrete first core, for more rigidly organized research and development projects in the realm of cooperatively interacting vehicles, or beyond in the realm of cooperating human-machine systems, including human-AI systems.
We see a huge potential in the combination of use cases and design/interaction patterns, which can clearly help to manage the complexity of future cooperative systems. Our vision is that the know-how about human-machine patterns and their usage is increasingly collected in easy-to-access and easy-to-use databases, ideally globally (cf. Fig. 12). The key will be to provide the human-machine patterns in a way that they can be easily used in design, engineering, and research activities, so that know-how can flow freely back and forth between researchers, designers, engineers, users and policy makers. This could also be the core for incident and accident databases that, along with cooperative research and development, could make our world safer, more sustainable, and more joyful to live in.
References
Alexander, C., Ishikawa, S., Silverstein, M.: A Pattern Language: Towns, Buildings, Construction, Center for Environmental Structure Series, vol. 2. Oxford University Press, New York, NY (1977)
Altendorf, E., Baltzer, M., Kienle, M., Meier, S., Weißgerber, T., Heesen, M., Flemisch, F.: H-Mode 2D. In: Handbuch Fahrerassistenzsysteme, pp. 1123–1138. Springer Vieweg, Wiesbaden (2015). https://doi.org/10.1007/978-3-658-05734-3_60
Baltzer, M.C.A.: Interaktionsmuster der kooperativen Bewegungsführung von Fahrzeugen. Dissertation, RWTH Aachen University, 2020. Shaker Verlag, Aachen (2021). https://publications.rwth-aachen.de/record/818952
Borchers, J.O.: A pattern approach to interaction design. In: Boyarski, D., Kellogg, W.A. (eds.) Proceedings of the Conference on Designing Interactive Systems Processes, Practices, Methods, and Techniques—DIS ’00, pp. 369–378. ACM Press, New York, New York, USA (2000). https://doi.org/10.1145/347642.347795
Canpolat, Y., Voß, G.M.I., Herzberger, N.D.: Use Case Catalogue (2016)
Flemisch, F.: Pointillistische Analyse der visuellen und nicht-visuellen Interaktionsressourcen am Beispiel Pilot-Assistentensystem. Ph.D. thesis, Universität der Bundeswehr München (2001)
Flemisch, F., Adams, C.A., Conway, S.R., Goodrich, K.H., Palmer, M.T., Schutte, P.C.: The H-Metaphor as a guideline for vehicle automation and interaction (2003). https://ntrs.nasa.gov/citations/20040031835
Flemisch, F., Schieben, A., Kelsch, J., Löper, C.: Automation spectrum, inner/outer compatibility and other potentially useful human factors concepts for assistance and automation. In: de Waard, D., Flemisch, F., Lorenz, B., Oberheid, H., Brookhuis, K.A. (eds.) Human Factors for Assistance and Automation. Shaker Publishing (2008). https://elib.dlr.de/57625
Flemisch, F., Abbink, D.A., Itoh, M., Pacaux-Lemoine, M.P., Weßel, G.: Joining the blunt and the pointy end of the spear: towards a common framework of joint action, human-machine cooperation, cooperative guidance and control, shared, traded and supervisory control. Cogn. Technol. Work 21(4), 555–556 (2019)
Flemisch, F., Usai, M., Herzberger, N.D., Baltzer, M.C.A., Hernandez, D.L., Pacaux-Lemoine, M.P.: Human-machine patterns for system design, cooperation and interaction in socio-cyber-physical systems: introduction and general overview. In: 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp 1278–128. IEEE (2022). https://doi.org/10.1109/SMC53654.2022.9945181
Flemisch, F., Baltzer, M.C.A., Abbink, D.A., Siebert, L.C., van Diggelen, J., Herzberger, N.D., Draper, M., Boardman, M., Pacaux-Lemoine, M.P., Wasser, J.: Towards a dynamic balance of humans and AI-based systems within our global society and environment—holistic bowtie model of meaningful human control over effective systems. In: van den Hoven, J., Abbink, D.A., Santoni de Sio, F., Amoroso, D., Mecacci, G., Siebert, L. (eds.) Meaningful Human Control of Artificial Intelligence Systems (in Press)
Flemisch, F., Abendroth, B., Bengler, K., Peters, S., Vortisch, P.: Migration of Road Vehicle Automation (Submitted)
Flemisch, F.O., Pacaux-Lemoine, M.P., Vanderhaegen, F., Itoh, M., Saito, Y., Herzberger, N., Wasser, J., Grislin, E., Baltzer, M.: Conflicts in human-machine systems as an intersection of bio- and technosphere: cooperation and interaction patterns for human and machine interference and conflict resolution. In: 2020 IEEE International Conference on Human-Machine Systems (ICHMS), pp. 1–6 (2020). https://doi.org/10.1109/ICHMS49158.2020.9209517
Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design patterns: abstraction and reuse of object-oriented design. In: European Conference on Object-Oriented Programming, pp. 406–431. Springer, Berlin, Heidelberg (1993). https://doi.org/10.1007/3-540-47910-4_21
Gasser, T.M., Arzt, C., AYoubi, M., Bartels, A., Buerkle, L., Eier, J., Flemisch, F., Haecker, D., Hesse, T., Huber, W., Lutz, C., Maurer, M., Ruth-Schumacher, S., Schwarz, J., Vogt, W.: Rechtsfolgen zunehmender Fahrzeugautomatisierung. 83, Wirtschaftsverlag NW (2012). http://bast.opus.hbz-nrw.de/volltexte/2012/587/pdf/F83.pdf
Gentner, D.: Mental models, psychology of. In: Sills, D.L. (ed.) International Encyclopedia of the Social and Behavioral Sciences/edited by Neil J. Smelser and Paul B. Baltes, pp. 9683–9696. Elsevier Science, New York (2001). https://doi.org/10.1016/B0-08-043076-7/01487-X
Geyer, S., Baltzer, M., Franz, B., Hakuli, S., Kauer, M., Kienle, M., Meier, S., Weißgerber, T., Bengler, K., Bruder, R., Flemisch, F., Winner, H.: Concept and development of a unified ontology for generating test and use-case catalogues for assisted and automated vehicle guidance. IET Intell. Transp. Syst. 8(3), 183–189 (2014). https://doi.org/10.1049/iet-its.2012.0188
Haberfellner, R., de Weck, O., Fricke, E., Vössner, S.: Systems Engineering: Fundamentals and Applications. Birkhäuser, Cham, Switzerland (2019)
Hoc, J.M.: From human-machine interaction to human-machine cooperation. Ergonomics 43(7), 833–843 (2000). https://doi.org/10.1080/001401300409044
Hoc, J.M., Lemoine, M.P.: Cognitive evaluation of human-human and human-machine cooperation modes in air traffic control. Int. J. Aviat. Psychol. 8(1), 1–32 (1998). https://doi.org/10.1207/s15327108ijap0801_1
Hoeger, R., Zeng, H., Hoess, A., Kranz, T., Boverie, S., Strauss, M., Jakobsson, E., Beutner, A., Bartels, A., To, T.B., Stratil, H., Fürstenberg, K., Ahlers, F., Frey, E., Schieben, A., Mosebach, H., Flemisch, F., Dufaux, A., Manetti, D., Amditis, A., Mantzouranis, I., Lepke, H., Szalay, Z., Szabo, B., Luithardt, P., Gutknecht, M., Schoemig, N., Kaussner, A., Nashahibi, F., Resende, P., Vanholme, B., Glaser, S., Allemann, P., Seglö, F., Nilsson, A.: The future of driving–HAVEit (Final Report, Deliverable D61.1) (2011)
Hollnagel, E., Woods, D.D.: Cognitive systems engineering: new wine in new bottles. Int. J. Man Mach. Stud. 18(6), 583–600 (1983). https://doi.org/10.1016/s0020-7373(83)80034-0
Lakoff, G.: Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. University of Chicago Press, Chicago (1990)
Licklider, J.C.R.: Man-computer symbiosis. IRE Trans. Hum. Factors Electron. HFE-1(1), 4–11 (1960). https://doi.org/10.1109/thfe2.1960.4503259
López Hernández, D., Vorst, D., Baltzer, M.C.A., Bielecki, K., Flemisch, F.: Parts of a whole: First Sketch of a block approach for interaction pattern elements in cooperative systems. In: Mařík, V. (ed.) International Conference on Systems, Man, and Cybernetics. IEEE (2022)
Luhmann, N.: Soziale systeme: Grundriss einer allgemeinen Theorie. Suhrkamp (1984). https://ixtheo.de/record/040204065
Maier, M.W.: Architecting principles for systems-of-systems. Syst. Eng. 1(4), 267–284 (1998). https://doi.org/10.1002/(SICI)1520-6858(1998)1:4<267::AID-SYS3>3.0.CO;2-D
Norman, D.A.: The Psychology of Everyday Things. Basic Books (1988)
NTSB, National Transportation Safety Board: Highway Accident Report NTSB/HAR-19/03: Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian (2019). https://www.ntsb.gov/investigations/AccidentReports/Reports/HAR1903.pdf
Parasuraman, R., Sheridan, T.B., Wickens, C.D.: A model for types and levels of human interaction with automation. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 30(3), 286–297 (2000). https://doi.org/10.1109/3468.844354
Rasmussen, J.: Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Trans. Syst. Man Cybern. SMC-13 (3), 257–266 (1983). https://doi.org/10.1109/tsmc.1983.6313160
SAE: SAE International Standard J3016: Taxonomy and Definitions for Terms related to Driving Automation Systems for On-Road Motor Vehicles (2021)
Sheridan, T.B.: Humans and automation: system design and research issues (2002). https://www.cambridge.org/core/services/aop-cambridge-core/content/view/s0263574702274858
Thieme, H.: Lower Palaeolithic hunting spears from Germany. Nature 385(6619), 807–810 (1997)
Tomasello, M.: A Natural History of Human Thinking. Harvard University Press (2014)
Weßel, G., Herzberger, N.D.: Concept and definition Wiki of the CoInCar project (2018)
Winner, H., Hakuli, S.: Conduct-by-wire–following a new paradigm for driving into the future (2006)
Acknowledgements
This publication was funded within the Priority Programme 1835 “Cooperative Interacting Automobiles (CoInCar)” of the German Science Foundation (DFG).
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2024 The Author(s)
Flemisch, F. et al. (2024). Cooperative Hub for Cooperative Research on Cooperatively Interacting Vehicles: Use Cases, Design and Interaction Patterns. In: Stiller, C., Althoff, M., Burger, C., Deml, B., Eckstein, L., Flemisch, F. (eds) Cooperatively Interacting Vehicles. Springer, Cham. https://doi.org/10.1007/978-3-031-60494-2_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-60493-5
Online ISBN: 978-3-031-60494-2