
4.1 Results of the Delphi Study

Population Survey: Research Questions, Results, and Explanation

The population survey examines how artificial intelligence (AI) will find its way into people’s work and private lives and how readily such systems will be accepted across the various application options in people’s everyday lives.

The survey reveals that robots and AI are generally viewed positively. Only 20% of respondents had a negative view of these two topics. Furthermore, the proportion of people who doubt the necessity of robots for society is below 10%. Accordingly, a high degree of willingness to accept robots can be assumed among the population.

Nevertheless, a deeper look into these topics shows that this result must be considered critically when it comes to the “reliability of systems” and their “areas of application.” On the first point, reliability of systems, only every third to fourth person surveyed considers robotic systems and AI to be reliable and error-free at the present time, i.e., technologies that are safe and trustworthy for humans. However, when it comes to the need for robotic systems and AI for work that is too difficult or too dangerous for humans, their use is considered very likely. The more specific question on areas of application confirmed this survey result and showed very clearly that the acceptance of AI depends on the area of application: the use of robotic systems in the home environment or in the care of people is approached much more critically than the use of these technologies in space and deep-sea research.

This result is also reflected in the following questions of the representative survey: “In which areas should robots be used as a priority?” and “In which areas should robots not be used at all?”

The answers were presented to the respondents as a list of areas, so that a limited but comprehensive set of answer options covering all areas of the study was already provided. Respondents were given the option of selecting up to five areas from this list: industry, commerce, the service sector, private life, medicine, human care, education, search and rescue, space exploration, marine/deep-sea exploration, transportation/logistics, agriculture, the military, or no field at all.

The evaluation showed that respondents rank the use of robots in space and deep-sea research predominantly in second and third place in percentage terms. Similarly, the area of search and rescue was ranked highly (Fig. 4.1). In contrast, respondents had difficulty imagining the use of robots in the area of human care (Fig. 4.2). In this ranking of non-preferred areas, space and deep-sea research were not mentioned by respondents at all. Accordingly, respondents favor robot deployment in areas that are rather distant and foreign to humans, both thematically and in terms of habitat.

Fig. 4.1
An illustration of the ranking of the application area of robot usage in terms of percentage. The industrial sector ranks at the top.

Priority ranking of preferred application areas for robot usage

Fig. 4.2
An illustration depicts the ranking of the non-preferred application area of robot usage in percentage. The human care sector ranks at the top.

Ranking of non-preferred application areas for robot usage

The first question that arises here is how these preferences may come about among the population. One fundamental point could be the “distance” factor of the operational area. For many people surveyed, space and the deep-sea represent fields of application that seem distant and very abstract: they do not touch everyday life and are inaccessible to the public. The area of home care, however, represents a very sensitive application area with which people have personal associations. In addition, there is the factor of “empathy” or “emotions,” which is generally not associated with a robotic system—a machine. In distant places of application, the latter factor plays no role; there, the inaccessibility of the environment and the safety of the human being are in the foreground.

Based on the current state of general knowledge within the population, preferring the use of robotic systems in challenging and hostile environments over areas of everyday life is therefore a reasonable conclusion. We will take this favored application field and take a closer look at what exactly robots have to be capable of when operating in the deep-sea or in space, and how this relates to a robot being perceived as an autonomous system. The results may help explain why respondents currently do not trust robotic systems enough to let them operate in environments that are sensitive for humans, e.g., in the care domain.

Potential Mission Scenarios of Robotic Systems in Harsh Environments

A potential hostile application area for robotic systems is the exploration of planetary surfaces (see Fig. 4.3). In this possible mission scenario, different robotic systems work together on a defined task as a team. Systems with a longer range can explore the environment with the help of sensors and cameras and send more agile systems into areas to examine the environment in detail (Brinkmann et al., 2019). Different means of locomotion can also be an advantage here due to the varying ground conditions and the systems’ different strengths, so that some tasks can only be completed successfully as a team. Furthermore, systems with grippers can take ground samples and pass them to other systems for preservation. Another possible mission is the exploration of caves. Here, robotic systems can be lowered into these caves by other systems, and the environment can be explored by cameras.

Fig. 4.3
A graphical illustration of 3 different robotic systems examining a crater on an extraterrestrial planetary surface.

Cooperative robotic team mission—Exploration of extraterrestrial planetary surfaces (Source: DFKI GmbH, Finn Lichtenberg)

In addition to extraterrestrial environments, the deep-sea on Earth is also an area of operation that represents a hostile and hardly accessible environment for humans (see Fig. 4.4). Here, in addition to the inspection and maintenance of infrastructure located on the seabed (cables, pipelines, offshore installations), the focus is also on the exploration of new areas that have not yet been discovered and/or are not accessible to humans—and thus on researching and answering wide-ranging scientific questions. Robotic systems equipped with sensors can take over these tasks for humans or support them in their tasks.Footnote 1

Other examples are the use of robotic systems in disaster areas to assist in human rescue and recovery, e.g., after the collapse of buildings (Queralta et al., 2020). In this case, camera-equipped robotic systems can be driven into areas that are difficult or impossible for humans to access in order to find potential victims, gain a general overview of the situation, and support their subsequent rescue. This avoids humans having to enter dangerous areas without knowing whether there are people to be rescued there.

To use robots effectively in these or similar applications in the future, a clear definition of the required and desired level of autonomy is necessary. This also shapes the required interaction with a human and our understanding of robots and humans working together. Mission operations in hostile environments—in space, in the deep-sea, or in hard-to-reach disaster areas—are challenging and expose humans to significant hazards and risks. Robotic systems with high autonomy capabilities can help humans reduce potential hazards and risks in a wide variety of situations. Furthermore, with the help of these systems, it is possible to explore or gain access to environments that were or still are inaccessible.

In addition to hostile environments, a growing number of robotic systems are finding their way into people’s everyday lives. In this case, it must be ensured that humans are supported in their activities and that any potential risk is ruled out at all times.

In both cases, hostile environments and everyday lives scenarios, the degree of autonomy of a system can vary greatly depending on its use and task, as can the degree of human–robot interaction. A good work distribution is thus the essential requirement for successful cooperation between the system and the human.

Fig. 4.4
A photograph of a robotic system used in the inspection and maintenance of pipelines and cables in a seabed.

Mission scenario of a pipeline inspection mission (Source: DFKI GmbH, Jan Albiez)

4.2 Definition of Autonomy

First, a clear distinction must be made between the terms autonomy and automation. The term “autonomy” is derived from the Greek (autonomia) and means self-reliance or independence. In various disciplines and subject areas, the term has different definitions. For example, in psychology and philosophy, autonomy is described as “the ability of people to possess free will and make self-determining decisions”.Footnote 2 In the case of a state, this means that it is able to make its own laws, govern itself, and make political decisions without interference from other states. Within a state, if an organization can govern itself according to established rules, then it is autonomous (Dietz, 2013). Thus, “autonomy” refers to the right of an individual, group, or state to govern its own circumstances.

In robotics, a wide variety of approaches and models exist that define the term autonomy according to the underlying task and context—a unifying definition, however, is still missing.

To illustrate how the term autonomy depends on the perspective and the context of the application, let us take the example of a manufacturing facility in which robotic systems perform predefined automated tasks, e.g., the placing of an object on a conveyor belt by a robot’s gripper arm or the driving of platforms along predefined transport routes. The robotic systems used here do not make any decisions themselves. They execute precise reproductions of motion and manufacturing sequences that have been tested and optimized to a high degree. Consequently, this is a highly automated manufacturing process. Now, if the process is changed slightly such that interactions with humans are required, the situation changes completely, and a certain level of autonomy is required. The systems must then respond to incoming sensor data and interact with the human in an intelligent way, anticipating and reacting to actions and offering possible solutions to the human. Autonomous action would thus require decisions in individual situations for which these automated processes are not designed. Such individual decisions, based on many factors, cannot be automated—this is what the existing definitions of autonomy have in common. The variations and uncertainty that an autonomous robot has to deal with come from the environment, which may consist simply of the operation area, other robots, or humans. Clearly, interaction with humans is among the greatest challenges for the autonomy of robotic systems, but in any case a clear distinction must be made between the activity (sequence of defined tasks) and the behavior (autonomous decision) of the system.
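To make this distinction concrete, the following minimal Python sketch (with purely illustrative names and toy types, not taken from any cited system) contrasts an automated activity—a fixed command sequence—with autonomous behavior, in which the chosen commands depend on the sensed context:

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical, minimal type for illustration only.
@dataclass
class Detection:
    grasp_pose: str

def automated_sequence() -> List[str]:
    """Activity: a fixed, pre-optimized sequence of commands -- no decisions are made."""
    return ["move_to pick_pose", "close_gripper", "move_to belt_pose", "open_gripper"]

def autonomous_step(human_nearby: bool, detection: Optional[Detection]) -> List[str]:
    """Behavior: the same task, but the commands depend on the sensed situation."""
    commands: List[str] = []
    if human_nearby:
        commands.append("reduce_speed")                     # adapt to the human
    if detection is None:
        return commands + ["request_human_assistance"]      # no scripted answer exists
    return commands + [f"move_to {detection.grasp_pose}", "close_gripper",
                       "move_to belt_pose", "open_gripper"]

if __name__ == "__main__":
    print(automated_sequence())
    print(autonomous_step(human_nearby=True, detection=Detection("pose_from_camera")))
    print(autonomous_step(human_nearby=False, detection=None))
```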

The successful distribution of work within a team of autonomous agents (AI agents, robots, humans) is also dependent on the respective application context. Every situation has different influences and depends on many factors, which can also change during an action, resulting in very diverse requirements regarding the autonomy of a system.

Robotic systems can be classified according to their underlying level of autonomy. In general, a distinction is made between non-autonomous (teleoperated, controlled) and fully autonomous systems, although there are different degrees of autonomy within these categories (Kunze et al., 2018). The classification depends on the requirements and complexity of the task to be fulfilled by the system, as already mentioned. A fully autonomous system must have the competence to adapt its own actions to the environment, to other systems involved, and/or to humans—always with respect to the situation—and to plan, replan, and react adequately to occurring changes. All these actions must be highly dynamic and realizable in real time. This poses an enormous challenge to the system.

Based on this, most missions currently taking place involve a human being who is supported in their tasks with the help of such systems. This applies to missions in hostile environments as well as to everyday situations. These are typically teleoperation systems, which means that the robotic system carries out a task controlled by the human. By means of different communication channels, the system receives tasks and actions, thus enabling the human to perform the task from a safe distance.

Fully autonomous systems, on the other hand, perform their tasks independently based on stated goals. This means that despite changes within the context, they find possible solutions and can make decisions—without the involvement of humans (Yanco & Drury, 2004; Endsley & Kaber, 1999).

Accordingly, autonomous systems are systems that have the ability and the properties to independently achieve a task or goal(s) specified by humans, without requiring human intervention within the selected solution path. The basis for this is that the system perceives itself and its context via sensors and can respond to unpredictable situations based on given learning algorithms, reacting if necessary so that the task or goal is still achieved.

4.3 Robots in Harsh Environments: Space and Underwater

Robotics and AI in General

Modern robotics can be interpreted as an embodiment of AI. Very high standards apply here, as systems must often interact with and act in the world in real time. This makes robotics an integrator for AI and certainly a field that integrates additional disciplines as well: Robots have a body with a certain design, mechanics and electronics, sensors and actuators, and data flows and software programs that link it all together so that robots can interact with their environment.

Some robots have already arrived in our everyday lives, as there are product-ready systems that can be used for everyday applications: for example, the robot as a lawn mower, vacuum cleaner, or mopping robot. The first systems that came onto the market still had very little AI on board, if any at all. Take the lawn mowing robot, for example: the first solutions were such that the robot drove up to a signal wire, performed a random rotation, and continued driving until it arrived at the signal wire again. The job gets done this way, but this is pure heuristics—there is no decision-making, planning, or similar on the system—and the result is also highly inefficient. On today’s systems, however, market-ready AI processes have already been implemented: these robots create maps, make plans, and then travel along paths that they have planned beforehand.
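As a toy illustration of this difference—a simplified grid world, not any manufacturer’s actual algorithm—the following Python sketch compares the random-bounce heuristic with a pre-planned coverage path:

```python
import random

# Toy grid-world sketch (not a real product algorithm) contrasting the early
# "random bounce" heuristic with planned coverage of a known map.

WIDTH, HEIGHT = 20, 20
DIRECTIONS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def random_bounce_coverage(n_moves: int) -> float:
    """Drive straight, turn randomly at the boundary wire; return fraction of cells visited."""
    x, y = WIDTH // 2, HEIGHT // 2
    dx, dy = random.choice(DIRECTIONS)
    visited = {(x, y)}
    moves = 0
    while moves < n_moves:
        nx, ny = x + dx, y + dy
        if not (0 <= nx < WIDTH and 0 <= ny < HEIGHT):   # hit the signal wire
            dx, dy = random.choice(DIRECTIONS)           # random rotation, no progress
            continue
        x, y = nx, ny
        visited.add((x, y))
        moves += 1
    return len(visited) / (WIDTH * HEIGHT)

def planned_moves() -> int:
    """A boustrophedon path over a known map visits every cell in WIDTH*HEIGHT - 1 moves."""
    return WIDTH * HEIGHT - 1

if __name__ == "__main__":
    random.seed(0)
    budget = planned_moves()
    print(f"planned coverage: 100% in {budget} moves")
    print(f"random bounce:    {random_bounce_coverage(budget):.0%} in the same number of moves")
    print(f"random bounce:    {random_bounce_coverage(10 * budget):.0%} with ten times as many moves")
```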

Robots are also very present in research and development, where major advances are being made in many sub-disciplines of robotics. Currently, it is very exciting to work on AI, and this can be seen in many new developments, which can be found in the technical literature but also throughout the Internet—although in the latter, the borderline between genuine new advances and exaggerated or fabricated claims intended to attract public attention is blurred. It is therefore always strongly recommended to take a closer look to understand what the advertised progress really is.

The capabilities and the degree of autonomy that a robot has depend very much on two factors: the intelligence of the robot’s design and the intelligence level that the algorithms provide. Much progress in the capabilities of algorithms has been made in recent years, mostly driven by the fact that increasingly complex (and deep) neural network classifiers could be constructed using recent advances in computing hardware and software. Whenever these networks had access to huge amounts of examples, they could find patterns in the data that enabled them to classify new examples with a very high success rate. The public breakthrough here was the AlphaGo algorithm, which used deep neural networks and beat professional human Go players. In a prominent study published in 2017 (Silver et al., 2017), the authors were able to show further interesting properties of the AlphaGo algorithm, which was studied again and trained in different ways. Two examples can be mentioned here: In one case, the algorithm was trained using data from human players and learned to play the game based on these moves. In the other case, the algorithm was trained with a reinforcement learning algorithm that needed somewhat more training time to achieve the same quality. The latter did not use predefined rules; instead, the program received feedback on the completed moves in the form of a reward function. The interesting thing here is that this algorithm never saw a human player’s move and learned to play the game purely on the basis of the reward function. Looking at how well these algorithms predict the play of a human player, it was shown that the reinforcement learning variant became able to predict human moves only gradually with progressing training time, although it was already able to play with comparable or better performance than a human before.
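The core idea—learning from a reward signal instead of expert examples—can be sketched in a few lines. The following toy example uses tabular Q-learning on a trivial corridor task; it has none of the scale, self-play, or deep networks of AlphaGo, but it shows how a policy can emerge from reward feedback alone:

```python
import random
from collections import defaultdict

# Minimal sketch of learning purely from a reward signal: tabular Q-learning on a
# toy corridor. No expert moves are provided, only rewards for reaching the goal.

N_STATES = 6          # corridor cells 0..5; reaching cell 5 gives reward 1
ACTIONS = (-1, +1)    # step left or right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    q = defaultdict(float)                         # Q[(state, action)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if random.random() < epsilon:          # explore
                action = random.choice(ACTIONS)
            else:                                  # exploit current estimate
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

if __name__ == "__main__":
    random.seed(0)
    q = train()
    policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
    print("learned policy (expected: all +1):", policy)
```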

This means that today’s AI procedures can develop their own strategies without any expert knowledge having been explicitly programmed in—and without that expert knowledge even being supplied via training examples. The complexity of the methods, for example through artificial neural networks with many layers, enables the systems to achieve the same performance as a human through trial and error, as in the example of the Go game. The advances in these algorithms have motivated major IT hardware companies, like Intel or NVIDIA, to develop specific boards as platforms for neural networks. NVIDIA, for example, used the popular domain of autonomous driving to demonstrate very vividly, in the same year as the study by Silver et al., that such deep artificial neural networks can already control vehicles in many situations.Footnote 3

These are first steps showing that rudimentary technical solutions exist that allow AI and robotics to move in our environment. They are impressive examples, but at the same time there are many issues to be resolved before we can deploy these technologies. We also need to look at many factors when assessing the maturity of the technology, such as the extent to which algorithms can be deceived or manipulated. Staying with the example of autonomous driving: as with human drivers, errors will always occur with technical systems. This is also due to the environment, in which decisions sometimes have to be made despite impaired vision (or technically ambiguous sensor data). Since the decision-making basis of an artificial neural network today lies in the network itself (i.e., in the connection strengths of individual neurons), the transparency of the AI is naturally lost as a result. Given the complexity of today’s networks, this information is not easy to extract—but that is exactly what should happen. It is therefore very important in current research not only to enable systems to perform very complex actions, but also to develop mechanisms that make it possible to understand why and on what basis an algorithm has made certain decisions. The answers to these and other questions are already the subject of current research and will become even more important for the use of AI and robotics in the future.

Autonomy Helps When Uncertainty Is High: Requirements and Applications from Harsh Environments

When a robot is to perform a mission in an unknown environment, e.g., in the context of space exploration on the Moon or even on Mars, it will get into situations where standard procedures will not work. Then the robot either has to wait for external input (i.e., a human steering the robot) or, equipped with a certain level of intelligence, it could use its own sensor data and evaluate the available set of actions in order to choose an appropriate solution that solves the task without violating any constraints. If the latter actually happens, we speak of an autonomous system (within a specified range or set of actions), which is able to handle a certain level of complex situations. To achieve this, robots need the general capability to sense and interpret their environment, and thus to make plans for how to act and/or move in that environment. On top of this ability would ideally come capabilities that qualify robots for natural interaction with humans, be it through communication with a human located somewhere else (e.g., robot on the Moon, human on Earth) or through humans working directly together with robots on-site on a certain task. Robots then need capabilities for speech recognition, understanding, and speech generation. In addition, the ability to learn is important as well, so that robots can improve their performance—for this they must be able to evaluate their own actions and learn from mistakes. This is ultimately the idea of the robot of the future: it is no longer purely about automating processes, but about systems that move in their environment with an ability to make their own decisions and interact flexibly with it, as well as with other robots or humans.
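The decision point described here—act autonomously when a confident plan exists, otherwise fall back to human input—can be sketched as a minimal sense-plan-act loop (hypothetical interfaces and thresholds, for illustration only):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Plan:
    actions: List[str]
    confidence: float   # 0..1, the planner's own estimate

def sense() -> dict:
    """Placeholder for reading sensors (cameras, IMU, ...)."""
    return {"obstacle_ahead": False, "slope_deg": 4.0}

def plan(world: dict) -> Optional[Plan]:
    """Placeholder planner: returns None if no safe plan is found."""
    if world["obstacle_ahead"]:
        return None
    return Plan(actions=["drive 2m", "take soil sample"], confidence=0.9)

def mission_step(confidence_threshold: float = 0.8) -> List[str]:
    world = sense()
    candidate = plan(world)
    if candidate is None or candidate.confidence < confidence_threshold:
        return ["wait_for_operator_input"]      # teleoperation fallback
    return candidate.actions                    # autonomous execution

if __name__ == "__main__":
    print(mission_step())
```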

The autonomy capabilities discussed above can be very useful for robots exploring the solar system and probably also exploiting extraterrestrial resources. By developing various new capabilities, robots will be able to perform tasks that play a major role in future extraterrestrial missions. These include exploring surfaces, searching for life, understanding how the solar system was formed, and finding new resources. For the robots, this means they must be able to take samples reliably and robustly, explore, perform analyses, and then also return to stations where they can upload and share their data. Robots can also be used for longer human stays on extraterrestrial surfaces, and they will play a strong role in the future for work directly with or near humans. In particular, they can be used to mine and utilize resources directly on site, e.g., so that not all construction materials have to be transported to other planets, which would drastically increase mission costs. Instead, resources can be used on site: robots can help with or carry out the construction of extraterrestrial structures, as well as assembly and maintenance work on these infrastructures. For all these tasks, autonomy is very important. In the following, we use examples from three potential targets for space missions and their specific characteristics—the Moon, Mars, and Jupiter’s moon Europa—to show which requirements for robotics are important and will play a role in the future.

On Earth’s satellite, the Moon, there are craters in the polar regions that can be explored and used, and there are caves that may offer possibilities as habitats and could be important for the establishment of a Moon base. This idea could be approached and perhaps realized with autonomous robots. Making the Moon usable for space travel and, in turn, using it as a stopover for further space missions has long been a dream of mankind. For missions on the Moon, robots are primarily a way to keep the costs of realizing this dream clearly under control and to keep the infrastructure technically functional even without the presence of humans. The next destination, Mars, poses many more hurdles for space missions: flights to Mars take longer than a year, and communication faces such high hurdles that controlling a complex operation becomes almost impossible and takes an enormous amount of time. On Mars, the first use case is again the exploration of the surface—mapping, sampling, and searching for information on its formation, up to the search for life—which is already being carried out by the first systems. These systems have some autonomous functions, but even their exploration movements are not autonomous; they are completely controlled. There are craters on Mars in whose sediment layers water or ice is suspected under certain circumstances. In addition, there are regions on Mars, such as the Valles Marineris valley system, that seem interesting for building infrastructure and possibly establishing a base on Mars in the distant future. Here, too, autonomous robots can be used to maintain such infrastructure.

A very special example of requirements for autonomous systems is provided by the even more distant moon of Jupiter, Europa. Here, under a thick layer of ice, an ocean of up to 100 km depth is suspected. To explore its seafloor in search of extraterrestrial life, autonomous underwater robots are ultimately needed that, after landing a probe and subsequently penetrating the ice layer, are able to carry out autonomous exploration missions with little energy consumption and can deliver the data back to the probe accordingly. As a study for such a mission, the Leng robotFootnote 4 was developed together with other mission components (Hildebrandt et al., 2013). The robot is shaped to fit into a possible ice drill, navigates autonomously, and is capable of diving passively (without energy consumption) to then actively explore on the seafloor. Upon return, the robot can perform autonomous docking for data transfer (see Fig. 4.5).

Fig. 4.5
A photograph of a docking experiment in the Leng robot maritime exploration hall used to explore the Europa moon of Jupiter. 2 insets depict a camera image, and an image of a robotic system on the Europa moon, respectively.

Docking experiment with the Leng robot in the maritime exploration hall for exploration of Jupiter’s moon Europa. Camera image bottom left, rendering of a Europa moon probe bottom right. (Source: DFKI GmbH)

Just as in the space domain, robots operating in the deep-sea urgently need AI for autonomous operations, since communication is very difficult and unforeseen occurrences (such as changes in currents) are likely (for a comprehensive overview of challenges and technologies, see Kirchner et al. (2020)).

Autonomy: Insights from Field Tests

A good way to illustrate the current state of the art in autonomous robots is to look at the setup, tasks, and results of field tests, especially in the space domain. Here, multinational teams of research institutions and companies come together to test, evaluate, and at best fulfill a given mission scenario. Such field tests also show how the interaction of all components works, i.e., in most cases how mobility, manipulation, and navigation capabilities work together to achieve the specific goal. One example is the exploration of lava caves on the island of Tenerife as an analogue environment for corresponding caves on the Moon or Mars (Schwendner et al., 2015), as illustrated in Fig. 4.6. The robots explored these caves, and multiple systems also used a common representation of this environment and mapped it further. The robots themselves generated landmarks to orient themselves. As exploration progressed, the next steps were simulated directly on the system to verify them. Thus, the robots operated autonomously in the caves, planning their actions, simulating them, then executing them, and mapping the caves on their own. Here, the robot has high mobility by design and the capability to navigate in the caves. Still, many capabilities are missing for cases in which trouble is encountered, e.g., if the way back is somehow blocked, or sensors fail or deliver wrong data. This kind of self-monitoring and reasoning about the current status is still not realized in systems qualified for such field tests.
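The plan-simulate-execute-map cycle described for this test can be summarized in a minimal sketch (hypothetical names and toy logic, not the actual mission software): each planned step is first verified in an on-board simulation and only then executed and added to the shared map.

```python
from typing import List

UNSAFE = {"enter_narrow_passage"}          # toy stand-in for steps that fail in simulation

def plan_next_steps(shared_map: List[str]) -> List[str]:
    """Placeholder planner: propose the next exploration steps."""
    return ["move_forward_1m", "enter_narrow_passage", "scan_environment"]

def simulate_is_safe(step_cmd: str) -> bool:
    """Placeholder on-board simulation: predict whether a step can be executed safely."""
    return step_cmd not in UNSAFE

def exploration_cycle(shared_map: List[str]) -> List[str]:
    executed = []
    for step_cmd in plan_next_steps(shared_map):
        if simulate_is_safe(step_cmd):
            executed.append(step_cmd)      # execute and ...
            shared_map.append(step_cmd)    # ... extend the common environment map
        else:
            executed.append("skip_and_replan")
    return executed

if __name__ == "__main__":
    shared_map: List[str] = []
    print(exploration_cycle(shared_map))
    print("map entries:", shared_map)
```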

Fig. 4.6
A set of 3 images. 1, An illustration explains the navigation in craters and caves. 2, An image of a lava cave explored by a robot. 3, A screenshot of the various information collected by a robot.

Analog mission: Exploration of caves on the island of Tenerife—the robot captures its environment, plans and simulates the next steps before final execution. (Source: DFKI GmbH)

Another scenario in the field is exploration by a team of robots with different morphologies and capabilities, e.g., a bigger supply robot in combination with a small scouting unit. The scenario depicted in Fig. 4.7 shows a field test performed in the desert of Utah in North America with the Sherpa TT robot, which carried various mission modules, and the Coyote III robot, which was equipped with a small arm to take samples and also to explore (Sonsalla et al., 2017; Cordes et al., 2018). The two robots successfully completed their mission over a period of 6 weeks. Part of the test, in addition to pure cooperation within the robot team, was interaction with a human, who used an exoskeleton to teleoperate the Sherpa TT in particularly difficult situations. This type of field test brings the systems closer to the real conditions under which they will later be used and also provides the scientists with a whole range of experience in the appropriate use of the systems. Again, the robots could cooperate and solve the task, but a well-designed interface for teleoperation was also necessary (Planthaber et al., 2017). This illustrates that in most processes humans remain indispensable, giving their input and helping the robots out of situations in which they get lost. Therefore, cooperative task solving by a mixed team of robots and humans (whether remote or on-site) is currently still one of the best approaches for complex missions with robots. As already mentioned at the beginning, the better the interaction capabilities of robots become (e.g., for reporting problems or errors), the more efficiently a task will be handled.

Fig. 4.7
A set of 6 images depict the field test in Utah. 1, A man wearing a robotic exoskeleton. 2 and 3, sherpa T T and Coyote 3 with S I M A manipulation arm. 4 and 5, base camp with 5 electro mechanical interfaces and 3 P slash L items. 6, sherpa T T with D G P S module.

Elements from the field test in Utah—The robot team consists of the robot Sherpa (top center) and the small rover Coyote III (top right). In special situations, the systems are addressed via teleoperation supported by an exoskeleton (top left). (Source: DFKI GmbH)

Task Sharing Between Humans and Robots

In the future, it will not only be a matter of sending autonomous robots alone into space or to extraterrestrial planets to have them carry out missions there autonomously, but also a matter of having robots act together with humans. This topic is not only relevant in space robotics, but also central to the further development of applications for rehabilitation or production purposes (e.g., in Industry 4.0)—areas in which respondents of the Delphi survey were more skeptical about the integration of robots. One immediate application for robots in space would be on-orbit servicing, for example, to remove space debris from orbit or to perform maintenance and support work on satellites or the International Space Station (ISS).

Task sharing can occur at very different interaction levels, with teleoperation at one extreme and full autonomy at the other. In the simplest case, a robot is controlled directly; it then does nothing independently, but basically carries out the actions specified by the human. The more immersive the teleoperation, the better the human is situated in the robot’s situation and the better the human can react as if they were the robot. In addition to pure teleoperation, humans can issue commands to robots, which are then executed. These commands can occur at subtask level (“drive straight”) or even include objects in the scenario (“drive to the door”); the granularity depends on what the robot is capable of understanding about its environment and on the capabilities it has acquired. Typically, such commands are elicited by explicit forms of interaction, such as speech and gestures, but implicit interaction interfaces are also possible, e.g., by directly recording data from humans via eye tracking, muscle activity, or neurophysiological measurements and integrating it into the interaction with a robot. In this way, information can be collected that predicts whether certain movements will be executed by the person, so that the system can register them more quickly and translate or support them. Evaluating neurophysiological data also makes it possible, in principle, to determine whether the human is currently overloaded and thus whether information is available that the human has not yet perceived and processed—or, vice versa, has perceived but currently classified as unimportant. When developing robotic systems for direct interaction, it is most important to build systems that are very compliant and thus largely harmless and safe for humans.
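A minimal sketch of this command granularity (hypothetical names and a toy semantic map, purely for illustration) shows how a high-level command such as “drive to the door” can only be accepted if the robot can resolve the named object, while subtask-level commands remain available as a fallback:

```python
from typing import Dict, List, Optional, Tuple

KNOWN_OBJECTS: Dict[str, Tuple[float, float]] = {"door": (4.0, 1.5)}   # toy semantic map

def decompose(command: str) -> Optional[List[str]]:
    """Translate a command into low-level actions, depending on what the robot understands."""
    if command.startswith("drive to the "):
        obj = command.removeprefix("drive to the ")
        if obj not in KNOWN_OBJECTS:
            return None                          # object unknown: cannot execute
        x, y = KNOWN_OBJECTS[obj]
        return [f"plan_path_to({x}, {y})", "follow_path", "stop"]
    if command == "drive straight":
        return ["set_velocity(0.5, 0.0)"]        # subtask-level command
    return None

if __name__ == "__main__":
    for cmd in ("drive straight", "drive to the door", "drive to the airlock"):
        print(cmd, "->", decompose(cmd) or "request clarification from operator")
```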

An important, overarching topic in the interaction of humans and robots, which is, however, still far away from real use in space missions, is the formation of so-called hybrid teams of humans and robots (see Fig. 4.8). This involves close cooperation between humans, robots, and also virtual agents or other AI systems in a team structure (Schwartz et al., 2016). The robot continues to be an assistant for the human, but it should behave so independently that it is also perceived by the human as a team partner. This means that a robot can independently take over and complete work without having to be given complete instructions. Work in hybrid teams is supported by planning algorithms in the background. Technologies must also be developed and integrated that are robustly capable of recognizing human intentions and making them available digitally. Digital agents, in turn, which are available to humans via voice input, help to provide humans with direct information from the digital representation.

For a team of humans and robots, functioning interaction is essential in all cases; for example, autonomous handovers of workpieces must be successfully negotiated and carried out with each other. When all members of a team act in a highly autonomous manner, such handovers cannot simply be programmed in; the systems need heuristics and protocols according to which they can negotiate and perform such handovers autonomously. Such teams could then perform joint assembly or joint infrastructure construction on an extraterrestrial surface.

Fig. 4.8
A pair of images. 1, An illustration of a hybrid team of 6 involving humans and robots. 2, An image depicts recordings of an autonomous robot and robot interaction.

Example of a hybrid team with possible roles (left) and recordings of autonomous robot–robot interaction (right). (Source: DFKI GmbH)

4.4 Robots Supporting in Everyday Life

Today, robots are no longer found exclusively in factories. Robotic systems, or at least robot components, can already be found in everyday technical systems such as cars, tools, or home products. One growing target area of application is robots for everyday support and services: robots should help to improve the quality of life and increasingly operate in contexts in which previously only humans acted. This applies to both the professional sector (e.g., in manufacturing companies) and the private sector (e.g., the household). The motivation for this is to reduce physically strenuous activities and monotonous stresses and strains. In view of demographic change as well—people want to live independently in their familiar surroundings for as long as possible—robotic systems are becoming increasingly relevant. However, it is also obvious that as soon as a complex robotic system leaves a controlled environment—such as a production hall—challenges arise in terms of safe, economical, and efficient use that can only be mastered with an interdisciplinary approach and that must consider ethical, legal, and social implications beyond technical issues.

An ideal autonomous system for everyday life scenarios must be able to act independently, learn, solve complex tasks, and react to unpredictable events. Thus, to provide safe and meaningful support in everyday life, human abilities and characteristics in various areas are expected to be transferable to the technical system. But safe movement over obstacles is only one part of the challenge. The reason for this is that people on the street, at home, in the supermarket, or in comparable everyday situations often move unpredictably. Accordingly, a domestic robot that takes over a variety of household tasks, such as tidying up, cleaning, and setting the table, must work very reliably and must have reliable sensors in order not to damage anything or—in the worst case—hurt people. However, the safe everyday use of such multifunctional and complex systems is still a future scenario. The effort and costs of a step into everyday use are currently too great a barrier in relation to the benefits. The previously presented results from the population survey show that this is also part of the public perspective: only every third to fourth person surveyed considers robotic systems and AI to be reliable and error-free systems at the present time. On the other hand, market figures from the HEMIX (Home Electronics Market Index), a joint project of gfu and GfK,Footnote 5 show that consumers in Germany are increasingly counting on robots to help with household tasks. Around 620,000 household robots were sold in Germany in the first half of 2021, an increase of 6%. This relates to vacuum cleaning robots, lawn mowing robots, and window cleaning robots. Therefore, at least for special applications, the everyday use of robots is already practicable today. As the exploration of lava caves on the island of Tenerife shows, complex navigation capabilities are among the basic skills for autonomous robots in harsh environments. This also applies to domestic robots. Today, for example, vacuum cleaner robots map their surroundings instead of driving randomly through an apartment. They are equipped with cameras and object recognition and thus perform their tasks much better and more reliably than just a few years ago. Furthermore, such systems are becoming more and more affordable.

Moreover, in other areas of application, such as care, it is not to be expected that humanoid robots with a wide range of capabilities will be used soon, but rather learning assistance systems specialized for a specific task. The systems used in rehabilitation medicine can be divided into different application areas: on the one hand, systems designed for the motor recovery of patients; on the other hand, robotic assistance systems designed to support the everyday actions of affected patients and to assist with nursing care tasks. These include, for example, intelligent wheelchairs with robotic gripping aids or service robots. Another group is social robotics, which is used for entertainment or to simulate closeness to living beings.

The use of intelligent assistance systems is intended to relieve the burden on nursing staff and at the same time to help care recipients become more independent. Systems are designed, e.g., to support caregivers and patients in everyday, physically demanding care activities at the nursing bed (Hawes et al., 2017).

For this purpose, for example, an adaptive and multifunctional motorized bed with a robotic arm system for use in care is being developed.Footnote 6 Sensor components are used to adjust the bed position depending on the situation. Various holding and support functions of the robot arm are intended, for example, for bed-to-wheelchair transfer. The system is also intended to continuously monitor the posture of the nurses during the mobilization or transfer of care recipients and to provide guidance on optimization in the event of unfavorable loads. A partially automated bed–robot arm system can improve the autonomy and quality of life of care recipients. For carers, robotic support for lifting and moving a patient can mean a significant reduction in physical stress, helping to prevent injury to or disease of the lower back.

Efforts to integrate robotic systems into care are also based on expanding therapeutic options, enabling patients to do more of their own training and relieving the burden on therapists. For example, intelligent exoskeletons are being designed and used for robotic rehabilitation of neurological disorders.

As a robotic system, the exoskeleton represents, in simple terms, an external support structure which is directly connected to the human body and which, as an active system, is equipped with actuators and sensors. This results in a wide range of possible interactions between the exoskeleton and its human users in the context of rehabilitation. An exoskeleton usually has several contact points with the human body. This specific structure makes it possible to guide and stabilize the patient’s arm at each joint and to implement a high number of active degrees of freedom to realize finely coordinated movement patterns. The active stabilization of the limb by the exoskeleton enables compensation of the inherent weight of the system and the weight of the limb, allowing training under the exclusion of “gravity,” as well as passive movement guidance of the limb even without the patient’s own effort, if necessary (Kumar et al., 2019). The aim is to create synergies between human and machine in an intelligent way in order to optimize the processes and workflow of rehabilitation, as well as to provide patients and therapists with advanced and innovative therapy options on the basis of this new technology.
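The gravity-compensation idea mentioned above can be illustrated with a simple single-joint, point-mass model (a deliberate simplification, not the controller of any cited exoskeleton): the actuator applies exactly the torque that the weight of the limb and the device segment would otherwise produce, so the patient can move as if gravity were switched off.

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def gravity_compensation_torque(theta_rad: float,
                                limb_mass: float, limb_com: float,
                                exo_mass: float, exo_com: float) -> float:
    """Torque [Nm] needed to hold limb + exoskeleton segment at joint angle theta.

    theta is measured from the horizontal; masses in kg, centers of mass in m
    (distance from the joint axis). Single revolute joint, point-mass model.
    """
    return (limb_mass * limb_com + exo_mass * exo_com) * G * math.cos(theta_rad)

if __name__ == "__main__":
    # Illustrative values: 2.0 kg forearm (COM at 0.15 m) plus a 1.5 kg
    # exoskeleton segment (COM at 0.20 m).
    for deg in (0, 30, 60, 90):
        tau = gravity_compensation_torque(math.radians(deg), 2.0, 0.15, 1.5, 0.20)
        print(f"{deg:3d} deg -> {tau:5.2f} Nm")
```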

In summary, in contrast to classical industrial robots, where the operating conditions can be controlled very well, robots in everyday human life must be able to adapt to a constantly changing environment. This places high demands on the hardware and the software and results in a high complexity of intelligent robot systems. This complexity results, among other things, from the dependencies between the individual components—for example, the number of degrees of freedom and sensors, their arrangement, and the amount of incoming data/information in interaction with the software and control components. Therefore, it is expected that in the near future we will see more semi-autonomous systems in everyday use, which can carry out low-threshold functions independently, such as driving around obstacles or avoiding collisions when handing over objects. For the time being, complex decisions and activities will still be left to humans. It is also to be expected that initially specialized systems will find their way into everyday life, rather than generalized assistance robotics.

4.5 Competence for Autonomy

The applications described in the previous sections have already made clear that full autonomy, including informed decisions in an unknown and typically dynamic environment, is currently hard—if not impossible—for a robot to achieve. Key components of autonomy are knowledge of one’s own capabilities and the validation of the actions taken with respect to the task, the environment, and the current situation. A fully autonomous system would have to know these parameters dynamically, while having the ability to respond to new and unforeseen events at any time. Instead of concentrating only on the final stage of full autonomy, certain levels of autonomy have been defined, e.g., in the car industry, to classify existing systems with respect to the required input from a human. A closer look at this approach reveals two problems: First, the step between the penultimate level and the final level of full autonomy is in reality a big step, requiring a self-awareness of the system that is currently not achieved. Second, a system behaving in a natural environment may perform different tasks in different situations and may therefore request assistance in situation A while running fully autonomously in situation B. It is therefore more appropriate not to classify systems as fully autonomous or not, but rather to look at the functionality of the system with respect to the task in order to judge whether the system can fulfill the task autonomously. In their framework paper on robot autonomy levels in human–robot interaction (HRI), Beer et al. (2014) render this general conception of autonomy by asking five central questions (which they denote as guidelines); a minimal data-structure sketch of this framework follows the list:

  1. What task is the robot going to perform? Here, a classification of the relevant variables is made.

  2. What aspects of the task should the robot perform? Here, subtasks are defined.

  3. To what extent can the robot perform those aspects? Here, the amount of required human intervention is classified.

  4. At what level can the robot’s autonomy be categorized? This typically corresponds to most autonomy classifications elsewhere, ranging from full teleoperation over shared control to full autonomy.

  5. How might autonomy influence the HRI variables? Here, it is asked to what extent the robot might be influenced (e.g., in learning), how the human might be influenced (e.g., in trust), and how the social relation between the two might change.
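A minimal sketch of how such an assessment could be captured as a data structure is given below; the field names and example values are our own illustrative choices, not taken from Beer et al. (2014).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AutonomyAssessment:
    task: str                                   # 1. What task is the robot going to perform?
    robot_subtasks: List[str]                   # 2. Which aspects should the robot perform?
    human_intervention: str                     # 3. How much human intervention is required?
    autonomy_level: str                         # 4. teleoperation / shared control / full autonomy
    hri_effects: List[str] = field(default_factory=list)  # 5. expected effects on HRI variables

if __name__ == "__main__":
    sample_return = AutonomyAssessment(
        task="collect a soil sample in a lunar crater",
        robot_subtasks=["navigate to site", "grasp sample", "return to lander"],
        human_intervention="operator confirms the sampling site",
        autonomy_level="shared control",
        hri_effects=["operator trust depends on feedback quality"],
    )
    print(sample_return)
```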

These questions illustrate that determining the right level of autonomy depends on many factors, which can also change over time (for an extensive discussion, see Beyerer et al. (2021)). The needed level of autonomy depends on the environment and the type of task—and it has to be set against the capabilities of the system in combination with legal and ethical guidelines. A possible workflow of how a task could be treated by an autonomous system is illustrated in Fig. 4.9, showing how complicated this process can get when problems occur. The system repeatedly has to analyze its own state with respect to the task and the environment and compare this with execution criteria. In other words, someone—or the system itself—has to evaluate its competence to handle the situation appropriately.

Fig. 4.9
An illustration of the stages, problems, and capabilities to recognize the problem of a task by an autonomous system. The task involves 7 stages.

Possible workflow for task execution of an autonomous system (after Beyerer et al. (2021), with permission of Plattform Lernende Systeme)

Fig. 4.10
An illustration of the competence analysis for autonomy. It has 9 modes of execution and 4 levels of abstraction H M I and competence analysis with 3 situations.

A model for autonomy based on the dependency on competence (after Beyerer et al. (2021), with permission from Plattform Lernende Systeme)

A central issue for an autonomous system is therefore the question of competence and the limitations of the system. In each situation, one could ask: Does a given system have the competence to perform the task or not? Nowadays, in nearly all situations, it is we humans who judge the competence of a robot or a machine—just as, in the case of other humans, we look at qualifications to estimate competence. In a space mission, for example, it is clearly specified what the robot is allowed to perform on its own and where teleoperation is applied.

Now, when people think of robots (in particular in harsh environments), they often think of highly autonomous systems, i.e., of systems that can perform most of the tasks on their own. As has been outlined above, this means uncertainties occur in a complex environment that the robot has to deal with. Successfully accomplishing tasks or missions in such situations requires that the robot can judge whether it can handle the situation on its own or whether assistance is needed (from a human or another system). This judgement is a judgement of competence—and the systematic analysis of its own competence is hard for a robot to achieve, since no general formula is known and several areas of knowledge have to be taken into account, each of which is a field of ongoing research on its own: required and available capabilities, possible options for action, and constraints on acting (e.g., of a legal or ethical nature). Figure 4.10 shows that while the mode of execution with respect to autonomy can be illustrated as a direct relationship, the judgement of competence for autonomy is a function that depends on the values and weights of the above-mentioned factors. It is therefore not straightforward to derive competence from one of these factors alone: a system can have few capabilities, but since it may have many options to act and nearly no further constraints, it might have enough competence to perform the task autonomously (green line in Fig. 4.10). Since such models do not yet exist in complete form, this analysis is today still typically done by qualified humans if robots are to perform autonomous tasks in an unknown and/or dynamic environment. Alternatively, the complexity and power of the robot is reduced, so that simpler systems (like a vacuum cleaning robot) perform only a few well-defined tasks automatically, without the danger of causing any harm to humans or the environment thanks to limited power and safety procedures. However, these robots earn human trust not through sophisticated autonomy, but through their simplicity. This might be one reason why people find it hard to imagine what a more flexible and general autonomous robot would look like and how communication with such a system would take place.
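As a purely illustrative sketch of this idea (the weights, scales, and threshold are our own assumptions, not values from Fig. 4.10), competence could be modeled as a weighted combination of capabilities, options for action, and constraints:

```python
def competence_score(capabilities: float, options: float, constraints: float,
                     weights=(0.4, 0.5, 0.1)) -> float:
    """All inputs normalized to [0, 1]; stronger constraints reduce the score."""
    w_cap, w_opt, w_con = weights
    return w_cap * capabilities + w_opt * options - w_con * constraints

def can_act_autonomously(score: float, threshold: float = 0.5) -> bool:
    return score >= threshold

if __name__ == "__main__":
    # A system with few capabilities but many options and almost no constraints
    # (cf. the "green line" example in the text) can still clear the threshold.
    score = competence_score(capabilities=0.3, options=0.9, constraints=0.1)
    print(round(score, 2), "->", "autonomous" if can_act_autonomously(score) else "needs assistance")
```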

4.6 Conclusions: Establishing Trust Between Humans and Robots

Autonomy of machines is an old human vision, and what this might look like has been imagined, visualized, and devised in drawings, animations, books, and films. Currently, we are crossing a border and really seeing robots and cars move and operate in our environment without direct human intervention, but what we see today still has many drawbacks, and large discrepancies exist between today’s reality and the stories and pictures in our minds. Many tasks that seem effortless for human beings are still impossible for robots—and still not fully understood by humans. The underlying complexity is extremely high, and research on AI and robotics therefore often involves hardware/software co-design rather than separate developments. Hardware is developed that must be controlled and thus co-defines the behavior of the systems, i.e., new hardware also means new possibilities in behavior. Challenges also arise from multimodal sensor streams, which often have to be processed adaptively. The values in these sensor streams need to be identified and classified, because not everything the sensors pick up is important; the features relevant to the intended behavior have to be found. Robotics is also about planning, re-planning, executing, and adapting motion and action. The more complex the intended behavior of the system, the more complex the hardware and software become, with, e.g., more and more actuators that ultimately all have to be controlled to trigger a behavior, as well as very high, partly parallel data streams that have to be processed, possibly stored, and integrated. This must be aligned across various software levels working together up to a point that humans would classify as goal-directed behavior. It is because of this level of complexity that no one can really foresee how long it will take to truly cross the border to having autonomous systems around us, and how human societies might change with such new technological advances.

The results of the study indicate that the public view of robotics is generally positive, while at the same time people tend to favor robotic systems much more strongly in application fields where no humans are nearby (e.g., in harsh environments) than in fields where robots act directly together with humans or on humans (e.g., in the care domain). This shows that the greatest challenge is the still widespread lack of trust in, and acceptance of, robotic systems—especially systems that occur in everyday life. It relates directly to our everyday experience that technical systems may fail in a systematic way without any visible explanation, with possibly severe consequences in the case of powerful systems. Today’s robots do not have sufficient capability to understand the context and relate it to their own set of available actions in the particular situation, or to give appropriate feedback and possibly also explanations to the human, e.g., if a failure occurs (which is always possible).

It is therefore worthwhile to take a look at current research in domains where the autonomy of the robot is a crucial question for its successful application. Typically, these are harsh environments where humans cannot go at all, or only with great effort and at great risk. The most prominent example in this chapter is the important role of autonomous robots for future space missions in several scenarios. These scenarios require capabilities for the autonomous exploration of extraterrestrial surfaces, also in a team of several robots, for the construction of infrastructure, and for the direct interaction of humans and robots, for example via telemanipulation or via concepts in which robots and humans interact with each other as a kind of team and carry out missions together.

For robots in terrestrial scenarios, similar questions regarding capabilities and autonomy have to be addressed. Examples range from underwater robotics, where humans are still far away, to industrial robotics, where humans can in principle even share the workspace with the robot. And once the workspace can be shared, many other fields of application open up as well, which can benefit from the development of these technologies and in turn provide new impetus for space travel. Examples of applications with direct contact to humans are the use of robots in rescue missions or robotic technology for rehabilitation, e.g., after a stroke. For the latter, parts of exoskeleton technology can be used as intelligent robots built around humans to support the rehabilitation process. Other domains that currently receive much attention, such as autonomous driving and new mobility concepts, are also emerging as a result of the technologies discussed here.

It remains a major challenge to develop autonomous robots that are capable of relating task and context to the competence of their own actions and that ideally learn directly from the choices they make. This would be one technological basis for realizing the vision of robots autonomously working together with humans. In addition, it requires advances in safety and in the transparency of decisions in order to establish trust with humans—probably attested by elaborate certification mechanisms. Wherever this is not (yet) possible, robotic systems will remain limited in function and flexibility.

Robotics is thus a very interdisciplinary field. The combination of engineering sciences and computer science alone is not sufficient; other sciences must also be involved. The more one uses mechanisms with high internal complexity, such as deep neural networks, the more one also needs methods from other sciences, such as neuroscience, to make systems transparent. Overall, it is in many cases a matter of dealing with increasing complexity—and doing so for systems that are supposed to be endowed with long-term autonomy. To enable them to operate on the Moon or Mars, for example, the robots must function robustly and safely over a long period of time.