12.1 Introduction

The premise underlying this book was that many societies are currently undergoing a process of social and technological transformation. More specifically, we take as our point of departure that data harvested from both humans and technology are increasing significantly, alongside growing capabilities and ambitions to use these data to develop algorithms for the automation of work, machine learning and a new generation of AI [1, 2]. These developments also take place within high-risk industries such as healthcare (Sujan) and offshore drilling (Paltrinieri), as well as in cyberspace, which is becoming a source of societal vulnerability and transnational concern (Backman).

The call for contributions described these technological trends and posed questions about their implications for work, organisations, businesses and regulation. In this respect, it is no surprise to find sociotechnical challenges as recurring themes in the chapters of this volume. Nevertheless, it is striking how the chapters touch upon similar issues regarding the way digital technology produces inscriptions for social life and vice versa, despite starting from very different perspectives, methods and cases of study. This lends weight to the claim that a strictly technology-centric view risks misrepresenting both the challenges and the opportunities involved in introducing a wide variety of digital tools. The same goes for a strictly human-centric view, given that high-risk systems are usually high-technology systems with a high level of automation. Exploring safety issues in a digital age thus calls for a sociotechnical lens. A remaining question, however, is what it means to adopt a sociotechnical perspective. Before discussing the recurring themes of the book, this question needs a brief consideration.

12.1.1 What is a “Sociotechnical Perspective”?

The literature on sociotechnical systems dates back to research on work design and organisation development at the Tavistock Institute in the 1950s, and subsequent action research projects in Britain, Norway, Australia and the USA over the following decades [3].

The core idea of the sociotechnical approach is that the technical and social systems of work organisations need to be seen in close relation and hence should not be designed and developed in separation. From a sociotechnical perspective, the effectiveness of work systems emerges from the match between the requirements of the social and technological systems, often described as consisting of four broad classes of variables: structure, people, technology and tasks [4]. There is no “one best way” to design a sociotechnical system; rather, specific analyses need to be undertaken to find ways of organising activities that are tailored to the properties of the technology involved, while also addressing workers’ needs for, e.g., autonomy, task variation or interpersonal interaction.

While the specific methods and techniques of sociotechnical improvement are not widely used today, their spirit, the expression itself and the general logic of a sociotechnical approach are very much present in fields like ergonomics, human–machine interface design and cognitive systems engineering. What constitutes a sociotechnical perspective can also be argued to have changed over the years. In its origins, it referred to the alignment or joint optimisation of a social and a technical system. More recently, within the literature on sociomateriality, it has become more common to treat the relationship between the social and the material (including technology) as a matter of entanglement, thereby pointing to a need to understand the way one involves inscriptions in the other. Similar arguments have been made with reference to high-risk systems, e.g., by Le Coze [5], who uses the term “sociotechnical” to convey the idea that it is virtually impossible to distinguish the technological from the “non”-technological when it comes to understanding high-risk systems. Approaching one without relating it to the other is likely to be misleading if the objective is to analyse risk, whether through proactive risk assessments or accident investigations (ibid.).

Haavik [6] provides further insight into this perspective on sociotechnical analysis. He argues that the classical organisational perspectives on safety tend to “treat sociotechnical systems as complex systems made up of factors belonging to the well-defined realms of humans, technologies and organizations” [6]. Inspired by Latour [7, 8], Haavik presents empirical case studies illustrating two arguments: (1) system components (e.g., technical systems) gain their properties from their relations to other components, and (2) technical systems are boundless in the sense that they are not easily demarcated, either from social components or from other technical systems. In this perspective, it is more relevant to explain sociotechnical systems as relationships between heterogeneous actors, where “the properties of the actors are results of the relations, not vice versa” [6]. Assessing “the social” and “the technical” components of safety separately will therefore be, at best, half of a sociotechnical analysis. The key to understanding the system dynamics involved in the production of unwanted outcomes (how the system “works” in different situational contexts) lies in understanding how system components influence and shape each other.

The chapters in this volume indicate that a sociotechnical perspective on safety is probably more relevant than ever. They also illustrate that a sociotechnical view should not be restricted to the initial scope described by pioneers like Trist [3]. It should encompass a wider spectrum of empirical scrutiny and theoretical reach, promoted, first, by broad, multilevel analysis of situations and, second, by a greater emphasis on digitally mediated practices, given their pervasiveness across activities in safety–critical systems. These two moves consist, respectively, in considering a wider range of actors (and institutions) than the micro- or meso-framing of Trist allows, and in granting technology a higher level of agency and power in shaping social realities than has so far been acknowledged. The pace and pervasiveness of technological innovation are not only increasing, but have developed far beyond the question of alignment between a social and a technological subsystem in organisations. Digital technology not only mediates passive representations of reality; it also takes on roles in production processes and work environments by performing tasks, distributing work, interpreting current situations, predicting future situations and providing both advice and decisions. As such, digital technology actively shapes human perception and activity, and its integration into human activities goes beyond the traditional conception of a mere tool. With this perspective on sociotechnical systems, we now turn to examples of more specific research challenges related to safety in a digital age.

12.2 Sociotechnical Challenges

12.2.1 Where is “The System”? The Migration of Risk

Several of the chapters indicate that digitalisation changes the types of actors that provide input that is in one way or another critical to the real-time reliability of systems: for making decisions and adjustments in normal operations, detecting anomalies and weak signals of danger, dealing with disturbances and crises, and restoring system operations after failure. Importantly, features added to make a system work in new ways also mean that it can fail in new ways, involving new actors in both successes and failures. The following are examples drawn from the chapters in this volume:

  • If there is growth in modelling and simulation science as a form of generic meta-science, then the properties of input data and the assumptions of model-makers and analysts become more critical, as illustrated by Demortain.

  • If software becomes more critical, then the practices of software developers become critical, including their navigation of the four interrelated trade-offs described by Roe and Fortmann-Roe.

  • If information security can be compromised by the actions of administrative support staff, then the information security culture and behaviours of these staff will also matter for the overall integrity of the system (Nævestad et al.).

  • If wearable technology and other IoT devices do in fact gather personal data, the recipients and processors of such data become potential actors in, e.g., the organisation of safety–critical work and accident investigations (Caron; Guillaume).

  • If digitalisation does indeed introduce new ways of failing through tighter couplings and increased complexity, this can give rise to a new species of crises, as argued by Backman.

These examples point to a fundamental question for safety research: where do we draw the lines around “the system” we study when we aim to describe, analyse and ultimately improve conditions for safety? A wide variety of extra- and intra-organisational actors, e.g., software engineers, computer scientists, model makers and HR staff, all seem to be part of the sociotechnical challenge, but do we really account for them as part of the high-risk system? When posing such questions, complexity becomes not only a word referring to the number of system components and the interactions between them, but a multilevel phenomenon in need of interpretation. Grasping it requires understanding system relations, in addition to the properties of each system component. For instance, the information security culture and behaviour of administrative staff is not safety–critical in itself; its criticality depends on its relations to other sociotechnical elements that can be more directly related to an unwanted outcome.

One way of approaching such questions is by framing them as a matter of migration of risk in systems that can be both polycentric and “borderless” [9]. While outsourcing relationships have been around for decades, digital value chains can become so long, and involve such a heterogeneous network of actors providing critical input, that it becomes virtually impossible to draw the line between the inside and the outside of the systems at risk [10]. Assessing and managing risk in such a landscape involves viewing a sociotechnical system not as a clearly defined and static entity, but rather as a changing network of human and technological actors.

In such a conceptualisation, it becomes increasingly hard to maintain the traditional division between the “sharp” and “blunt” ends of industrial organisations. In a digitalised sociotechnical system, professional communities can both monitor and operate technical systems without being in the vicinity of the physical production processes. While this is by no means a new problem for safety research, knowledge of the operational context in which, e.g., software is entangled becomes increasingly important. Moreover, reconsidering the division between the sharp and blunt ends of organisations may imply reconsidering the system’s control strategies. For instance, it might involve a form of drift along the centralisation/decentralisation axis that is key to both Normal Accident Theory and HRO research. Digitalisation can make a system more decentralised by bringing in new roles that take on responsibilities as “reliability professionals” involved in maintaining system states and in recognising and interpreting anomalies [11].

Addressing the issues described here will require a level of granularity in the analysis that enables both the identification and the understanding of new and changing relations enabled by digitalisation. One challenge for safety research in the digital age is thus to “move closer” to sociotechnical relationships in order to assess their specifics.

12.2.2 The Relations of Rationalities

The chapters from Caron and Guillaume illustrate the classic issues of friction between technical-administrative rationalisation processes and the need for professional autonomy, both for individual employees and for the workers’ collective as a whole (e.g., [12, 13]). The desire to maintain an “intimate space” of privacy, in both physical and digital terms, is in many ways part of a power struggle intrinsic to the relationship between employers and employees. At the same time, having digital technology embedded on workers’ bodies (e.g., smart wearables) or tools (e.g., smart vehicles) opens new avenues for control in this relationship. This is an important aspect of the Industrial Internet of Things (I-IoT): connected clothes, tools and machinery are the most recent and widespread versions of “smart machines” that gather and send data not only about themselves, but also about their users and their work [14]. In this way, the embeddedness of technology in the social is not only a matter of technology–technology or human–technology relationships; it also exerts power over human–human relationships and is thus a powerful influence on the social sphere of organisations.

Moreover, the introduction of such technology is sometimes intended to increase safety or security, as shown in Caron’s and Guillaume’s chapters, illustrating its dual nature: while it is aimed at protecting workers’ physical safety or security, it can also be interpreted as an invasion of their privacy in the workplace [15]. This bears resemblance to the concept of securitisation (or safetyisation), where expanding the power of already powerful actors becomes legitimate and justified in the name of safety. Whether such expansion of power is intentional or not, its future path of development is unpredictable. It can be seen as a conquering process, in which a technological logic increasingly colonises the social sphere [16]. In this logic, workers are not only employees responsible for performing their tasks; they are also sources of the data fuelling the algorithms that monitor and manage work. Beyond the potential for a general dehumanising of the social sphere of work, this has implications for safety. If we accept the relevance of safety culture and the importance of employees being empowered to voice concerns over safety, then diminishing room for a workers’ collective combined with an upgrading of the power of big data analysis could raise serious concerns.

Our intention is not to paint a dystopian image of a future of work in which humans are reduced to fuel for technology, and there is no technological determinism involved in this line of reasoning. However, as researchers in safety and risk, it is our role to highlight potentially negative future implications of choices and changes made today. The ability to do so depends on being both able and willing to zoom out from specific empirical observations and ask “what is this a case of?”. While we are not calling for all safety research efforts to connect to more generalised macro-implications, we do believe that sociotechnical interconnectivity involves an integration of different forms of risk and an increased probability that what constitutes a solution within one risk framing can present problems in other framings. It may no longer be sufficient to assess risk through specialised and compartmentalised approaches.

12.2.3 The Big Picture Versus Empirical Specificities (“Moving Closer” and “Zooming Out”)

As the previous section illustrates, discussions about safety in the digital age tend to mix “small” and specific empirical observations with a “big” picture containing macro-trends and potential futures. Somehow, these two levels of analysis seem to be closely connected. How do we as safety researchers deal with such differences in scale?

The overarching diagnoses of what is going on in the digital age, and of what the future might look like, often refer to trends and extrapolations in which the use of different technologies in different contexts is subsumed under the same headings. Debates concerning privacy or AI, for instance, are in essence both ethical and matters of principle, and they constitute dilemmas for the long-term evolution of societies and the values on which societies are based. Here, the “sociotechnical” comes in the form of a macro-oriented, mutually constitutive relationship between technology and society.

At the same time as the future of privacy and the control of algorithms present macro-level, “wicked” problems, digitalisation presents an ongoing flow of concrete cases, with different technologies involved, in different sectors, under different regulations and with different risks involved. This calls for a more case-by-case-oriented study and management of safety, security and reliability in the digital age, as argued by Roe and Fortmann-Roe in this volume. While this involves moving closer to the short- and mid-term specificities of particular challenges, it does not mean detaching from the big-picture discourses that zoom out to long-term implications. On the contrary, the case-by-case approach is both an instantiation of challenges belonging to the big picture and an opportunity to inform the big picture with more nuance, differentiation and precision with regard to the stakes and opportunities involved.

This is a matter of attaining sufficient granularity to grasp the workings of concrete high-risk systems (their work processes, technical tools, precluded events, and sources of brittleness and resilience) while at the same time aiming to recast them as cases of larger and more fundamental issues. Importantly, this rather tall order of reframing scale applies not only to safety researchers but also to technology developers.

We are not implying that company managers, computer scientists or software engineers are evil or malevolent. Nor do we expect them to obtain degrees in Science and Technology Studies in order to deal with potential unwanted side effects of technology such as illegitimate surveillance and breaches of privacy. What we should expect, however, are governance structures and education systems that supply them with the requirements and competence to perform responsible research and innovation. One way of doing this is to zoom out from the details of the positive potential of specific technologies under development and critically examine the potential side effects of poor or malevolent use of the technology being developed. This form of recasting would probably help shatter, once and for all, the myth of technology as objective and neutral.

12.3 Looking Forward

The introduction to this volume drew up a wide landscape of changes and challenges as a backdrop for discussing some of the pressing issues that digitalisation, algorithms and machine learning raise for the safe performance of high-risk systems. Needless to say, this book covers only fragments of the debates, challenges and opportunities associated with safety in a digital world. Despite this, the old and new challenges identified in the chapters illustrate the importance of recognising digitalisation as involving transformation processes of a genuinely sociotechnical nature. In the digital age, the distinct separation of “the technical” and “the social” components of organisations becomes increasingly problematic. The intertwining of technological and human actors and processes makes it more relevant to explain sociotechnical systems by means of the relationships between heterogeneous actors rather than as separate components.

Although the remaining questions are numerous and challenging, the research community on safety and security has probably never been more relevant, both in addressing the small-scale issues associated with particular systems and in addressing the larger, more fundamental implications for societal risk governance.