Abstract
With the introduction of the concept of Sustainable AI, considerations of the environmental impact of the technology have begun to enter AI ethics discussions. This, Aimee van Wynsberghe suggests, constitutes a new “third wave of AI ethics” which yet needs to be ushered in. In this paper, we ask what is entailed by Sustainable AI that should warrant such special accentuation. Do we find simply run-of-the-mill AI ethics applied to an environmental context? Or does Sustainable AI constitute a true “game-changer”? We engage in a discussion about what the “waves of AI ethics” ought to mean and the criteria for labelling a wave as such. We argue that the third wave of AI ethics rests on a turn towards a structural approach for uncovering ethical issues on a broader scale, often paired with an analysis of power structures that prevent the uncovering of these issues.
1 Introduction
In 2017, Pieter Lemmens, Vincent Blok and Jochem Zwier called for the Terrestrial Turn in the philosophy of technology. With the rise of the Anthropocene, they contend, both technology’s planetary condition and conditioning have drastically changed: Technology must be placed within its planetary context and theorised as a planetary phenomenon—macro-level analyses and renewed ontological approaches are required. This is not a matter of applying an environmental perspective, the authors stress. It is a “true game-changer”, forcing philosophers of technology to fundamentally reconsider and re-evaluate technology, technological innovation, and progress [41].
Recognition that the current ecological crisis necessitates a planetary perspective on technology has also taken hold in discourse on the ethics of particular technologies. One such technology is Artificial Intelligence (AI). With the introduction of the concept of Sustainable AI, considerations of the environmental impact of the technology have begun to enter AI ethics discussions. This, Aimee van Wynsberghe suggests, constitutes a new, “third wave of AI ethics” [62] which yet needs to be ushered in.
The invocation of the term ‘wave’ is striking. It implies a new distinct phase within the discipline. The term thus suggests a major evolution, re-interpretation, or break in the global AI ethics debate. Sustainable AI must introduce a considerable shift rather than just an added consideration among many or another variant of the same argument.
The associations then, which the term ‘wave’ summons, conjure up grand expectations. Can Sustainable AI literature deliver? In this paper we ask what is entailed by Sustainable AI that should warrant such special accentuation. Do we find simply run-of-the-mill AI ethics applied to an environmental context? Or does Sustainable AI constitute a “true game-changer”? In other words, does (or could) Sustainable AI perform the Terrestrial Turn that Lemmens, Blok, and Zwier beseech?
We begin this paper by presenting the reader with the development of Sustainable AI as a subfield of AI ethics. We continue with an exploration of the landscape of AI ethics. We argue that three approaches, or waves, can be distinguished, with each consecutive one being a reaction to the former. We find that Sustainable AI, performing a structural turn, constitutes the third approach. This, we argue, often goes hand in hand with a higher-level analysis of power structures and interests served by previous AI ethics approaches. Finally, we show by example of AI bias and fairness literature that a structural turn can and has been performed in other subfields of AI ethics and beyond. We thus conclude by widening the notion of the third wave approach to include not only Sustainable AI, but all literature performing the structural turn we describe.
2 Sustainable AI in context
To understand the significance of Sustainable AI, it is necessary to consider it within the context of its emergence. The term itself has been popularised by Aimee van Wynsberghe in her paper of the same title, in which she distinguishes between AI for sustainability and the sustainability of AI [62]. In her assessment, too much focus has been put on using AI applications to achieve sustainability goals, while the environmental (or sustainability) impact of the AI applications themselves has been neglected in research as well as in public discourse. To be sure, most big tech companies have now founded departments within their organizations to address AI for Good or AI for the planet. Projects within these groups focus on how AI can be used to tackle the climate crisis and/or predict natural disasters.
Yet, with the growing number of publications, funding, and attention devoted to the idea of using AI for sustainability, there is little consensus on what AI for sustainability really means. In a more recent paper, Falk and van Wynsberghe tackle this question and suggest that to truly call a project or application “AI for Sustainability” one must not only address the application for a sustainable end but also minimize the environmental impact of the training and usage of said AI model [24].
Early publications on the sustainability of AI have indeed found that the environmental impact is, and will continue to be, considerable. Among these are publications that gauge the carbon emissions produced by training and tuning AI models [22, 60], works that consider the wholescale environmental impact of AI hardware [7, 18, 19, 21], as well as papers that additionally address the social and economic dimension along the lines of the sustainable development concept [37, 47, 65]. Considering these impacts, researchers have also approached the topic of AI and sustainability on conceptual and ethical grounds [9, 15, 19, 21, 51, 62].
Consequently, the field of Sustainable AI is not only about drawing attention to the vast exploitation of people and planet in the early and end-of-life stages of AI; rather, it is about understanding how these concerns ought to shape our ethical focus on the issues of priority within the fields of AI ethics and governance.
2.1 Sustainable AI and the field of AI ethics
Sustainable AI has only recently developed as a subfield of AI ethics. And AI ethics, as a field, is by no means a monolith. Debates on a great variety of topics fall under its large umbrella. Ethical issues pertaining to more speculative concerns like artificial general intelligence and existential threat [46] or machine consciousness [48] fall under the label ‘AI ethics’ just as much as issues plaguing current AI systems on a technical level, like reliability [12] and privacy concerns [59], or on a societal level, like automated false-information campaigns [29], surveillance versus self-determination [66], and agency and representation in the face of automated data gathering processes [17]. Academic and public debates on AI ethics are so varied and far-reaching that no attempt shall be made here to give a comprehensive account of AI ethics. In any case, if Sustainable AI is to constitute a novelty, it cannot simply be an addition to this long list of issues.
What is needed instead to assess the status of Sustainable AI within context is a systematic account of different approaches to AI ethics. Van Wynsberghe [62] herself offers a first hint as to how approaches to AI ethics have developed. She distinguishes three waves of AI ethics. The first wave, she contends, consists of publications addressing the deemed future capabilities of AI, like superintelligence, an approach which she considers an “ethics of fanciful scenarios of robot uprisings” [62, p. 213]. The second wave became more practical, addressing immediate concerns of machine learning (ML) techniques like the problem of explainability or bias in training data. The third wave then, as mentioned above, turns to the environmental impact of AI technologies. In this image of waves, approaches to AI ethics follow each other consecutively. A new wave is a reaction to the former.
As it stands, this division of AI ethics discourse into waves of publications with different core concerns does not yet paint a full picture of what approaches to AI ethics have been taken and how Sustainable AI fits into this narrative.
Some scholars have classified approaches to AI ethics in temporal terms. Ryan et al. [54] structure AI ethics research along the categories of short-, medium-, and long-term issues. Short-term issues are those that “can be expected to be successfully addressed in technical systems that are currently in operation or development” (p. 4). Medium-term issues “arise from the integration of AI techniques […] into larger socio-technical systems and contexts” (p. 5). Finally, long-term issues address “fundamental aspects of nature of reality, society, or humanity” (p. 7), referring to concerns connected to artificial general intelligence, superintelligence and the singularity.
We claim that sorting AI risks and/or AI ethics literature (in response to risks) along temporal dimensions is misleading, given that short-term risks may have long-term implications and long-term risks may have solutions or mitigation strategies in the short term. Thus, it seems impossible to cleanly classify risks according to any temporal dimension.
While a categorisation of AI ethics issues along the temporal dimension is undesirable, Ryan et al. corroborate and underpin van Wynsberghe’s assessment that three distinct approaches to AI ethics exist in the pertinent literature. What is more, there is some substantial overlap of their description of long- and short-term issues with van Wynsberghe’s first and second waves. First wave AI ethics, like long-term issues, is concerned with the unforeseeable future consequences of more-than-human technology which are yet to arise and which, if they do, would affect humanity fundamentally. Second wave AI ethics focuses on ethical issues that appear at the technical level in the immediate or short-term and in engaging with particular technologies that actually exist.
There is, however, an apparent mismatch between the descriptions of medium-term issues and third wave AI ethics. In this paper, we intend to resolve this impression. We argue that the combination of both Ryan et al.’s and van Wynsberghe’s perspectives creates an interesting narrative which places Sustainable AI staunchly in the context of other developments in AI ethics and beyond. We will hence have to show how Sustainable AI relates to what Ryan et al. call medium-term issues.
For further argumentation, we note the following points: From both Ryan et al. and van Wynsberghe, we retain the descriptions of the first wave/long-term issues and the second wave/short-term issues. From van Wynsberghe, we retain the idea of waves of AI ethics, i.e. that waves are consecutive and that a subsequent approach to AI ethics is a reaction to the perceived shortcomings of a previous one.
3 Shifting waves
In this section, we integrate van Wynsberghe’s and Ryan et al.’s assessments into a more nuanced picture of the AI ethics landscape. Given our rejection of classifying AI ethics issues according to a temporal dimension, we instead wish to argue that a shift, or a wave, in AI ethics can be identified and/or observed according to: (a) a change in the content of ethical issues being discussed and, (b) a principled departure from an earlier wave of AI ethics issues/scholars. The principle of departure defines the shift or wave. In the following, we propose a reading of AI ethics literature that finds a principled departure from first to second wave AI ethics and a still developing departure from second to third wave AI ethics.
3.1 From first to second wave AI ethics
As van Wynsberghe suggested, the first wave of AI ethics can be classified by its topical focus on abstract and non-existent technology. Often, this goes hand-in-hand with a dystopian tech-determinism. Concrete, current ethical concerns are not addressed, or, as its proponents may want to phrase it, are considered outweighed by the gravity of future possibilities. Early and, more importantly, influential contributions of this kind have been provided by the likes of Nick Bostrom [10] and Vernor Vinge [64], the latter of which popularized the idea of the ‘singularity’ in the context of artificial intelligence [49].
We suggest that as a reaction to this, coupled with an increase in real world applications of AI, ethicists shifted their attention to specific existing applications and/or methodologies for AI development (e.g., GOFAI, machine learning, neural networks, etc.). For example, the methodology of machine learning introduced a concern for transparency, or for the lack thereof. Ethicists warned that this lack of transparency amounts to a lack of accountability on the part of developers and implementers [43]. Consequently, AI developers have explored explainability, or explainable AI – investigating ways to uncover the rules developed by the algorithm used for pattern recognition [67]. Ethicists have, however, also cautioned against transparency and explainability (or any one ethical principle, for that matter) as an end in itself, at least when this comes without integration into a normatively robust axiology of values [26]. Others criticized the ways in which AI models are trained, i.e., through the use of historical data, and raised the concern that this reliance on historical data results in biases in the output of the model (resulting from cultural biases that manifest in said historical data) [53]. This less speculative approach to AI ethics can now be considered mainstream both in academic publications [61] and in AI policy documents [16].
The move to a second wave of AI ethics, we propose, was thus due to ethicists observing the development and use of AI in society and finding the first wave approach either irrelevant, insufficient or outright distracting. The second wave takes issue with the speculative nature of first wave approaches and in turn focusses on the risks currently posed by existing AI technology. Its objective is thus the concrete analysis of particular AI models.
It should be duly noted though that we do not want to claim this shift has necessarily been a conscious or explicit rejection of the first wave approach (although it may well have been). Second wave approaches do not necessarily have to address the shortcomings of first wave approaches in order to be classified as such. Instead, we understand the second wave of AI ethics as an expression of a substantially different attitude to where the relevant problems lie, i.e. a question of principle. Thus, we can still construe the shift from the first to the second wave of AI ethics as a reaction, namely as a reaction to a perceived lack of relevant discussion, without claiming close engagement between them. Nevertheless, that there has at least been some engagement recently is illustrated by the example of the following paragraph.
We also will not claim that first wave approaches have consequently disappeared. Rather, first and second wave approaches currently coexist in AI ethics literature. Where they engage with each other, we find expressions of the fundamental tensions between them. For example, recent statements on AI and risk from the Center for AI Safety (CAIS) and the Future of Life Institute have stressed “existential risks” to humanity, stating that: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” [14]. To be sure, these statements have been heavily criticized for a variety of reasons, often in a manner reminiscent of second wave approaches. It has been suggested that “(a) the risks are speculative and uncertain, (b) these warnings divert attention from real short-term risks and harms, (c) these statements and letters are in reality strategic manipulation aimed at avoiding regulation, (d) they prevent us from exploiting the positive potential of AI, and (e) the signatories are just fuelling counterproductive AI hype” [58, p. 1].
Hence, emphasis continues to be placed on what might be referred to as long-term risks of AI, e.g. the extinction of the human race, despite mainstream approaches reflecting a second wave attitude. While the recent closure of the Future of Humanity Institute [28], previously hosted at the University of Oxford and arguably a hub for research on first wave AI ethics, could be taken as an indicator that the first wave is ebbing away, we ought to be careful with conclusive assessments. There still are quite a few research institutes and think tanks dedicated to first wave concerns like existential AI risk, including the Centre for the Study of Existential Risk hosted at the University of Cambridge and the London-based charity Centre on Long-Term Risk. The shift from one wave to another is therefore not to be read as total and should rather be understood as a description of a general tendency.
4 Third wave AI ethics: what, why, and how?
The shift from the first to the second wave of AI ethics has been a shift from the speculative to the concrete. Given the claim made by van Wynsberghe, that a new third wave of AI ethics is needed to account for the environmental costs associated with the development of AI systems, we ask what constitutes a shift to a new approach in AI ethics if it is not about a temporal dimension in terms of the risks presented by AI. At the same time, we ask whether Sustainable AI is part of such a wave or constitutes such a wave (as was asserted by van Wynsberghe [62]).
In the following, we argue that the second wave approach to AI ethics has relied on an isolationist perspective of AI models and that the shift to the third wave of AI ethics is in turn a reaction to this isolationist view. The third wave, as we will show, is a shift in how AI is conceptualised, from an isolationist to a structural approach. In this section, we contend, first, that the third wave of AI ethics should be construed as addressing all concerns raised by a structural perspective on technology; second, that the third wave of AI ethics should be understood as a reaction to second wave approaches; and third, that the third wave is already underway.
4.1 A structural turn
Before broadening our argument to the entire third wave of AI ethics, we first want to argue that Sustainable AI, as part of that third wave, introduces a structural turn in reaction to second wave, isolationist AI ethics. We understand the term “structural” here in a loose, pre-theoretical sense to denote the idea that the appropriate level of analysis is not individual objects, but rather the structures in which these objects are embedded. We explicitly do not want to evoke any specific school of structuralism or post-structuralism. The sense (or rather senses) in which third wave AI ethics is structural will become sufficiently apparent in the following discussion.
We now take a closer look at literature on Sustainable AI. The mark of this literature is that it often positions itself as broadening the perspective on AI technologies to include global environmental, but also social and economic considerations uncovered when beginning from a perspective of environmental justice. Crucially, such analysis brings to the fore the infrastructure which constitutes AI. Concerns include the ecological and social impact of AI hardware infrastructure [7, 11, 18, 19, 62, 63], the dangers of infrastructural lock-in and AI dependency [21, 51], and the embeddedness of AI algorithms in their environment as an infrastructure in itself [9, 51, 55]. Conceptually, it has been argued that the sustainability of AI can only be grasped if AI artefacts are not construed as isolated, but rather as embedded in ecological and sociotechnical systems [9, 55, 57] (cf. [8]). Thus, “[i]t seems that sustainability may simply not happen at the artefact level” [8].
In a previous publication with Tijs Vandemeulebroucke [9], we conceptualised sustainability as a property of complex systems instead of singular artefacts, like AI algorithms. In reference to Crojethovich Martín and Rescia Perazzo [20], we defined ‘complex systems’ as those “composed of a great number of single elements (e.g., organisms, natural environments, technologies) and actors (e.g., individuals, organizations, industries, political institutions) in interaction, capable of exchanging information between each other and with their environment, and capable of adapting their internal structure in reaction to these interactions” [9, p. 9]. In the same paper, we defined ‘sustainability’ as “a measure to maintain the organization and the structure of a system with multiple pathways of evolution” [9, p. 9]. It follows from this that sustainability effects can only ever be found at the systemic level, not at the level of artefacts (alone). Indeed, one of the foundational documents of contemporary sustainability discourse, the Club of Rome’s Limits to Growth [44], has been conceived entirely from a systems perspective.
To illustrate the significance of this perspective, take the example of the electric vehicle. Electric cars are often considered a more sustainable alternative to cars with combustion engines. While they may lend themselves more easily to being integrated into a sustainable state of our (social, economic, ecological) systems, their design will not determine in isolation whether the technology contributes to that sustainable state. If electric cars are recharged with electricity produced from oil- or coal-fired power plants, they may produce greenhouse gas emissions equal to or even greater than those of conventional cars [50]. Moreover, the batteries required for the electric car to function require a range of critical raw materials, lithium in particular. In the countries where lithium is sourced (often in the Global South), these mining practices result in the exploitation of people and land. Miners often work in slave-like conditions, not to mention the detrimental effects that the mining practices have on the surrounding community, poisoning the water and land. And as the demand for electric vehicles grows, so, too, do the environmental justice issues surrounding the necessary elements in the procurement chain. The electric car will contribute to sustainability only in so far as its entire development chain contributes to sustainability.
This is precisely why, when talking about sustainability, we refer to a property of large-scale systems, not a property of the technology or object under consideration. The attribute ‘sustainable’, if it should be attached to objects at all, must be understood as relative: An object can only ever be sustainable relative to the system of which it forms a part.
The same principle applies to AI models. Now, of course, AI models themselves can be considered complex systems according to the definition just offered. But they are certainly not the systems that are deemed worthy of maintenance in sustainability discourse. Discussions of Sustainable AI are not about how we can maintain AI systems – they are about whether AI systems help or hamper maintaining other relevant systems. In this respect, they must be considered artefacts.
One of the many sustainability concerns attached to AI models, considered as artefacts and in isolation, is their energy consumption and the associated carbon emissions. Another, even less researched aspect is the environmental impact of the material hardware required to run AI models. We have cited sources for both in Sect. 2. If we were to make AI models more sustainable as artefacts, a first approach would be to design them in a more energy-efficient way, thus reducing the carbon emissions of individual training and tuning runs. We could also consider innovating for more efficient and environmentally conscious data centres, e.g., ones whose computing units require fewer rare earth minerals to be built or ones with more efficient cooling systems or better wastewater management.
Analyses that track the sustainability of AI at the artefact level exist. They assess the environmental impact of individual algorithms or weigh the merits of concrete AI applications against their harms (e.g., [22, 47, 60, 65]). Nevertheless, similar considerations as in the case of the electric car come to bear; a systems perspective on sustainability reveals that the technical make-up of an AI artefact, even considering the hardware on which it is run, cannot determine its sustainability in isolation. Proponents of a more whole-scale analysis have argued along these lines. Besides the work already cited, the work of Henrik Skaug Sætra deserves mention. In response to a study on AI technology’s effects on the Sustainable Development Goals (SDGs) [65], Sætra argues that the approach taken there, which considers AI interventions on a case-by-case basis, overlooks macro-level, indirect, and ripple effects. To assess the true sustainability impact of AI applications, he maintains, AI must be considered in context and “as a part of a sociotechnical system consisting of various structures and economic and political systems” [55, p. 5]. An analysis without this context in mind threatens to overemphasise the positive impacts of AI.
Hilty and colleagues [32, 33] are pioneers in this regard as they point out how efficiency gains in the electricity consumption of computing can lead to higher energy consumption overall. This is because computing simultaneously becomes cheaper and hence more widely available. The development of AI models is currently reserved to large corporations and nation states in possession of the necessary capital. We can easily imagine a number of ecological and social consequences, both positive and negative, if AI development were to become more energy-efficient and hence cheaper.
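The rebound dynamic that Hilty and colleagues describe can be made concrete with toy numbers (entirely hypothetical, for illustration only): even if efficiency halves the energy cost per computation, total consumption rises whenever cheaper computing more than doubles demand.

```python
# Toy illustration of the rebound effect: efficiency gains per task can
# coincide with higher overall energy use. All numbers are hypothetical.

energy_per_task_before = 1.0   # arbitrary energy units per computation
tasks_before = 100             # demand at the old cost

# Efficiency gain: each task now needs half the energy...
energy_per_task_after = 0.5
# ...but cheaper computing triples demand.
tasks_after = 300

total_before = energy_per_task_before * tasks_before   # 100.0
total_after = energy_per_task_after * tasks_after      # 150.0

# Despite a 50% per-task efficiency gain, overall consumption grows by 50%.
print(total_after > total_before)  # True
```

The point of the sketch is that the artefact-level metric (energy per task) improves while the system-level metric (total energy) worsens, which is precisely why a structural perspective is needed.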
Kate Crawford’s and Vladan Joler’s Anatomy of an AI System presents systemic components such as procurement chains, ecosystem services and human labour organisation as integral parts to an AI device, conditioning it and hence suggesting avenues of intervention for sustainability outside of its immediate technical make-up [19].
Finally, Christoph Becker tackles the issue from a computer science perspective, finding that previous efforts to pose sustainability as a design challenge to computer engineers did not make the field and its practices more sustainable in the long term [3, p. 9]. Becker critiques cultures of computer engineering which assume what he calls “solvency”. Solvency implies that sustainability is framed to and by technical professionals as a computationally solvable problem (energy and resource efficiency) and not in a way that focusses on the world in which the energy and resource consumption of a particular technology become a problem (p. 117). Becker urges us to retire this purely positivist and operationalist thinking (p. 210) and to restructure and reorient engineering practice in alignment with social requirements (p. 228). This is a structural approach since it calls for restructuring the discipline in view of how current practices affect sustainability.
Some literature on Sustainable AI thus explicitly favours an approach which is structural in the sense that it moves beyond a second wave view of technology as particular artefacts and their design to include the idea of a sociotechnical system. Second wave approaches are criticised for being isolationist, i.e. considering AI models isolated entities to be optimised by technical professionals [29]. Sustainable AI approaches react to second wave approaches by pointing out that an artefact-level analysis is not suitable to properly address all sustainability concerns, many of which occur at a higher level of analysis. This may explain why social and ecological costs of AI, costs related to sustainability, have been described as “hidden” [21, 62]: Through the lens of an isolationist, second wave AI ethics, they are invisible. And indeed, the whole-scale impact of AI technologies, along their life cycle and in integration with human societies, is currently still ill-understood.
4.2 A political turn
There is a second sense in which many Sustainable AI approaches can be considered structural. This sense relates to the invisibility of sustainability concerns from a second wave perspective. It is concerned with the higher-level question of why a second wave approach is favoured by regulators and in public debate. It thus includes an analysis of power structures. For a more thorough exposition on this topic in a related context, we refer to the book “Data Ethics of Power” by Gry Hasselbalch [31], in which she outlines the larger socio-technical structure within which technology is developed and used. Similar, although less developed, critiques have been brought forth in the Sustainable AI space.
In an even more recent publication, Sætra connects an isolationist view of technology to the perspective of a limited group of stakeholders. Engineers, developers and analysts of technology, he argues, approach problems from the perspective of particular technologies used to solve particular problems—“everything is a nail to a person with a hammer” [57, p. 2]. Their approach, Sætra contends, is thus naturally techno-solutionist, one that, he concludes, addresses symptoms, not root causes [56]. Becker marries solutionism to “problemism”, a perspective which condemns its holders to perceive complex situations as solvable puzzles, obstructing other ways to make sense of them [3, pp. 113 ff.]. In the same volume as Sætra, Benedetta Brevini presents techno-solutionist movements, like eco-modernism, as widely embraced by corporate giants who traditionally oppose climate action and whom techno-solutionist attitudes serve as an apology for the status quo [11]. Working from a political economy lens, Peter Dauvergne cautions against an “environmentalism of the rich” [21, pp. 149–150] which reflects the interests of those with wealth, privilege, and power rather than the interests of those most vulnerable to environmental degradation. Dauvergne illustrates how an ideology which sees discrete, manageable, technical problems—he labels this ideology “market liberalism” [21, p. 189], but we might as well call it techno-solutionism—blinds the public to the possibility of adverse uses of AI against sustainability. This ideology can thus become a powerful tool for those with vested interests to overstate the value and conceal the risks of AI [21, p. 196]. What all this nascent research on Sustainable AI suggests is that, if there is a connection between an isolationist view on AI and a techno-solutionist ideology, AI isolationism may well serve the interests of powerful actors to the detriment of global sustainability.
As the preceding discussion has shown, there seems to be a push in Sustainable AI literature away from analysing isolated AI artefacts and their impacts towards a structural analysis of AI as part of a sociotechnical system embedded in other systems and structures, to the effect of uncovering concerns that are difficult to detect at the artefact level. Some authors pair this reconceptualisation with an analysis of those power structures that lead to the adoption of isolationist, techno-solutionist approaches. The difference between the second and third wave can hence be articulated as follows: the second wave approach relies on better design to bring about systemic or structural change; the third wave approach relies on systemic or structural change to bring about better design. This formulation should also make clear how the ‘structural’ component is integrated in third wave approaches. Our distinction should not be misunderstood to mean that second wave approaches have no concept of structural problems. This is decidedly not the case. In fact, the next section will present an example of second wave approaches tackling bias and fairness issues in AI models, arguably a matter of social structures manifesting in technology. Second wave approaches can hence have a concept of how technological design is influenced by social structures. However, structural issues appear in second wave approaches as issues to be corrected in technology and by technological design. In third wave approaches, by contrast, technology and technology design are conceptualised as themselves embedded in structures and systems, the ensemble becoming the primary locus of intervention.
To be sure, not all literature on Sustainable AI performs this structural analysis. There are many publications, both in academic and in policy spaces, that do not perform any such reconceptualisation (e.g., [47, 52, 65]). It is nevertheless interesting to note that a new perspective, one that challenges current conceptualisations of AI technologies, is gaining traction.
4.3 High tide: the third wave beyond Sustainable AI
Having presented how literature on Sustainable AI introduces a new perspective in reaction to, and sometimes in opposition to, second wave AI ethics, we now turn our attention to arguing that the structural turn we have identified extends beyond the Sustainable AI literature. To do this, we draw on an illustrative example of AI ethics discourse in a different domain: the literature on bias and fairness in AI.
Initially (and still today), the majority of publications in this area have suggested computational solutions to problems of bias and fairness and have thus addressed concerns primarily at the technical, artefact level [5, 13, 23, 25, 30, 35]. Addressing these issues at this level also meant that definitions of bias and fairness needed to be operationalisable for engineers working with AI and hence had to be expressed in terms of statistical distributions [27, 36, 38, 39]. The level of analysis thus dictated how problems of bias and fairness could be framed, approached, and solved.
Critics of these attempts point out that construing bias and fairness in this way leads to a tendency to overemphasize “bad algorithms” or “bad data” as the main source of discrimination [34]. And indeed, proponents of a computational approach have argued that it is algorithms that discriminate both by being “wrong”, as in relying on erroneous data, and by being “right”, as in reproducing existing prejudices [2].
It should be duly noted that computational solutions to bias and fairness in AI are representative of the second wave of AI ethics, as put forward by van Wynsberghe and further developed here: They answer to immediate, practical concerns of current machine learning techniques, the concerns are addressed within the technical systems that bring them to light, and they are short-term solution-oriented. It should also be noted that the technical systems considered are particular AI algorithms, i.e. singular artefacts, and it is assumed that the design of these systems has a major impact on their moral properties and impacts.
In an attempt to move beyond this narrow focus on particular AI algorithms, critics of computational accounts have cautioned that the latter overlook the very core of bias and fairness concerns. For example, Anna Lauren Hoffmann contends that computational solutions exclusively focus on patterns of advantage and disadvantage across groups and thus distributions of outcomes. They are hence necessarily reactive and superficial, as the root cause of these distributions, i.e. social hierarchies, is neglected. As such, Hoffmann argues, algorithms construct or uphold the normative backdrop, i.e. social and cultural meanings attached to group classifiers, against which distribution is performed in the first place [34]. In other words, hegemonic group classifiers have to be reproduced first in order to then correct towards a fairer distribution of outcomes.
In her argument, Hoffmann relies on previous work in antidiscrimination scholarship that urges a shift in focus away from distribution towards underlying structures [1]. Distribution, this tradition argues, cannot account for justice issues related to the design of the social, economic, and physical institutions that structure decision-making power and shape normative standards of identity and behaviour [68].
This turn towards institutions, power, and normative standards of identity and behaviour is echoed and further developed in the work of Ruha Benjamin on technology and race. Benjamin criticizes technology design as a dominant framework for addressing social problems. Not only can design be discriminatory, as she shows through a series of examples; its purported neutrality or ‘colour-blindness’ can also help gloss over, perpetuate, and justify existing inequities [4]. Despite not being fed racial information, algorithms often learn to sort along racial lines and thus provide rationalisations for why a population, for example a workforce, remains racially stratified. Algorithms cannot be neutral if they are developed and employed in racialised societies. Benjamin hence urges us to take a closer look at how sociotechnical systems are constructed, by whom, and to what ends [4]. The ethics of technology that Benjamin proposes thus advocates for reforming sociotechnical systems as a whole, not for fixing singular technologies or tokens of technologies.
It is work like Hoffmann’s and Benjamin’s that prompts Matthew Le Bui and Safiya Umoja Noble to perform an analysis of the underlying ideologies and power structures that inform the as-yet dominant computational view on AI bias and fairness. They find that the dominant moral framework, which they identify with classic liberalism, fails to fully capture how technologies are tied to, and embedded with, power. They observe that discussions on AI fairness and justice are predominantly techno-centric, techno-deterministic, and rooted in neoliberal logics of corporate solutionism. Le Bui and Noble call for a new moral framework for AI, one that addresses rising global social and economic inequality and the ways AI can overdetermine structures of power, instead of one that sees bias as a feature of, or externality to, AI that can be corrected or resolved [40].
Lin and Chen respond to these arguments by developing a “structural injustice” account of AI bias and fairness. They argue that a broader group of people, beyond AI engineers, is and should be held responsible for shaping the social structure. They hence propose collective action recommendations for AI development [42].
As this short run-down of discourse on AI bias and fairness illustrates, the response of critics of second wave, computational approaches has been to turn away from viewing AI technologies as isolated artefacts and towards a more structural analysis. What is more, we also find a structural turn in the second sense, namely in the sense of an analysis of those power structures that favour second wave approaches.
Similar argumentative structures are thus in play in AI bias and fairness discussions as they are in the Sustainable AI literature. We furthermore suggest that the same can be said for approaches like the aforementioned Data Ethics of Power [31], the relational AI ethics of Abeba Birhane [6] and some decolonial AI approaches [45], although we cannot defend this claim within the limits of this paper.
Finally, we cannot forget that these developments in AI ethics do not occur in a vacuum. The structural approaches we have been discussing are notably, often explicitly, inspired by more generalised disciplines and methodologies, including Science and Technology Studies (STS), Critical Systems Thinking (CST), critical theory, feminist and other standpoint epistemologies, Bruno Latour’s Actor-Network Theory (ANT), and others. Discussions about shifts and turns in AI ethics are meta-discussions which necessarily draw on more general reflections on technology. A final verdict on whether the developments in AI ethics we describe mirror developments in these general reflections falls outside the scope of this paper, but presents an interesting avenue for future research.
To conclude this section, we have suggested that the third wave of AI ethics rests on a turn towards a structural approach for uncovering ethical issues at a systemic level, often paired with an analysis of power structures that prevent the uncovering of these issues. We further argued that some work in Sustainable AI performs this structural turn, especially work that explicitly theorises its own approach. Following this, we aimed to show that third wave AI ethics, characterised by this structural turn, has already been underway outside of environmental justice issues (addressed by Sustainable AI) and has happened as a response to the second wave of AI ethics (discussions taking place at the artefact level, often proposing technical solutions). At this point we leave the reader with the idea that Sustainable AI is part of the third wave of AI ethics but does not constitute this third wave entirely. Indeed, it might be better understood as an instantiation of a more pervasive turn towards systems and structures in AI ethics and beyond. As we have shown in our discussion of bias and fairness issues, structural approaches to AI ethics precede a focus on issues of environmental impact.
5 Conclusion
Is Sustainable AI a “true game-changer”? If its proponents keep refining the third wave approach, it at least has a shot. The structural turn of third wave AI ethics constitutes an attempt to break out of the familiar technological mode of thinking which calls for efficient technological solutions to discrete, manageable problems. Its main contention is that certain problems should not and cannot be seen through the lens of particular technologies, but need to be addressed at a higher level of analysis, one which perceives technologies as part of a bigger sociotechnical system. From this angle, it may turn out that ‘ethics as usual’ proves ineffective—that, rather, social and economic power dynamics both condition the problem and favour techno-solutionist approaches. A reframing of the problem in this way opens up new avenues for intervention. Whether this changes the sustainability game, only time will tell.
Notes
1. This is a judgment he, incidentally, also extends to the political framework of the SDGs.
References
Bagenstos, S.R.: The structural turn and the limits of anti-discrimination law. Calif. L Rev. 94, 1–48 (2006)
Barocas, S., Selbst, A.D.: Big data’s disparate impact. Calif. L Rev. 104, 671–732 (2016)
Becker, C.: Insolvent: How to Reorient Computing for Just Sustainability. MIT Press, Cambridge, Massachusetts (2023)
Benjamin, R.: Race after Technology: Abolitionist Tools for the New Jim Code. Polity, Medford (2019)
Berk, R., Heidari, H., Jabbari, S., Kearns, M., Roth, A.: Fairness in criminal justice risk assessments: the state of the art. Sociol. Methods Res. 50(1), 3–44 (2021)
Birhane, A.: Algorithmic injustice: a relational ethics approach. Patterns 2(2), 1–9 (2021)
Bolger, M., Marin, D., Tofighi-Niaki, A., Seelmann, L.: ‘Green Mining’ is a Myth: The Case for Cutting EU Resource Consumption. European Environmental Bureau & Friends of the Earth Europe, Brussels (2021)
Bolte, L.: Conceptual Foundations of Sustainability: A Sustainability Perspective on Artificial Intelligence: Extended Abstract. In: Katsumi, M., Toyoshima, F., Sanfilippo, E. (eds.) FOIS 2023 Early Career Symposium (ECS), held at FOIS 2023, co-located with 9th Joint Ontology Workshops (JOWO 2023). CEUR Workshop Proceedings (2024)
Bolte, L., Vandemeulebroucke, T., van Wynsberghe, A.: From an ethics of carefulness to an ethics of desirability: going beyond current ethics approaches to sustainable AI. Sustainability 14(8), 4472 (2022)
Bostrom, N.: Existential risks: analyzing human extinction scenarios and related hazards. J. Evol. Technol. 9 (2002)
Brevini, B.: Artificial intelligence, artificial solutions: Placing the climate emergency at the center of AI developments. In: Sætra, H.S. (ed.) Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism, pp. 23–33. Routledge, New York (2023)
Cai, B., Sheng, C., Gao, C., Liu, Y., Shi, M., Liu, Z., Feng, Q., Liu, G.: Artificial intelligence enhanced reliability assessment methodology with small samples. IEEE Trans. Neural Netw. Learn. Syst. 34(9), 6578–6590 (2021)
Calders, T., Verwer, S.: Three naive Bayes approaches for discrimination-free classification. Data Min. Knowl. Discov. 21, 277–292 (2010)
Center for AI Safety: Statement on AI Risk: AI experts and public figures express their concern about AI risk. https://www.safe.ai/work/statement-on-ai-risk#open-letter (2024). Accessed 25 Mar 2024
Coeckelbergh, M.: Green Leviathan or the Poetics of Political Liberty: Navigating Freedom in the age of Climate Change and Artificial Intelligence. Routledge, New York (2021)
Corrêa, N.K., Galvão, C., Santos, J.W., Del Pino, C., Pinto, E.P., Barbosa, C., Massmann, D., Mambrini, R., Galvão, L., Terem, E., de Oliveira, N.: Worldwide AI ethics: a review of 200 guidelines and recommendations for AI governance. Patterns 4(10), 100857 (2023)
Couldry, N., Powell, A.: Big data from the bottom up. Big Data Soc. 1(2), 2053951714539277 (2014)
Crawford, K.: The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, New Haven (2021)
Crawford, K., Joler, V.: Anatomy of an AI System. https://anatomyof.ai/ (2018). Accessed 25 March 2024
Crojethovich Martín, A.D., Rescia Perazzo, A.J.: Organización y sostenibilidad en un sistema urbano socio-ecológico y complejo. Revista Internacional de Tecnología, Sostenibilidad y Humanismo 1, 103–121 (2006)
Dauvergne, P.: AI in the Wild: Sustainability in the Age of Artificial Intelligence. MIT Press, Cambridge (2020)
Dodge, J., Prewitt, T., Tachet des Combes, R., Odmark, E., Schwartz, R., Strubell, E., Luccioni, A.S., Smith, N.A., DeCario, N., Buchanan, W.: Measuring the carbon intensity of AI in cloud instances. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT’22), June 21–24, Seoul, Republic of Korea, pp. 1877–1894 (2022)
Dwork, C., Hardt, M., Pitassi, T., Reingold, O., Zemel, R.: Fairness through awareness. In: Proceedings of the 3rd innovations in theoretical computer science conference (ITCS) 2012, Cambridge, MA USA, pp. 214–226 (2012)
Falk, S., van Wynsberghe, A.: Challenging AI for sustainability: what ought it mean? AI Ethics (2023). https://doi.org/10.1007/s43681-023-00323-3
Feldman, M., Friedler, S.A., Moeller, J., Scheidegger, C., Venkatasubramanian, S.: Certifying and removing disparate impact. In: Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining (KDD’15), August 10–13, 2015, Sydney, NSW, Australia, pp. 259–268 (2015)
Floridi, L.: Infraethics: on the conditions of possibility of morality. Philos. Technol. 30, 391–394 (2017)
Friedler, S.A., Scheidegger, C., Venkatasubramanian, S.: The (im)possibility of fairness: different value systems require different mechanisms for fair decision making. Commun. ACM 64(4), 136–143 (2021)
Future of Humanity Institute: Future of Humanity Institute (2005–2024) https://www.futureofhumanityinstitute.org/ (2024). Accessed 02 July 2024
Hagendorff, T.: The ethics of AI ethics: an evaluation of guidelines. Minds Mach. 30(1), 99–120 (2020)
Hajian, S., Domingo-Ferrer, J.: Direct and indirect discrimination prevention methods. In: Custers, B., Calders, T., Schermer, B., Zarsky, T. (eds.) Discrimination and Privacy in the Information Society: Data Mining and Profiling in Large Databases, pp. 241–254. Springer, Berlin and Heidelberg (2013)
Hasselbalch, G.: Data Ethics of Power: A Human Approach in the Big Data and AI Era. Edward Elgar Publishing, Cheltenham, UK and Northampton, MA USA (2021)
Hilty, L.M., Aebischer, B.: ICT for sustainability: an emerging research field. In: Hilty, L.M., Aebischer, B. (eds.) ICT Innovations for Sustainability, pp. 3–36. Springer, Cham (2015)
Hilty, L.M., Köhler, A., Von Schéele, F., Zah, R., Ruddy, T.: Rebound effects of progress in information technology. Poiesis Prax. 4(1), 19–38 (2006)
Hoffmann, A.L.: Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Inf. Commun. Soc. 22(7), 900–915 (2019)
Kamiran, F., Calders, T.: Data preprocessing techniques for classification without discrimination. Knowl. Inf. Syst. 33(1), 1–33 (2012)
Kearns, M., Neel, S., Roth, A., Wu, Z.S.: An empirical study of rich subgroup fairness for machine learning. In: Proceedings of the conference on fairness, accountability, and transparency (FAT* ’19), January 29–31, 2019, Atlanta, GA, USA, pp. 100–109 (2019)
Khakurel, J., Penzenstadler, B., Porras, J., Knutas, A., Zhang, W.: The rise of artificial intelligence under the lens of sustainability. Technologies. 6(4), 100 (2018)
Kilbertus, N., Rojas Carulla, M., Parascandolo, G., Hardt, M., Janzing, D., Schölkopf, B.: Avoiding discrimination through causal reasoning. In: Guyon, I., Von Luxburg, U., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural Information Processing Systems 30. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. NeurIPS Proceedings (2017)
Kleinberg, J., Mullainathan, S., Raghavan, M.: Inherent trade-offs in the fair determination of risk scores. arXiv:1609.05807 (2016)
Le Bui, M., Noble, S.U.: We’re missing a moral framework of justice in artificial intelligence. In: Dubber, M.D., Pasquale, F., Das, S. (eds.) The Oxford Handbook of Ethics of AI, pp. 163–179. Oxford University Press, New York (2020)
Lemmens, P., Blok, V., Zwier, J.: Toward a terrestrial turn in philosophy of technology. Techné 21(2/3), 114–126 (2017)
Lin, T.A., Chen, P.H.C.: Artificial intelligence in a structurally unjust society. Fem. Philos. Q. (2022). https://doi.org/10.5206/fpq/2022.3/4.14191.
Martin, K.: Ethical implications and accountability of algorithms. J. Bus. Ethics. 160, 835–850 (2019)
Meadows, D.H., Meadows, D.L., Randers, J., Behrens, W.W., III: Limits to Growth: A Report for the Club of Rome’s Project on the Predicament of Mankind. Universe Books, New York (1972)
Mohamed, S., Png, M.-T., Isaac, W.: Decolonial AI: decolonial theory as sociotechnical foresight in artificial intelligence. Philos. Technol. 33, 659–684 (2020)
Müller, V.C., Bostrom, N.: Future progress in artificial intelligence: A survey of expert opinion. In: Müller, V.C. (ed.) Fundamental Issues of Artificial Intelligence. Synthese Library, pp. 553–571. Springer, Berlin (2016)
Palomares, I., Martínez-Cámara, E., Montes, R., García-Moral, P., Chiachio, M., Chiachio, J., Alonso, S., Melero, F.J., Molina, D., Fernández, B., Moral, C., Marchena, R., de Pérez, J., Herrera, F.: A panoramic view and swot analysis of artificial intelligence for achieving the sustainable development goals by 2030: progress and prospects. Appl. Intell. 51, 6497–6527 (2021)
Parthemore, J., Whitby, B.: What makes any agent a moral agent? Reflections on machine consciousness and moral agency. Int. J. Mach. Conscious. 5(2), 105–129 (2013)
Potapov, A.: Technological singularity: what do we really know? Information 9(4), 82 (2018)
Poullikkas, A.: Sustainable options for electric vehicle technologies. Renew. Sustain. Energy Rev. 41, 1277–1287 (2015)
Robbins, S., van Wynsberghe, A.: Our new artificial intelligence infrastructure: becoming locked into an unsustainable future. Sustainability 14(8), 4829 (2022)
Rohde, F., Wagner, J., Reinhard, P., Petschow, U., Meyer, A., Voß, M., Mollen, A.: Nachhaltigkeitskriterien für künstliche Intelligenz. Schriftenreihe des IÖW 220/21 (2021)
Roselli, D., Matthews, J., Talagala, N.: Managing bias in AI. In: Companion Proceedings of The 2019 World Wide Web Conference (WWW ‘19), May 13–17, 2019, San Francisco, USA. pp. 539–544 (2019)
Ryan, M., Antoniou, J., Brooks, L., Jiya, T., Macnish, K., Stahl, B.: Research and practice of AI ethics: a case study approach juxtaposing academic discourse with organisational reality. Sci. Eng. Ethics 27, 1–29 (2021)
Sætra, H.S.: AI in context and the sustainable development goals: factoring in the unsustainability of the sociotechnical system. Sustainability 13(4), 1738 (2021)
Sætra, H.S.: Conclusion. In: Sætra, H.S. (ed.) Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism for Sustainable Development, pp. 265–269. Routledge, New York (2023a)
Sætra, H.S.: Introduction. In: Sætra, H.S. (ed.) Technology and Sustainable Development: The Promise and Pitfalls of Techno-solutionism for Sustainable Development, pp. 1–9. Routledge, New York (2023b)
Sætra, H.S., Danaher, J.: Resolving the battle of short-vs. long-term AI risks. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00336-y
Schermer, B.W.: The limits of privacy in automated profiling and data mining. Comput. Law Secur. Rev. 27(1), 45–52 (2011)
Strubell, E., Ganesh, A., McCallum, A.: Energy and policy considerations for deep learning in NLP. arXiv:1906.02243 (2019)
Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., Floridi, L.: The ethics of algorithms: key problems and solutions. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence, pp. 97–123. Springer, Cham (2021)
van Wynsberghe, A.: Sustainable AI: AI for sustainability and the sustainability of AI. AI Ethics. 1(3), 213–218 (2021)
van Wynsberghe, A., Vandemeulebroucke, T., Bolte, L., Nachid, N.: Special issue towards the sustainability of AI; multi-disciplinary approaches to investigate the hidden costs of AI. Sustainability 14(24), 16352 (2022)
Vinge, V.: The coming technological singularity: How to survive in the post-human era. In: Latham, R. (ed.) Science Fiction Criticism: An Anthology of Essential Writings, pp. 352–363. Bloomsbury Publishing, London and New York (1993)
Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., Felländer, A., Langhans, S.D., Tegmark, M., Nerini, F.: The role of artificial intelligence in achieving the sustainable development goals. Nat. Commun. 11(1), 233 (2020)
Wolf, B.: Big data, small freedom? Radic Philos. 191, 13–20 (2015)
Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., Zhu, J.: Explainable AI: A brief survey on history, research areas, approaches and challenges. In: Natural Language Processing and Chinese Computing: 8th CCF International Conference, NLPCC 2019, Dunhuang, China, October 9–14, 2019, Proceedings, Part II 8, pp. 563–574. Springer (2019)
Young, I.M.: Taking the basic structure seriously. Perspect. Polit. 4(1), 91–97 (2006)
Acknowledgements
This research was funded by the Alexander von Humboldt Foundation in the framework of the Alexander von Humboldt Professorship for the Applied Ethics of Artificial Intelligence endowed by the German Federal Ministry of Education and Research to Prof. Dr. Aimee van Wynsberghe. Prof. Dr. Aimee van Wynsberghe serves as a member of the editorial board of AI and Ethics. No part of this research was written by or with the help of generative artificial intelligence.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Bolte, L., van Wynsberghe, A.: Sustainable AI and the third wave of AI ethics: a structural turn. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00522-6