Abstract
This paper reflects on the tech industry’s colonization of the AI ethics research field and addresses conflicts of interest in public policymaking concerning AI. The AI ethics research community faces two intertwined challenges. First, the tech industry heavily influences the AI ethics research agenda. Second, cleaning up after the tech industry has meant turning to value-driven design methods to bring ethics to AI design; but by framing research questions relevant to a technical practice, we have facilitated the technological solutionism behind the tech industry’s business model. Therefore, this paper takes the first steps toward reshaping the AI ethics research agenda by suggesting a move toward an emancipatory framework that brings politics to design while bearing in mind that AI is not to be treated as an inevitability. As a research community, we must focus on the repressive power dynamics exacerbated by AI and address the challenges facing vulnerable groups, who are seldom heard despite being the ones most negatively affected by AI initiatives.
1 Introduction
AI ethics researchers have, over the years, engaged with the tech industry either as collaborators or as receivers of funding. A mix of business expertise and academic competencies can indeed lead to fruitful endeavors. Yet, we should not neglect the inherent clash between curiosity-driven and business interest-driven research. In this paper, the AI ethics community is broadly conceived of as researchers in computer science or researchers working at the intersection of computer science, philosophy, and social science, who are engaged in bringing ethics to the design of AI systems.
The tech industry is defined as companies heavily dependent on AI algorithms, such as Big Tech platform companies like Meta, Google, Microsoft, and Amazon, or tech companies dedicated to moving AI forward, e.g., DeepMind. In this setting, AI can be viewed as an umbrella term for narrow AI, referring to machine learning algorithms that produce models capable of learning from big data sets (often labeled by humans). Such models may perform as well as, or sometimes better than, humans when making predictions or real-time decisions within restricted domains. At the other end of the spectrum, the ambition of artificial general intelligence (AGI), viz., AI capable of acting intelligently in unrestricted domains, is reflected in the belief that intelligence is computationally tractable. For example, according to researchers affiliated with DeepMind, AGI may presumably emerge from “sufficiently powerful reinforcement learning agents that learn to maximize future reward” [1]. DeepMind’s successes in applying reinforcement learning to complex problems should not be underestimated. However, such accomplishments do not entail that AGI is feasible. One should not forget Moravec’s paradox: much of what we do easily is hard for computers, and vice versa [2]. Still, overpromising language about the capabilities of AI fuels an AI hype with negative implications for informed and balanced societal discussions about the promises and perils of AI.
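To make the notion of narrow AI concrete, the toy classifier below learns a restricted mapping from a handful of human-labeled examples and can only make predictions within that narrow domain. It is an illustrative sketch with invented data, not a reference to any system discussed in this paper.

```python
# Toy "narrow AI": a 1-nearest-neighbor classifier learned from
# human-labeled examples. It may predict well inside its restricted
# domain but knows nothing outside it. All data here is invented.

def predict(train, x):
    """Return the label of the training example whose feature is closest to x."""
    nearest = min(train, key=lambda item: abs(item[0] - x))
    return nearest[1]

# Human-labeled training data: (hours of daylight, season label).
labeled = [(7.0, "winter"), (8.5, "winter"), (15.0, "summer"), (17.0, "summer")]

print(predict(labeled, 7.5))   # "winter"
print(predict(labeled, 16.0))  # "summer"
```

The contrast with AGI is that a system like this has no capacity to generalize beyond the single, narrowly framed task its labeled data defines.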
For tech companies, sponsoring AI ethics research and collaborating with researchers provide opportunities to influence the research agenda [3]. Consequently, the corporate colonization of the AI ethics research agenda [4] shrinks ethical issues into one-dimensional problems, which, so it appears, can be solved with technical fixes [5]. Against this backdrop, this paper identifies issues for the AI ethics research community to consider in order to free research from business interests. This step is not easy because the “industry has the data and expertise necessary to design fairness into AI systems” [4]. However, we need an independent research community to set the direction for AI ethics research. Therefore, the paper outlines preliminary reflections on a research agenda that emphasizes investigations of AI systems' impact on societal power dynamics.
The paper argues that the AI ethics research community faces two interdependent challenges. First, there is a need to build a research agenda free from the influence of tech industry funding. Second, much AI ethics research has been carried out defensively, in response to the urgent need to clean up after the tech industry. Such research efforts have indeed contributed valuable insights. However, there is a need to bring the politics of design to the forefront and frame an emancipatory research agenda that increases attention to the challenges facing vulnerable groups, that is, those who are seldom heard despite being the ones most negatively affected by AI initiatives.
Against this backdrop, the paper is organized as follows: Sect. 2 identifies conflicts of interest in public policymaking initiatives, and Sect. 3 elaborates on the tech industry’s increasing colonization of the AI ethics research field, which obstructs the formulation of research questions outside a technical practice. Next, as a springboard for deliberations about power dynamics, Sect. 3.1 introduces reflections concerning the marginalized groups most negatively affected by AI initiatives. Section 3.2 highlights the need to bring politics to the AI ethics research agenda by increasing attention to power mechanisms and underscoring our obligation to advance moral criticism concerning the role of AI as an enabler in critical social domains. On this basis, a brief critical outline of the current value-driven AI design landscape is given, which serves as a springboard for suggesting an AI ethics research agenda directed toward repressive power dynamics and emancipation.
2 The tech industry—from daring to caring
The optimistic technologist may yet be right: perhaps we have reached the point of no return. But why is the crew that has taken us this far cheering? [6]
Big Tech's business philosophy is captured in Mark Zuckerberg's slogan: "Move fast and break things." The mantra encapsulates an aggressive business strategy that enhances opportunities for moral wrongdoing. Yet, recently, we have witnessed a shift from a disruptive business strategy to a more responsible one, for commercial and reputational reasons. Most tech companies have established ethics teams to handle the discomfort following in the wake of moral wrongdoing due to datafication. Metcalf et al. [7] present an empirical study describing the role of "ethics owners" in Silicon Valley tech firms, fleshing out three intertwined logics characterizing the tech industry. First, the industry assumes that ethical issues have technical solutions, resulting in "an optimistic search for best practices" [7]. Clearly, "the technical approach is a good fit: technical criteria play to the strengths of technology companies" [8]. Second, meritocratic thinking permeates the industry, and people in the sector see themselves as the best equipped to tackle ethical challenges. This stance also implies a reluctance toward legislative initiatives, as politicians are perceived as lacking an understanding of technical issues [7]. Third, there is a belief that the market will prevail and produce the best and most ethical AI applications in the long run.
Repeatedly, we have witnessed the tech industry lobbying hard against legislation, most recently in relation to the newly approved European Digital Services Package (DSP) [9], which includes the Digital Markets Act, aimed at ensuring a fair digital market, and the Digital Services Act, which concerns disinformation, illegal content, transparent advertising, the non-use of children's data for advertising, and transparency, including algorithmic auditing.
Undoubtedly, tech companies will intensify corporate lobbying efforts to dilute the DSP once it applies (January 1, 2024). The tech industry operates on the assumption that regulation hinders innovation. In contrast, an ethical turn offers a convenient opportunity to signal commitment, which may bolster the sector's reputation as responsible and caring. A recent example of government-facilitated tech promotion is the Tech for Democracy initiative, launched by the Ministry of Foreign Affairs of Denmark (MFAD) in collaboration with the US presidential administration. A dedicated Tech for Democracy homepage presents outcomes from a conference held in November 2021 "to kick-start a multistakeholder dialogue" [10]. The MFAD's presentation of the initiative embraces tech companies by even setting out to "rediscover the techno-optimism." Such a vision certainly provides a golden opportunity for tech companies to renovate business opportunities after the recent years of techlash:
To make technology work for, not against, democracy, governments, multilateral organizations, tech companies and civil society must come together to renew our shared commitment to a responsible, democratic and safe technological development. We must forge new partnerships to deliver concrete solutions and support civil society's digital resilience and mobilization. A meaningful inclusion of civil society is vital to ensure a broad representation and to leave no one behind in the technological development. The Tech for Democracy initiative focuses on concrete solutions to make digital technology support democracy and human rights—and rediscover the techno-optimism of the internet's early days. [11]
Unsurprisingly, at the Tech for Democracy conference [10], Facebook's President (then Vice President) for Global Affairs, Nick Clegg, was busy avoiding taking any responsibility [12]. As a former Deputy Prime Minister of the UK, Clegg masters the political game. Similarly, in the wake of the presentation of The Danish Strategy of Digitalization 2022–2025 [13], the Minister for Finance, Nicolai Wammen, has set up a Board of Digitalization, which is supposed to function as a governmental technology advisory body ensuring a responsible and well-thought-through continued digitalization of the public sector. However, it is highly problematic that Wammen appointed the General Manager of Microsoft Denmark & Iceland, Nana Bule, as head of the board. Even though the tech industry's commitment gets wrapped up in governmental initiatives, the industry's participation on policy panels does not provide evidence that tech companies put ethics before profit. After all, one should not forget that business-driven ethics equals business risk accounting as a means to eliminate the techlash. This kind of ethics primarily focuses on consumers and does not attend to societal issues and the needs of vulnerable groups [14]. When 'doing ethics' becomes performative, it hinders the enactment of ethical values [7].
3 Reclaiming the AI ethics research agenda—a first step
Research traditions vary across cultures and disciplines, which may set different priorities when organizing and conducting research—e.g., systematic evidence-based reviews are demanded in healthcare but not necessarily in philosophy. Despite such differences, responsible conduct of research implies following high standards such as those promoted in international guidelines [15,16,17]. Such standards emphasize the cornerstones of research integrity: honesty, transparency, and accountability, which ensure trustworthiness in research. In addition, fundamental principles of freedom of research imply the right to explore and define research questions freely. Consequently, to ensure transparency, researchers must disclose whether private companies have funded their research, along with any other conflicts of interest that might compromise the trustworthiness of their research.
Against this backdrop, it goes without saying that tech industry interests should not dictate research directions. Yet, Abdalla and Abdalla [3] point to leading AI conferences and workshops that rely on tech funding and note that “while the conference organizers provide a Statement Regarding Sponsorship (…) it is not clear how effective such a policy is at preventing the unconscious biasing of (…) researchers” [3]. The authors do not attack researchers' integrity but identify a systemic conflict of interest in academia, which implies that researchers select and zoom in on research questions aligned with a research agenda serving the tech industry’s interests.
To cope with the challenges of tech industry funding, they suggest taking immediate and commonly acknowledged steps, such as requesting researchers to declare funding information and demanding universities to state their position on tech industry funding. In addition, they recommend separating AI ethics research from computer science, leaving “academia-industry relationships for technical problems where funding is likely more acceptable” [3]. However, transdisciplinary involvement is required to move AI and the AI ethics field forward. Ethicists must “grapple more rigorously with the technical proposals (…) and ensure that critiques with operational implications reach the ears of the computing community” [18].
Moreover, funded or not, the AI ethics research community has been busy cleaning up the mess left by the tech industry. It is hard to see what else we could have done, or could do now. However, AI ethics also ought to move beyond a technical practice in which we seek to proactively address value issues in a more or less stand-alone design context.
Consequently, there is a need to reshape the AI ethics research agenda by frontloading the role of power mechanisms and emphasizing the research community's obligation to team up with the weaker party as well as to participate in shaping policy issues. Therefore, we now turn to reflections concerning marginalized groups without a voice in society, i.e., the vulnerable groups who are, at the same time, those most negatively affected by AI. These reflections are followed by a brief critical account of the current value-driven AI design landscape as a springboard for suggesting an AI ethics research agenda in which interventions are grounded in issues concerning power dynamics and emancipation.
3.1 The vulnerable groups in society
Globally, democracies have witnessed political polarization caused by economic inequality and the marginalization of large societal groups. The US Capitol attack, the yellow vests protests in France, and gang-related riots in Sweden share a common frustration among those involved of not being heard or included in society. To them, an elite group of experts rules the world.
In this setting, Estlund’s account of the threat of epistocracy [19] is worth mentioning. If experts significantly control social power, we risk a skewed distribution of political authority favoring epistemically elite groups. As such, the legitimacy of political decisions is in danger of being undermined, since democratic participation becomes difficult for groups who are not experts but may still have qualified reasons to reject the expertise of the elite expert group.
Correspondingly, Washington and Kuo [14] refer to Becker's analysis of disparities in the right to be heard. Within social systems, knowledge claims from subordinate groups have less impact than claims from superordinate ones. Subordinate groups lack strategic knowledge: it is challenging to influence established agendas from that position, and their knowledge claims are easily dismissed as mere complaints.
Similarly, drawing upon the analogy of "digital poorhouses," Eubanks notes that those most negatively affected by AI systems are vulnerable groups with no voice in society. In her investigations of the impacts of predictive risk modeling on lower-class Americans, she argues that the view of poverty as a problem of the individual is mirrored in profiling tools widely integrated into public social services. Consequently, “poverty management” by profiling becomes the political solution to a complex socio-economic problem, and "the digital poorhouse (…) redefines social work as information processing, and then replaces social workers with computers. Humans that remain become extensions of algorithms” [20].
Sadly, this observation echoes Weizenbaum's insights made almost fifty years ago, illustrating that little progress has been made since computers were introduced into society. In his legendary book, Computer Power and Human Reason, Weizenbaum argues that we have not experienced a computer revolution because we have used the computer to conserve rather than to innovate:
(…) It may be that social services such as welfare could have been administrated by humans exercising human judgment (…). But the computer was used to automate the administration of social services and to centralize it along established political lines. (…) Many of the problems of growth and complexity that pressed insistently and irresistibly for response during the postwar decades could have served as incentives for social and political innovation. (…) Yet, the computer did arrive "just in time." But in time for what? In time to save—and save very nearly intact, indeed, to entrench and stabilize—social and political structures that otherwise might have been either radically renovated or allowed to totter under the demands that were sure to be made on them. The computer then, was used to conserve America's social and political institutions [6]
Furthermore, Eubanks gives an example of how historically biased, unfair practices carry over to algorithmic models that “confuse parenting while poor with poor parenting” [20]. In a Danish context, in cases concerning the forced placement or removal of children, Greenlandic parents living in Denmark fail more often than Danish parents when undergoing a non-digitalized psychological test assessing their parenting skills. The test evaluates parents' ability to recognize facial expressions (the Reading the Mind in the Eyes test) and figures (the Rorschach test) from the perspective of Danish culture. The test considers neither Greenlandic culture nor the Greenlandic way of parenting. Although this case does not concern digital profiling, it still exemplifies how culturally and socially constructed classification systems work and might produce biased raw material for algorithmic models, amplifying inequity.
The Danish digitalization strategy 2022–2025 [13] presents a vision that increased use of AI, chatbots, and robots will replace the annual workload of 10,000 people within the next ten years, making room for dedicated citizen service and care duties. Furthermore, the strategy sets out to enhance data-driven, personalized public service to predict, among other things, child neglect. The vision has been formulated against the backdrop of a Danish public sector that has witnessed many IT scandals over the years. Recently, from 2019 to 2021, the Danish healthcare platform “Sundhedsplatformen” (used by seventeen hospitals) applied an erroneous predictive risk model (EuroSCORE), which underestimated the risk of complications after bypass operations. As a result, 500 patients may have been admitted to surgery despite belonging to the group of patients with a higher risk of complications or death after surgery [21].
Yet the tech industry’s ongoing AI hype has led politicians and the public sector to welcome AI uncritically [22]. However, intelligent use of artificial intelligence requires tremendous human resources to oversee and maintain systems, including attention to what constitutes ground-truth data and how datafication challenges the public sector.
3.2 Reshaping the AI ethics research agenda—a commitment to the politics of design
We need to unite efforts to focus on power dynamics. AI's manifestations of power are elusive and not as directly dangerous as, e.g., nuclear weapons, whose powers after the Second World War caused Niels Bohr to demand international collaboration on atomic energy. Still, we should look for inspiration in movements that bring power critique and moral activism to the table. Here, Mitcham introduces the notion of professional scientific idealism [23], highlighting the role of central historical movements dedicated to shaping public policy issues concerning the societal impact of technology. Similarly, the AI ethics community should play a leading role in shaping public policy by insisting on voicing problems beyond those essential to the tech industry’s business model.
As the tech industry increasingly sets the research agenda, a domino effect arises: public and other private research funds are geared toward the areas where there is a need to clean up after the tech giants. This is displayed, for example, in the Danish Velux Foundations' announcement of a call in 2019 (100 million Danish kroner):
With a new interdisciplinary initiative, VILLUM FONDEN and VELUX FONDEN aim to strengthen the democratic development of the data-based society of the future. A targeted research initiative will generate and disseminate new insights and new solutions in close interaction with citizens, public- and private-sector decision-makers, civil society organisations, IT professionals and other stakeholders [24]
This research is, of course, relevant given our current situation. But it is high time that we look up from the clean-up work and detach ethics from the technical practice by framing questions differently, free from the needs of the tech industry. There are no ethical algorithms, philosophical analyses of black-box algorithms, digital literacy initiatives, or other kinds of technological fixes that can embrace, let alone solve, structural problems of social inequality and discrimination. Echoing Winner, “the key question is not how technology is constructed but how to come to terms with ways in which our technology-centered world might be reconstructed” [25].
Yet, the politics of design has left the building (according to Bødker and Kyng [26], this is also the case in contemporary research within the field of Participatory Design). It has been replaced by approaches such as those reflected in the FATML (fairness, accountability, and transparency in machine learning) community, which provides tools to, e.g., enhance transparency, mitigate bias in algorithms, and document the quality of data sets and models [27]. Likewise, value-driven design methods rooted in the intersection of computer science, the humanities, and the social sciences, such as value sensitive design (VSD) [28, 29], set out to design starting from what matters to people in their lives, with a focus on ethics and morality [28, 30]. In doing so, VSD seeks to "front load" ethics to proactively handle ethical issues while attending to stakeholder values in the design of technologies [31].
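To make concrete what such tool-based "technical fixes" look like in practice, the sketch below computes a simple group-fairness metric, the demographic parity difference, over a set of binary model predictions. The function, data, and loan-approval scenario are illustrative assumptions, not drawn from any specific FATML toolkit.

```python
# Illustrative sketch of an FATML-style fairness check: the absolute
# difference in positive-prediction rates between two groups.
# All data below is invented for demonstration.

def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rates between two group labels.

    predictions: list of 0/1 model outputs
    groups: list of group labels (e.g., "A"/"B"), same length
    """
    rates = {}
    for label in set(groups):
        selected = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(selected) / len(selected)
    a, b = sorted(rates)  # assumes exactly two groups
    return abs(rates[a] - rates[b])

# Hypothetical loan-approval predictions for two demographic groups.
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> gap 0.50
```

Such a metric exemplifies the one-dimensional quantification this paper critiques: it can flag a disparity in model outputs, but it says nothing about the structural conditions that produced the underlying data.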
For example, AI for Social Good Value Sensitive Design (AI4SG-VSD) [32] maps design principles based on the values the EU High-Level Expert Group puts forward on the ethics of AI, including attention to the Sustainable Development Goals. The approach applies value hierarchies to “visualize potential design pathways” and help prioritize among values when settling design requirements. The AI4SG-VSD approach provides a deeper understanding of the ethical challenges concerning the development and deployment of AI systems.
Yet, in VSD, there is no overall reflection or stance on how design choices are affected by the underlying socio-technological patterns. VSD has produced noble projects with selected vulnerable groups as stakeholders (see, e.g., [33, 34]), viz., projects that do good in unique contexts while tapping only sporadically, and harmoniously, into political and environmental issues. There is little attention to power dynamics and political conflicts beyond, e.g., the project's stakeholders or the given organizational setting or design context. Consequently, VSD excels in a kind of “depoliticized scholasticism” [25]. However, in highlighting eight grand challenges for VSD, Friedman et al. [35] take the first steps toward acknowledging the need to account for power dynamics.
We must reclaim research as free of business interests and start framing AI as a manifestation of power [35, 36]. To draw a historical parallel to the industrial revolution: the Luddites turned against the machines, but they soon realized that the conflicts of automation stemmed from the way capital structured labor rather than from the technology itself [37]. Similarly, we must wrestle with complex socio-political issues by emphasizing the power dynamics behind structural inequality. Fortunately, in recent years, deliberative enclaves within the AI ethics community have gained traction. They have led to the creation of beacons such as Black in AI [38] and the Distributed Artificial Intelligence Research Institute (DAIR) [39]. These communities have directed our attention to the overall power structures surrounding data environments by raising awareness of how, e.g., gender-biased AI reinforces social gender norms, as exemplified below by Crawford [36].
Furthermore, to lay out an emancipatory research agenda, we could look for inspiration in the early projects within the Scandinavian tradition of system development, which later developed into Participatory Design (PD) [40]. Here, the focus was on power dynamics and the empowerment of workers through democratic participation. Researchers took their point of departure in politics and concentrated on how computer systems challenged workers' autonomy and work situations. The tradition brought an (often Marxist) stance on the socio-technical structures underlying the process of system development, and system developers "teamed up with the weaker side" [26].
Our focus was on contact with local unions as the key players in socio-technical change. At the same time we tried to establish a link to the society level by trying to influence laws and agreements to be more supportive to democratic local socio-technical change processes. We also participated in the public debate about technology and democracy at work [41]
One could argue that if we commit ourselves to an emancipatory framework, we risk moving beyond the limits of our research fields toward pure activism. Hansson [42] describes activism as the expression of a researcher's (in this case, an ethicist's) personal standpoint on a subject matter and argues that such advocacy might reduce credibility and negatively influence scholarly work. He further warns that "it is a matter of professional responsibility never to profess an expertise that we do not possess" [42]. Clearly, as researchers, we are bound to respect the limits of our expertise. But we are not the only experts. Indeed, traces of the Scandinavian tradition's action-oriented research approach are mirrored in the DAIR community: “We believe that research should center the voices and experiences of those most impacted by technology and should be rooted in their communities” [39].
Moreover, according to a Habermasian division of knowledge interests in science (see Note 1), precisely an emancipatory approach is needed. Otherwise, "the social and technological patterns under study" escape us [25]. For example, when we blame algorithmic discrimination in facial recognition systems on human bias, we do not sufficiently attend to how AI systems exacerbate existing structural inequality—"appealing to the 'blind spots' of particular designers (…) ignores the structuring role of technology" [5].
As an example, in discussing gendered social norms, Crawford [36] describes IBM's failed attempt at debiasing facial recognition systems that worked for Caucasian males but had problems identifying women, especially women with darker skin. To solve the problem, IBM built an improved data set, Diversity in Faces. But the classification behind the data set still labels gender as binary and operates with distinct races, reinforcing "politically, culturally and socially constructed" categories [36]. Likewise, Michelfelder et al. [43] question whether designing binary gender into social robots (to accommodate males' and females' preferences) does anything other than amplify “social expectation” [43]. According to the authors, gendering initiatives often backfire and amplify gender norms when confronted with reality. For instance, once feminists described ICT and the fields of computer science and engineering as “masculine,” this masculine positioning further marginalized women in IT.
One can be morally and socially concerned, but it makes a difference whether our knowledge interests are motivated by an understanding of value problems as detached from power structures or by a quest for empowerment, focusing on “who benefits from social good and ‘who’ bears the risk of harm” [14].
4 Concluding remarks
The AI ethics research community faces two intertwined challenges. First, the tech industry heavily influences the AI ethics research agenda. Second, the AI ethics research community is busy cleaning up after the tech industry and anticipating the AI ethics problems lurking on the horizon. We have turned to value-driven design methods to proactively bring ethics to the design of technology. But by framing research questions relevant to a technical practice, we have facilitated the technological solutionism behind the tech industry’s business model. Our research efforts have, of course, been relevant in response to the problems brought about by the cocktail of an overpromising tech industry and the political and societal embrace of technology. However, it is about time we take steps to reshape the AI ethics research agenda and dedicate ourselves to an emancipatory framework by increasingly voicing and focusing on the power dynamics exacerbated by AI, and by siding with those most negatively affected by it. In doing so, we face the challenge of teaming up with the weaker party while avoiding interventions rooted in academic ignorance nurtured by a paternalistic know-it-all attitude.
In sum, we need to settle whether our role is that of an integrator who takes care of ethics by applying theoretical lenses or by engaging in design practices that provide locally anchored, value-based design for and with specific stakeholders, or whether we are political agents who actively seek to influence policy issues and frontload investigations of power dynamics to design for the empowerment of the weaker party. Faced with these options, we should start reclaiming the AI ethics research agenda by emphasizing the politics of design.
Notes
i.e., technology as natural science’s way of seeking control through causal explanations for the purpose of prediction; understanding as the humanities’ way of seeking knowledge through interpretation; and emancipation as a social-scientific knowledge interest seeking to promote the democratic values of equality, liberty, and justice.
References
Silver D, et al. Reward is enough. Artif Intell. 2021;299: 103535.
Mitchell M. Why AI is harder than we think. arXiv:2104.12871; 2021.
Abdalla M, Abdalla M. The Grey hoodie project: big tobacco, big tech, and the threat on academic integrity. AIES '21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, July. 2021; p. 287–297. https://doi.org/10.1145/3461702.3462563
Benkler Y. Don’t let industry write the rules for AI. Nature. 2019;569(7755):161.
Hoffmann AL. Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Inf Commun Soc. 2019;22(7):900–15.
Weizenbaum J. Computer power and human reason: from judgment to calculation. San Francisco: Freeman; 1976.
Metcalf J, Moss E, Boyd D. Owning ethics: corporate logics, Silicon Valley, and the institutionalization of ethics. Soc Res. 2019;86(2):449–76.
Slee T. The incompatible incentives of private-sector AI. In: The Oxford handbook of ethics of AI. Oxford: Oxford University Press; 2020. p. 106–23.
The digital services act package. 2022. https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package. Accessed 21 July 2022.
Tech for Democracy. 2022. https://techfordemocracy.dk/. Accessed 21 July 2022.
Ministry of Foreign Affairs of Denmark. The Tech for Democracy initiative. 2022. https://um.dk/en/foreign-policy/tech-for-democracy-2021. Accessed 21 July 2022.
Bostrup J. Kofod har inviteret Facebook ind i kampen for demokratiet [Kofod has invited Facebook into the fight for democracy]. Politiken. 9 Nov 2021; p. 12.
Finansministeriet. Digitalisering der løfter samfundet—den fællesoffentlige digitaliseringsstrategi 2022–2025 [Digitalization that lifts society—the joint-government digitalization strategy 2022–2025]; 2022. https://fm.dk/media/26022/digitalisering-der-loefter-samfundet_den-faellesoffentlige-digitaliseringsstrategi-2022-2025_web.pdf. Accessed 21 July 2022.
Washington A, Kuo R. Whose side are ethics codes on? Power, responsibility and the social good. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. Ithaca: ACM; 2020. p. 230–40.
Steneck N, Mayer T, Anderson M. Singapore statement on research integrity. J Anal Chem. 2011;66(6):650–2.
Research integrity—the Montreal statement. Chem Int. 2013;35(4):21–2.
ALLEA. The European code of conduct for research integrity. 2017. https://allea.org/code-of-conduct/. Accessed 21 July 2022.
Veale M, Binns R. Fairer machine learning in the real world: mitigating discrimination without collecting sensitive data. Big Data Soc. 2017;4(2):205395171774353.
Estlund DM. Democratic authority: a philosophical framework. Princeton: Princeton University Press; 2007.
Eubanks V. Automating inequality: how high-tech tools profile, police, and punish the poor. 1st ed. New York: St Martin’s Press; 2018.
Skeem M. Sundhedsplatformen har fejlagtigt sendt patienter til hjerteoperation [The health platform has mistakenly referred patients for heart surgery]. Kardiologisk Tidsskrift. 11 Nov 2021. https://medicinsktidsskrift.dk/behandlinger/hjertekar/3423-sundhedsplatformen-har-fejlagtigt-sendt-patienter-til-hjerteoperation.html. Accessed 21 July 2022.
Gerdes A. AI can turn the clock back before we know it. 2021 IEEE International Symposium on Technology and Society (ISTAS), 2021. https://doi.org/10.1109/ISTAS52410.2021.9629161.
Mitcham C. Professional idealism among scientists and engineers: a neglected tradition in STS studies. Technol Soc. 2003;25(2):249–62.
Velux. The Velux Foundations—DKK 100 million to strengthen democracy for artificial intelligence. 2019. Accessed 22 July 2022.
Winner L. Upon opening the black box and finding it empty: social constructivism and the philosophy of technology. Sci Technol Human Values. 1993;18(3):362–78.
Bødker S, Kyng M. Participatory design that matters—facing the big issues. ACM Trans Comput Hum Interact. 2018. https://doi.org/10.1145/3152421.
FATML. Fairness, accountability, and transparency in machine learning. 2022. https://www.fatml.org/. Accessed 22 July 2022.
Friedman B, Hendry D, Borning A. A survey of value sensitive design methods. Found Trends Hum Comput Interact. 2017;11(2):63–125.
Friedman B, Hendry D. Value sensitive design: shaping technology with moral imagination. Cambridge: The MIT Press; 2019.
Friedman B, Kahn P H, Borning A. Value sensitive design and information systems. In: Zhang P, Galletta D, editors. Human-computer interaction and management information systems: Foundations, Routledge; 2006. p. 348–372. https://doi.org/10.4324/9781315703619-27.
van den Hoven J. Value sensitive design. In: The information society: innovation, legitimacy, ethics and democracy in honor of Professor Jacques Berleur. Boston: Springer; 2007.
Umbrello S, van de Poel I. Mapping value sensitive design onto AI for social good principles. AI and Ethics. 2021;1(3):283–96.
Woelfer JP, Hendry D. Designing ubiquitous information systems for a community of homeless young people: precaution and a way forward. Pers Ubiquit Comput. 2010;15(6):565–73.
Plass JL, et al. RAPUNSEL: improving self-efficacy and self-esteem with an educational computer game. In: Proceedings of the 17th International Conference on Computers in Education; 2009.
Friedman B, et al. Eight grand challenges for value sensitive design from the 2016 Lorentz workshop. Ethics Inf Technol. 2021;23(1):5–16.
Crawford K. Atlas of AI: power, politics, and the planetary costs of artificial intelligence. New Haven: Yale University Press; 2021.
Mueller G. Breaking things at work: the Luddites are right about why you hate your job. Verso Books; 2021.
Black in AI. https://blackinai.github.io. Accessed 22 July 2022.
DAIR—Distributed AI Research Institute. 2022. https://www.dair-institute.org/about.
Greenbaum J, Kyng M. Design at work: cooperative design of computer systems. Boca Raton: CRC Press; 2020.
Bødker S et al. Co-operative Design—perspectives on 20 years with ‘the Scandinavian IT Design Model’. Proceedings of NordiCHI 2000. https://www.researchgate.net/publication/237225075_Cooperative_Design_perspectives_on_20_years_with_%27the_Scandinavian_IT_Design_Model. Accessed 21 July 2022.
Hansson SO. Theories and methods for the ethics of technology. In: Hansson SO, editor. The ethics of technology: Methods and approaches. Rowman & Littlefield International; 2017. p. 1–14.
Michelfelder DP, et al. Designing differently: toward a methodology for an ethics of feminist technology design. In: Hansson SO, editor. The ethics of technology: methods and approaches; 2017. p. 193–218.
Acknowledgements
I would like to thank the reviewers for their thorough reviews and insightful comments that helped shape this paper.
Author information
Contributions
Anne Gerdes is the sole writer of this paper. The author read and approved the final manuscript.
Ethics declarations
Competing interests
The author declares no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Gerdes, A. The tech industry hijacking of the AI ethics research agenda and why we should reclaim it. Discov Artif Intell 2, 25 (2022). https://doi.org/10.1007/s44163-022-00043-3