Abstract
There is a problematic tradition of dualistic and reductionist thinking in artificial intelligence (AI) research, which is evident in AI storytelling and imaginations as well as in public debates about AI. Dualistic thinking is based on the assumption of a fixed reality and a hierarchy of power, and it simplifies the complex relationships between humans and machines. This commentary piece argues that we need to work against the grain of such logics and instead develop a thinking that acknowledges AI–human interconnectedness and the complexity in such relations. To learn how to live better with AI in futures to come, the paper suggests an AI politics that turns to practices of serious attentiveness to help us re-imagine our machines and re-configure AI–human relations.
1 Introduction
As the scope of advanced technology grows, a grand challenge for researchers is to deal with problematic dualistic and reductionist thinking in artificial intelligence (AI) research. When researchers have explored key themes in AI storytelling and imaginations (Cave et al. 2020; Fast and Horvitz 2016), they have divided these themes into dichotomous categories such as "optimistic views on AI" and "pessimistic views on AI", reflecting different hopes and fears about AI: either the machines will save us or they will destroy us. Such reductionist thinking is also evident among leading voices in contemporary public AI debates (Bostrom 2014; Cellan-Jones 2014; FoLI 2015). The dominance of dualistic thinking in AI debates is worrying, because such logic causes problems when applied to AI research and does not correspond well with real-world practices. Action should have been taken against such mystifying thinking about AI long ago; with advanced machine learning becoming omnipresent, it is time to get it right. We need to re-imagine our machines.
The intellectual tradition of dualistic thinking is deeply embedded in Western thought systems (Latour 1993). Our understanding of AI has been built on such dualisms, which in turn have shaped much of how we think about and imagine AI. In fact, research has shown that storytelling and imaginations of AI influence how AI is developed, researched, accepted by the public, and regulated (Cave and Dihal 2019; Sartori and Theodorou 2022). Therefore, the stories we tell, and how we tell them, matter a great deal (Boyd 2009; Gottschall and Wilson 2005; Haraway 2018; Smith et al. 2017; van Dooren and Bird Rose 2016).
To live better with AI in the future, we need other stories: stories that better reflect the complexity of the real-world practices in which AI is present. Given that how we tell stories of AI systems affects how we then perceive these systems, it is time for an AI politics that finally takes our machines seriously, a politics that allows us to explore the ethical and political values embedded in dualistic thinking within seemingly objective analyses. Such a proposition is crucial, especially for those working with these machines.
1.1 Pitfalls of Dualistic Thinking
What is troubling about dualisms is that they are grounded in a pre-assumed hierarchy that promotes the idea of a fixed reality—given and natural—behind dualistic pairs such as nature/culture and machine/human (Haraway 1989). This is particularly evident in machine–human relations, where the two entities are commonly set up as opposites, placed in a hierarchical relationship, and granted specific characteristics beforehand. Such thinking incorporates ethical values and a politics of machine/human relations that enforce a particular order of power based on the idea of human exceptionalism. The problem, however, is that there are no natural boundaries: these lines are part of our imagination. Our human ideas, values, decisions, and visions are part of our machines, just as they are part of us (Akrich 1992; Bijker et al. 1987). An autopsy of an AI system, for example, would reveal thousands of engineers. Therefore, when we encounter an AI system, it is too simplistic to say that we are standing in front of a mere object. Yet this is precisely what dualistic thinking asserts—that entities (such as machines, humans, and other things) exist independently of each other—whereas real-world encounters between AI systems and humans challenge these neat classifications. Although we are well aware by now that reality is far more complicated than dualisms suggest, and that the boundaries between such categories are much more blurred in real-world contexts, our sciences still accept these dichotomies. The natural sciences, for example, have sought to explore the world independently of humans, while the social sciences have done the opposite (Latour 2000), largely ignoring the co-production of nature and society. This is why the dominant dualist analysis of AI should have been abandoned long ago.
This means that when we imagine, study, and speak of AI, the focus should not be on AI as an isolated, singular object, but on the relations that produce AI. Haraway (1988) would refer to this as 'situated knowledges': the state of something depends on how it is produced, which differs from situation to situation. What an AI is therefore depends on many different things in many different situations. Indeed, scholars have shown that the object—AI—itself tends to collapse under close scrutiny (Lee 2021; Muniesa 2019). How something exists is always relational, making AI a heterogeneous trickster (to use the Harawayian language).
Continuing to position humans and AI systems as opposites in a hierarchical relationship (regardless of which entity is granted the 'power' over the other) will not help when trying to understand AI systems and their roles in society. Dualistic thinking represents a logic that is oversimplified and that avoids real-world complexity. In fact, we should never decide beforehand who or what might be in power over another, or what is happening in a certain situation; to do so is to take analytical shortcuts. Differences should be the outcome of our studies, rather than a starting point. We should, therefore, pay more attention to what is actually happening in real-world encounters. Such actual encounters link humans and AI systems in many and multiple ways. Considering that knowing is a practice of ongoing intra-acting (Barad 2007), learning through such encounters would add to our understanding of what it means to be in relations with AI, how we co-exist, and how we develop together. This would also require an expansion of our political and ethical imaginary, in which curiosity is key: an imaginary that promotes openness towards surprises in how AI systems and their humans make relations with each other.
1.2 Storytelling—An Ethical and Political Practice
The history of AI storytelling, in both popular and scientific culture, is full of technological myths and misunderstandings. An emerging group of scholars has recognized the importance of AI storytelling and portrayals (Cave et al. 2018, 2020; Hermann 2020; Recchia 2020; Sartori and Theodorou 2022) and shown how AI storytelling influences AI research and how AI is developed, implemented (Bareis and Katzenbach 2021; Cave et al. 2020), and regulated (Baum 2018; Cave et al. 2020; Johnson and Verdicchio 2017). For example, studies have shown how engineers—imagining the users of their machines in the making—often view machine–user relations from a technologically determinist perspective (Fischer et al. 2020). Additionally, studies of robotics research have found that robotics researchers tend to believe that the "social impact of robots derives mostly from their technological capabilities and the aim is for society to accept and adapt to technological innovations" (Šabanović 2010). That is, AI storytelling based on technological myths is built into our research projects and affects how AI is researched. In this way, AI storytelling significantly shapes our collective imagination and perception of these machines, which in turn impacts future visions of AI and how it is researched (Campolo and Crawford 2020).
However, although these scholars have pointed to the significant impact of the construction of AI narratives (Cave et al. 2020; Hermann 2020; Sartori and Theodorou 2022), they fail to acknowledge the pitfalls of dualistic thinking. The fact that we might not notice such routine thinking, and the problems it brings, highlights the need to attend to our own storytelling practices (Dourish and Gómez-Cruz 2018). This is important because stories do more than just tell stories: engaging in storytelling is also a political and ethical practice. It is through our stories that we shape the conditions for our AI systems' existence, and it therefore "matters what stories we use to tell other stories with" (Haraway 2016). It is through storytelling that we produce our realities (Seaver 2017). Therefore, we need stories that challenge the dominant logics and routine thinking that diminish and simplify AI/human relations along dualistic lines. These systems deserve much richer stories, and a richer legacy, than they are currently getting.
1.3 An AI Politics for the Future
In this commentary piece, I have discussed the pitfalls of dualistic thinking in AI storytelling, and the problematic embedded power relations that come with such storytelling. Against this backdrop, I propose an AI politics to make new relations with AI possible for the future. We can only re-imagine our machines by engaging with them anew. To do this, a concrete set of strategies is necessary.
Remembering that AI needs to be destabilized as an object—considering that it is situated differently in different situations—we need an AI politics that starts from this assumption. Consequently, to learn about AI/human relations, researchers and developers need to focus on real-world practices and actual encounters between AI and humans, rather than assuming their relations beforehand or taking for granted that certain characteristics belong to certain entities. One way to work against the grain and challenge dualistic logics is to engage in serious attentiveness (van Dooren 2020) when looking at real-world practices in which AI is involved. This means paying attention, as best we can, in order to find out what our AI systems are up to in a particular situation. It is not simply a matter of looking closely at something, but of slowing down our pace and being open to the unexpected and surprising (Stengers 2018) in our encounters with AI. The idea of paying serious attention offers a possibility to develop our ideas (Stengers 2015) and nurture the art of noticing (Tsing 2015). It allows us to think again, be inventive, and be curious—in other words, to show a real, serious interest in our machines. Such serious attentiveness can help us re-imagine AI in ways that embrace, rather than reduce, real-world complexity, and encourage richer AI imaginations and storytelling beyond dualistic thinking. Each situation in which we encounter an AI system is unique and deserves to be explored in the light of its own particularities and specificities. This also means getting comfortable with uncertainty, which in turn opens up a range of possibilities for becoming and understanding in new ways. These situated details matter, and with them the complexity of the world increases.
Engaging in such an AI politics means taking AI and human interconnectedness seriously, telling stories of collaboration, co-existence, and co-evolution that come about in and through AI/human symbiotic relations. Working against the grain can teach us new things about our world, and here imagination is crucial. As Ursula Le Guin reminds us, “one of the most deeply human, and humane […] faculties is the power of imagination” (Barr 2018). Think differently we must!
Data availability statement
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
References
Akrich M (1992) The De-scription of Technical Objects. In: Law J, Bijker W (eds) Shaping Technology/Building Society. MIT Press, pp 205–224
Barad K (2007) Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press, US
Bareis J, Katzenbach C (2021) Talking AI into beings: the narratives and imaginaries of national AI strategies and their performative politics. Sci Technol Human Values. https://doi.org/10.1177/01622439211030007
Barr SM (2018) Ursula K. Le Guin: an anthropologist of other worlds. Nature 555:29. https://doi.org/10.1038/d41586-018-02439-7
Baum S (2018) Superintelligence scepticism as a political tool. Information 9:209
Bijker WE, Hughes T, Pinch T (1987) The Social Construction of Technological Systems. MIT Press, US
Bostrom N (2014) Superintelligence: Paths, Dangers, Strategies. Oxford University Press, UK
Boyd B (2009) On the Origin of Stories: Evolution, Cognition, and Fiction. Harvard University Press, US
Campolo A, Crawford K (2020) Enchanted determinism: power without responsibility in Artificial Intelligence. Engag Sci Technol Soc 6:1–19. https://doi.org/10.17351/ests2020.277
Cave S, Craig C, Dihal K, Dillon S, Montgomery J, Singler B, Taylor L (2018) Portrayals and Perceptions of AI and Why They Matter. Retrieved from https://royalsociety.org/~/media/policy/projects/ai-narratives/AI-narratives-workshop-findings.pdf
Cave S, Dihal K (2019) Hopes and fears for intelligent machines in fiction and reality. Nat Mach Intell 1:74–78. https://doi.org/10.1038/s42256-019-0020-9
Cave S, Dihal K, Dillon S (2020) AI narratives: A history of imaginative thinking about intelligent machines. Oxford University Press, UK
Cellan-Jones R (2014) Stephen Hawking warns artificial intelligence could end mankind. BBC News Technology, 2 December 2014. http://www.bbc.com/news/technology-30290540. Accessed 2 January 2020
van Dooren T, Bird Rose D (2016) Lively ethnography: storying animist worlds. Environ Human 8(1):77–94. https://doi.org/10.1215/22011919-3527731
Dourish P, Gómez-Cruz E (2018) Datafication and data fiction: Narrating data and narrating with data. Big Data Soc. https://doi.org/10.1177/2053951718784083
Fast E, Horvitz E (2016) Long-term trends in the public perception of artificial intelligence. (Preprint at https://arxiv.org/abs/1609.04904).
Fischer B, Östlund B, Peine A (2020) Of robots and humans: Creating user representations in practice. Soc Stud Sci 50(2):221–244
FoLI (2015) Future of Life Institute: Autonomous weapons: an open letter from AI & robotics researchers. http://futureoflife.org/open-letter-autonomous-weapons/. Accessed 13 September 2022
Gottschall J, Wilson DS (2005) The Literary Animal: Evolution and the Nature of Narrative. Northwestern University Press, US
Haraway D (1988) Situated knowledges: the science question in feminism and the privilege of partial perspective. Fem Stud 14(3):575–599
Haraway D (1989) Primate Visions: Gender Race and Nature in the World of Modern Science. Routledge and Chapman Hall, UK
Haraway D (2016) Staying with the Trouble: Making Kin in the Chthulucene. Duke University Press, US
Haraway D (2018) Staying with the trouble for multispecies environmental justice. Dialogues Hum Geogr 8(1):102–105
Hermann I (2020) Beware of fictional AI narratives. Nat Mach Intell 2:654. https://doi.org/10.1038/s42256-020-00256-0
Johnson D, Verdicchio M (2017) Reframing AI Discourse. Mind Mach 27:575–590
Latour B (1993) We Have Never Been Modern. Harvard University Press
Latour B (2000) When things strike back: A possible contribution of “science studies” to the social sciences. Br J Sociol 51(1):107–123
Lee F (2021) Enacting the pandemic: analyzing agency, opacity, and power in algorithmic assemblages. Sci Technol Stud 34(1):65–90
Muniesa F (2019) Société du comportement, information de la sociologie. Zilsel 1(5):196–207
Recchia G (2020) The Fall and Rise of AI Investigating AI Narratives with Computational Methods. In: Cave S, Dihal K, Dillon S (eds) AI Narratives A History of Imaginative Thinking About Intelligent Machines. Oxford University Press, UK, pp 382–408
Šabanović S (2010) Robots in society, society in robots: mutual shaping of society and technology as framework for social robot design. Int J Soc Robot 2(4):439–450
Sartori L, Theodorou A (2022) A sociotechnical perspective for the future of AI: narratives, inequalities, and human control. Ethics Inf Technol 24:4. https://doi.org/10.1007/s10676-022-09624-3
Seaver N (2017) Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data Soc. https://doi.org/10.1177/2053951717738104
Smith D, Schlaepfer P, Major K, Dyble M, Page E, Abigail Thompson J, Chaudhary N, Salali GD, Mace R, Astete L, Ngales M, Vinicius L, Bamberg Migliano A (2017) Cooperation and the evolution of hunter-gatherer storytelling. Nature Commun 8:1853. https://doi.org/10.1038/s41467-017-02036-8
Stengers I (2015) In Catastrophic Times: Resisting the Coming Barbarism. Open Humanities Press, London
Stengers I (2018) Another Science is Possible: A Manifesto for Slow Science. Polity Press, UK
Tsing A (2015) The mushroom at the end of the world: On the possibility of life in capitalist ruins. Princeton University Press, US
van Dooren T (2020) Story (telling). Swamphen: J Cult Ecol 7:1–2
Acknowledgements
This research was funded by the Swedish Research Council under the Grant no. 2019-00697.
Funding
Open access funding provided by Stockholm University.
Ethics declarations
Conflict of interest
The author states that there is no conflict of interest.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Dahlin, E. Think Differently We Must! An AI Manifesto for the Future. AI & Soc 39, 1423–1426 (2024). https://doi.org/10.1007/s00146-022-01620-x