Abstract
This study examines the policy discussions surrounding the purpose of the development and use of an emerging technology. It applies the two stylized technology policy frames of economic growth and societal challenges to analyse the framing of one of the key emerging technologies today—Artificial Intelligence (AI). It demonstrates that recent AI policy documents include both frames: economic growth as well as societal challenges. While AI is a novel technology, recent AI policy builds on traditional ideas about the role of technology in facilitating economic growth and competitiveness, supported by well-known measures such as investment in research and a highly skilled workforce. Additionally, AI policy draws on the more recent frame of technology's contribution to addressing societal challenges and the Sustainable Development Goals, but presents AI as a technological solution to complex societal issues. While some interest in addressing both economic and social objectives can be observed in AI policy, the policy documents remain silent about the compatibility of the two.
1 Introduction
One of the key emerging technologies of the twenty-first century—Artificial Intelligence (AI)—has been surrounded by major policy discussions about its benefits and challenges, as evidenced by national and international strategies, reports and policy papers launched by governments, international organizations, consultancies and civil society organizations in recent years. These AI policy documents have defined priorities, outlined opportunities and risks and developed recommendations for the governance of development and use of AI (af Malmborg & Trondal, 2021; Bareis & Katzenbach, 2022; Dexe & Franke, 2020; Djeffal et al., 2022; Filgueiras, 2022; Guenduez & Mettler, 2022; Ossewaarde & Gulenc, 2020; Paltieli, 2021; Radu, 2021; Roberts et al., 2021; Ulnicane et al., 2021a, 2021b, 2022). As many countries and organizations have launched their documents around the same time, there has been a lot of cross-national and cross-organizational policy learning (Dolowitz & Marsh, 2000), which has led to some convergence in terms of the key themes and principles but also important divergence in terms of priorities, breadth and understanding of common themes and principles not only across countries but also across different types of organizations (Jobin et al., 2019; Schiff et al., 2021; Ulnicane et al., 2021a, 2022).
This study contributes to research on AI policy debates by looking at how they articulate the purpose of AI development and use. To do that, it draws on studies of the two major frames of technology policy, namely its contribution to economic competitiveness and to societal challenges (Diercks et al., 2019; Mazzucato, 2021; Schot & Steinmueller, 2018; Ulnicane, 2016). According to the first frame, technology is expected to contribute to economic growth and competitiveness. In contrast, the second frame highlights the potential of technology for tackling Grand societal challenges in areas such as health, environment and energy as well as for achieving the United Nations’ Sustainable Development Goals. This research applies the two technology policy frames to analyse how the purpose of AI development and use is discussed in AI policy. It examines AI policy documents to answer the main research question: How do they frame the purpose of AI development and use? The three sub-questions are as follows: Do AI policy documents focus on a traditional technology policy frame prioritizing economic growth or an emerging paradigm of addressing societal challenges? What is the relationship between these two frames in AI policy? What are the omissions and silences in defining the purpose of AI in policy?
To examine AI policy discussions, this study uses a policy framing approach, which focusses on how problems and their potential solutions are articulated and interpreted in policy debates (Head, 2022; Rein & Schon, 1993, 1996; Schon & Rein, 1994). It explores the two policy frames empirically by analysing AI policy documents launched by national governments, international organizations, civil society organizations and consultancies.
This study aims to contribute to the topic of this special issue on the global governance of emerging technologies by deepening our understanding of the ideational dimension of public policy. While recent studies of emerging technologies such as AI have strongly focussed on ethical and regulatory issues or their economic impacts, critical analysis of policy aims and priorities has been largely missing. By undertaking an in-depth analysis of competing AI policy frames, this research sheds light on the policy discussions and political choices surrounding emerging technologies, which represent the variety of values, ideologies and interests co-shaping the development and deployment of these technologies. It draws on insights and concepts from a number of disciplines and research fields, including policy analysis and Science and Technology Studies, to highlight that emerging technologies also serve as political battlegrounds over desirable and possible futures. Thus, this research aims to make a conceptual contribution to the studies of global governance of emerging technologies (Kuhlmann et al., 2019; Taeihagh, 2021; Taeihagh et al., 2021), supported by empirical insights from recent AI policy.
This paper proceeds as follows: Sect. 2 introduces the conceptual framework, presenting AI as an emerging technology, the policy framing approach and the two technology policy frames; Sect. 3 discusses insights from examining frames in AI policy documents; and finally, the Conclusions summarize the main findings.
2 Conceptual framework: emerging technology and policy framing
To examine the policy framing of the purpose of AI development and use, the conceptual framework of this paper consists of three main elements: first, the concept of AI and the approach to AI as an emerging technology; second, the policy framing approach; and third, the two main technology policy frames of economic competitiveness and societal challenges.
2.1 Artificial Intelligence as an emerging technology
Although the term ‘Artificial Intelligence’ has been widely used in recent years, experts and policy-makers highlight the difficulty of defining AI. AI policy documents emphasize the challenge of pinning down a precise definition of AI (The 2015 panel, 2016) and the continuous debate on this topic over many years (European Commission, 2017). In the AI literature and policy documents, multiple definitions of AI can be found. AI experts, who undertook a dedicated study of how to define AI, came up with the following definition:
Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions. (European Commission, 2019: 6).
It is acknowledged that AI includes ‘a broad set of approaches, with the goal of creating machines with intelligence’ (Mitchell, 2019: 8). AI includes approaches and techniques, such as machine learning, machine reasoning and robotics (European Commission, 2019). Accordingly, in setting boundaries of what counts as AI policy, this paper follows actors’ definitions of AI considering how policy-makers and other stakeholders understand and use the term AI.
While the term AI has existed for over 60 years, real-world applications have only accelerated over the last decade due to advances in computing power, the availability of data and better algorithms (Campolo et al., 2017; European Commission, 2018a). Due to these recent advances, AI today exhibits typical characteristics of emerging technologies, such as radical novelty, relatively fast growth, prominent impacts, uncertainty and ambiguity (Rotolo et al., 2015), hypes and high positive and negative expectations (Van Lente et al., 2013), and specific needs for tentative governance to address high uncertainty (Kuhlmann et al., 2019). The hypes and high positive and negative expectations associated with emerging technologies can be seen in AI policy documents, which present AI as a revolutionary, transformative and disruptive technology (Ulnicane et al., 2022) but also highlight concerns and challenges including safety, privacy and accountability (Ulnicane et al., 2021b).
Importantly for this study of framing the purpose of AI development and use, AI, like any technology, is seen as being co-shaped by the society and values it is embedded in and thus as having important political, social and cultural aspects (Jasanoff, 2016; Schatzberg, 2018; Winner, 2020). It is not just a neutral tool serving goals defined by others (Hare, 2022; Schatzberg, 2018; Stilgoe, 2020) but represents collectively designed future ways of living, power relations and value systems (Ulnicane et al., 2022).
2.2 Policy framing approach
The policy framing approach (Head, 2022; Rein & Schon, 1993, 1996; Schon & Rein, 1994) offers a productive way to analyse policy debates. It focusses on how, in policy practice, policy stories influence the shaping of laws, regulations, allocation decisions, institutional mechanisms and incentives. Policy frames help to structure and inform policy debates and practice situated in a specific political and historical context. According to Martin Rein and Donald Schon (1993),
framing is a way of selecting, organizing, interpreting, and making sense of a complex reality to provide guideposts for knowing, analysing, persuading, and acting. A frame is a perspective from which an amorphous, ill-defined, problematic situation can be made sense of and acted on (Rein & Schon, 1993: 146), and in such frames ‘facts, values, theories, and interests are integrated’ (Rein & Schon, 1993: 145).
Policy frames are ‘diagnostic/prescriptive stories that tell, within a given issue terrain, what needs fixing and how it might be fixed’ (Rein & Schon, 1996: 89). Analysis of policy framing helps to demystify political rhetoric and problematise how policy problems are defined, debated and acted upon (Head, 2022). This paper examines rhetorical frames, which ‘are constructed from the policy-relevant texts that play important roles in policy discourse, where the context is one of debate, persuasion, or justification’ (Rein & Schon, 1996: 90). However, when analysing rhetorical frames, it is important to examine not only what is said but also omissions, silences and kinds of politics hidden in the framing (Bacchi, 2000). According to Carol Bacchi (2000), it is necessary ‘to recognize the non-innocence of how ‘problems’ get framed within policy proposals, how the frames will affect what can be thought about and how this affects possibilities for action’ (Bacchi, 2000: 50).
Rein and Schon associate policy frames with public controversies and pluralism, as ‘in any given issue terrain, there are almost always a variety of frames competing for both meaning and resources’, where ‘the contest over meaning gives legitimacy to the claim for economic and social resources’ (Rein & Schon, 1996: 95). According to Schon and Rein, these situated policy controversies with their competing frames structure policy debates and practices and shape the design of policies (Schon & Rein, 1994). For them, the design of policy is a social and political process involving the divergent interests and powers of actors. In their approach to policy design, Schon and Rein emphasize the interaction of multiple designers, redesign in use and shifting contexts.
The concept of policy frames as well as related notions of policy paradigms, discourses and narratives have been productively applied to analyse technology policy (see, e.g., Diercks et al., 2019; Mitzner, 2020; Ulnicane, 2016), the governance of emerging technologies (Jasanoff, 2003), and more recently AI policy (see, e.g., Köstler & Ossewaarde, 2022; Nordström, 2021; Ulnicane et al., 2021a, 2022). While previous studies of framing in AI policy have focussed on governance, uncertainty and national policy, this paper contributes by exploring policy controversies in framing the purpose of AI development and use.
2.3 Shifting frames of technology policy
Technology policy globally is undergoing major changes in framing (Diercks et al., 2019; Mazzucato, 2021; Schot & Steinmueller, 2018; Ulnicane, 2016). Traditionally, technology policy largely focussed on economic growth, productivity and competitiveness and was justified by market failures and system failures requiring government intervention when the market did not provide sufficient support, investment and networks for the development and use of new technologies. Recently, the key assumptions of this frame have increasingly been challenged by arguments that technology development should be directed towards societal objectives, known as Grand societal challenges and the United Nations Sustainable Development Goals, in areas such as climate change, health and poverty reduction (Diercks et al., 2019; Mazzucato, 2021; Schot & Steinmueller, 2018; Ulnicane, 2016). Rather than fully replacing the previous economically oriented technology policy frame, the new focus on societal challenges can be seen as a layering process in which old and new technology policy paradigms co-exist and in practice sometimes overlap.
Elements of both of these technology policy frames are part of ongoing discussions about AI, which cover a broad range of issues from economic competitiveness (Justo-Hanani, 2022; Ulnicane et al., 2021b, 2022), depicting global AI development as a new space race (Ulnicane, 2022) or a new cold war (Bryson & Malikova, 2021), to AI’s potential contribution to sustainability and environmental and social goals (Sætra, 2021; van Wynsberghe, 2021; Vinuesa et al., 2020). It is useful to have a closer look at the key elements of these two stylized technology policy frames, so that it can later be examined how they play out in AI policy debates. While technology policy frames address a range of questions, including the objectives and organization of technology development and use as well as the policy instruments to support it, to answer the research question of this study, this paper highlights how different frames articulate the purpose of technology policy.
Technology policy emerged as a separate policy field in the 1950s and 1960s (Godin, 2004; Mitzner, 2020; Schot & Steinmueller, 2018). Since then, technology policy has been closely linked to economic policy, prioritizing the contribution of technology to national economic objectives such as growth, productivity and competitiveness. While the evidence of links between technology, growth and productivity has been questioned (Godin, 2004), this frame has become very influential and has been diffused internationally by the Organization for Economic Cooperation and Development (Godin, 2004; Henriques & Larédo, 2013).
An important element of the traditional economic framing of technology policy is its focus on national competitiveness. It depicts technology development internationally as a competition in which one country is winning and acquiring political, military and economic superiority, while others are losing and are left behind. There are many examples of an economic competitiveness discourse claiming that other countries are more advanced in technology development. For example, during the twentieth century, the perception in Great Britain was that other countries such as Germany, the United States, the Soviet Union and Japan were technologically superior (Edgerton, 2019). Such sentiments, that other countries are better at technology development, are typically accompanied by calls to national governments to support technology development with more investment and other policy measures. Major investments in US technology followed fears about Soviet supremacy in space technology in the late 1950s and worries about Japanese technological supremacy in the 1980s (O’Mara, 2019). The gradual emergence and expansion of the supranational European Union’s technology policy since the 1960s has been largely driven by concerns about Europe’s technology gap, first with the US, then Japan and recently China (Mitzner, 2020). These ideas have also become popular in policy discussions surrounding AI, where it is argued that the development of AI is largely driven by the rivalry between the two major AI superpowers, the US and China (Lee, 2018). While the economic competitiveness discourse is very popular and plays a major role in technology policy, it has been criticised. Paul Krugman (1994) has argued that it is misleading because states do not compete in the same way as corporations and international development is not necessarily a zero-sum game in which one country wins and others lose; it can also be a positive-sum game in which many can benefit from technological advances elsewhere.
In the early twenty-first century, the traditional technology policy frame with its objective to contribute to economic growth has been increasingly challenged. In the context of climate change and escalating societal concerns, having economic growth as a key objective has been questioned (De Saille et al., 2020). Instead, the idea that technology policy should tackle the so-called Grand societal challenges in areas such as environment, energy and health has gained increasing prominence around the world (Boon & Edler, 2018; Diercks et al., 2019; Kaldewey, 2018; Kaltenbrunner, 2020; Ludwig et al., 2022; Ulnicane, 2016; Wanzenbock et al., 2020). To address complex societal challenges, it is argued that boundary-spanning collaborations are needed that bring together heterogeneous partners from diverse disciplines and sectors, including science, business, policy-makers and civil society (Ulnicane, 2016). Despite the widely shared recognition that initiatives addressing societal challenges require the inclusion and participation of a broad range of stakeholders, concerns have been raised that in practice dominant actors and their perspectives might still be prioritized (Ludwig et al., 2022). Moreover, while some argue that Grand challenges span national borders and, therefore, require global collaborations, others emphasize their context-specificity and argue for local initiatives to address them (Wanzenbock et al., 2020).
Although the discourse of Grand societal challenges builds on earlier ideas such as the social function of science (Bernal, 1939), the past two decades have seen the launch of dedicated initiatives to tackle Grand challenges from national governments, international organizations, universities, research institutes and academic associations (Kaldewey, 2018; Ulnicane, 2016). The Grand challenges discourse is part of transformative technology policy and of initiatives to achieve the Sustainable Development Goals through mission-oriented policies (Mazzucato, 2021; Schot & Steinmueller, 2018). While the traditional technology policy frame focusses on the supply side, challenge- and mission-oriented policies prioritize the demand side (Boon & Edler, 2018; Diercks et al., 2019). The idea that technologies should be developed according to societal needs and values is at the core of the Responsible Research and Innovation concept, which since 2010 has played an important role in technology policy in Europe (De Saille, 2015; Owen et al., 2021; Stilgoe et al., 2013). While in recent technology policy Grand challenges are typically understood as societal challenges of broad social relevance, on some occasions the term Grand challenges has also been used to describe purely scientific and technological challenges (Ulnicane, 2016), including technological competitions such as the DARPA (Defence Advanced Research Projects Agency) Grand Challenge (Kaldewey, 2018).
Despite the inspirational discourses surrounding Grand challenge initiatives, it is recognized that tackling Grand challenges is an uncertain, open-ended and highly complex endeavour whose successful outcome cannot be guaranteed (Diercks et al., 2019; Kaldewey, 2018; Ludwig et al., 2022; Ulnicane, 2016; Wanzenbock et al., 2020). Moreover, technology does not necessarily play the main role in addressing complex challenges such as climate change, which also require economic, political, institutional, social and other changes. Grand challenges are seen as ‘wicked problems’ (see, e.g., Kaldewey, 2018; Ludwig et al., 2022; Wanzenbock et al., 2020). Horst Rittel and Melvin Webber (1973) argued that nearly all public policy issues are ill-defined ‘wicked problems’, which differ significantly from definable and solvable problems in the natural sciences (Rittel & Webber, 1973). ‘Wicked problems’ are unruly and intractable problems, characterized by their complexity, uncertainty and value divergence (Head, 2019, 2022; Peters, 2017). Brian Head suggests that ‘the governance of wicked problems is less about designing elegant science-based solutions and more about implementing ‘coping’ strategies, which manage uncertainties, strengthen community capabilities and build resilience across all sectors—social, economic and environmental’ (Head, 2022: 61).
Each technology policy frame is based on a different idea of technology and innovation (Diercks et al., 2019). The traditional frame focussing on economic growth has a strong pro-innovation bias and assumes that technology always has positive outcomes. In contrast, challenge-oriented policy recognizes that technology can have positive as well as negative outcomes for the environment, health and equality (Coad et al., 2021; Edgerton, 2019; Stilgoe, 2020). These questions have featured prominently in AI debates about the positive and negative impacts of AI, including on jobs, democracy and justice (see, e.g., Crawford, 2021; Eubanks, 2019; Pasquale, 2015; Zuboff, 2019).
The recent rise of challenge-oriented policy has been described as a ‘normative turn’ in which policy not only optimizes the innovation system to improve economic competitiveness and growth but also induces strategic directionality and guides processes of transformative change towards desired societal objectives (Diercks et al., 2019: 884). However, describing the recent emergence of challenge-oriented policies as a ‘normative turn’ is misleading because it implies that traditional policy focussing on economic growth and competitiveness is purely technocratic, value-neutral and non-normative. It is important to recognize that both technology policy frames are normative and based on political choices about which values and norms to prioritize and support with public resources and other measures. Prioritizing and providing political support for policy that promotes economic growth, competitiveness, efficiency and productivity is also a highly normative political choice based on certain values, expectations and norms. Thus, focussing on diverse frames of technology policy highlights the political aspects of technology and its policy, drawing attention to the mutual shaping of technologies and politics in terms of values, distribution of power and desirable futures (Jasanoff, 2016; Winner, 2020). These political aspects are also highly important in understanding the contestations and controversies that currently surround AI development.
While there is a lot of variation within each of the two main technology frames (Diercks et al., 2019) introduced here, for the purposes of this paper, two stylized frames are examined: a traditional one based primarily on ideas about the centrality of economic growth and competitiveness, and another focussing on Grand challenges and the Sustainable Development Goals. Although AI policy documents cover a broad range of topics, including the impacts of AI on jobs, security and risks, this paper focusses on how these documents articulate the overarching objectives of AI development and use according to the two stylized technology policy frames outlined above.
3 Empirical insights on framing the purpose of AI development and use
To provide insights on how the purpose of AI development and use is framed, this study examines AI policy documents. Policy documents here ‘are treated as vehicles of messages, communicating or reflecting official intentions, objectives, commitments, proposals, ‘thinking’, ideology and responses to external events’ (Freeman & Maybin, 2011: 157). They are seen as policy-relevant texts that play important roles in policy discourse and debate, persuasion, or justification (see above on rhetorical frames).
3.1 Methods and data sources
This article examines a pre-existing dataset of AI policy documents (Ulnicane et al., 2021a) that includes 49 policy documents (see Annex 1) launched by national governments, international organizations, consultancies and think tanks in the European Union and the United States from 2016 to 2018, namely, during the time when the main initial AI policy documents were launched around the world. These documents have been selected according to a number of criteria, such as a strong focus on overarching AI policy and being a stand-alone and self-contained document (for more on the dataset, see Ulnicane et al., 2021a). The focus here is on AI policy documents rather than ethics guidelines, which are analysed elsewhere (see, e.g., Jobin et al., 2019; Schiff et al., 2021); however, it has to be admitted that there is some overlap between the two, e.g., some policy documents also include ethical principles.
For the purpose of this study, these documents have been analysed in line with the research questions and conceptual framework outlined above, namely, how they frame the purpose of AI development and use in terms of the two stylized technology policy frames of economic growth and societal challenges. In particular, the focus here is on common features in how different policy documents frame the purpose of AI.
3.2 Economic growth and competitiveness frame
When reading AI policy documents, it is possible to find evidence for both stylized policy frames—prioritizing economic growth as well as societal challenges. Ideas from the traditional economic frame are highly visible in AI policy. AI is presented as a driver of economic growth and a major economic opportunity, which should be fully exploited to reap the economic benefits of AI. Positive influence on economic growth is seen as one of the main benefits of AI, with the expectation that ‘AI has the potential to create a new basis for economic growth and to be a main driver for competitiveness’ (European Commission, 2017: 4). Some documents mention specific forecasts about AI’s influence on growth rates. For example, the US Executive Office of the President (2016a: 6-7) states that ‘AI has the potential to double annual economic growth in the countries analysed by 2035’, while the report from the UK All-Party Parliamentary Group on AI includes an estimate that ‘AI will boost economic growth in the UK by adding £140 billion to the UK economy by 2034, and boost labour productivity by 25% across all sectors, including in Britain’s strong pharmaceutical and aerospace industries’ (Big Innovation Centre/All-Party Parliamentary Group on Artificial Intelligence, 2017b: 23).
Similarly, in other documents, increases in economic growth due to AI are mentioned alongside boosts to productivity, efficiencies and cost savings (see, e.g., European Commission, 2018a; House of Lords, 2018). The focus on economic growth also includes positive expectations about the potential contribution of AI to new ideas and innovation (European Commission, 2018a) and optimism about the promise of technological innovation (Thierer et al., 2017), thus making explicit the pro-innovation bias of the economic growth discourse.
An important part of the discourse about the economic growth potential of AI is the focus on economic competitiveness, depicting AI development as taking place ‘amid fierce global competition’ (European Commission, 2018b: 2). AI advancements are seen as boosting competitiveness around the world, from increasing and maintaining US national competitiveness (Executive Office of the President, 2016c) to improving the EU’s competitiveness (European Economic and Social Committee, 2017). To fully exploit AI’s contribution to competitiveness, policy documents make a number of policy recommendations. Greater federal investment in AI research and development is seen as essential to maintain US competitiveness (IEEE-USA, 2017), while providing a qualified workforce is presented as an urgent issue for maintaining EU competitiveness (IEEE European Public Policy Initiative, 2017) and reforming tax frameworks is suggested to assure the UK’s global competitiveness (Big Innovation Centre/All-Party Parliamentary Group on Artificial Intelligence, 2017d). On the other hand, policy discussions tend to present regulation as potentially damaging for competitiveness, associating it with regulatory burden and, for example, claiming that AI regulation could reduce innovation and competitiveness for UK industry (House of Lords, 2018). The main exception here is the set of documents launched by the European Commission, which present a solid European ethical and regulatory framework as a prerequisite and a unique feature of the EU within the global AI competition (European Commission, 2018b).
An important part of the economic competitiveness discourse is the fear of lagging behind and missing out on the opportunities offered by the AI revolution. This is the case with the European Commission (2018b), which points out that the EU is behind in private investment in AI compared with Asia and North America. Therefore, the European Commission argues that it is crucial for the EU to create an environment that stimulates investment, to use public funding to leverage private investment and to build on its assets such as its world-leading AI research community (European Commission, 2018b). The need to take measures to be competitive is presented as urgent and essential, as can be seen in this quote:
One of the main challenges for the EU to be competitive is to ensure the take-up of AI technology across its economy. European industry cannot miss the train. (European Commission, 2018b: 5)
Not undertaking the necessary measures is associated with missing the benefits of AI and with negative consequences, as suggested here: ‘without such efforts, the EU risks losing out on the opportunities offered by AI, facing a brain-drain and being a consumer of solutions developed elsewhere’ (European Commission, 2018b: 5). Thus, in the case of the emerging technology of AI, policy tends to be framed in a traditional discourse about economic competitiveness and fears of being left behind by other countries and regions that are perceived as technologically superior. To sum up, the traditional technology policy frame, with its focus on the contribution of technology to economic growth, productivity and competitiveness, is strongly present in the way AI policy documents frame the purpose of AI development and use.
3.3 Societal challenges frame
In addition to the traditional economic growth and competitiveness frame, policy documents also emphasize the potential of AI to contribute to solving a range of societal problems. They highlight that AI should only be developed and used in ways that serve the global social and environmental good (European Group on Ethics in Science and New Technologies, 2018) and should enable the achievement of the UN Sustainable Development Goals, which concern eradicating poverty, illiteracy and gender and ethnic inequality, and combating the impact of climate change (IEEE, 2017). AI is expected to ‘be central to the achievement of the Sustainable Development Goals (SDGs) and could help to solve humanity’s grand challenges by capitalizing on the unprecedented quantities of data now generated on sentient behaviour, human health, commerce, communication, migration and more’ (International Telecommunication Union, 2017: 6).
Policy documents include very positive statements about the role of AI in solving a range of major societal challenges: ‘AI is helping us to solve some of the world’s biggest challenges: from treating chronic diseases or reducing fatality rates in traffic accidents to fighting climate change or anticipating cybersecurity threats’ (European Commission, 2018b: 2).
The European Commission claims that there are many examples ‘of what we know AI can do across all sectors, from energy to education, from financial services to construction. Countless more examples that cannot be imagined today will emerge over the next decade’ (European Commission, 2018b: 2). These strong and highly optimistic claims about AI solving societal challenges ignore that, as explained earlier, addressing challenges such as climate change and global health is a highly complex and uncertain ‘wicked problem’: success cannot be guaranteed, and technology is not the only or even the main ‘solution’. A somewhat more cautious tone about the potential of AI to address societal challenges can be found in several reports that recommend carrying out studies not only on the strengths but also on the weaknesses of using AI for achieving the SDGs (IEEE, 2017; Villani, 2018).
In AI policy, inclusive and participatory governance that brings together diverse stakeholders nationally and internationally is seen as a necessity for addressing societal challenges. Policy documents suggest that the use of AI for facilitating societal benefits should be based on deliberative democratic processes and on a global effort towards equal access to AI, fair distribution of benefits and equal opportunities across and within societies (European Group on Ethics in Science and New Technologies, 2018). When discussing the role of international fora such as the G7/G20, the United Nations and the Organisation for Economic Co-operation and Development in AI policy, the European Commission states that the EU ‘will promote the use of AI, and technologies in general, to help solve global challenges, support the implementation of the Paris Climate agreement and achieve the United Nations Sustainable Development Goals’ (European Commission, 2018b: 19).
The AI for Good Global Summit Report in 2017 emphasizes that a diverse range of people, including the most vulnerable, should be at the centre of designing AI to tackle the SDGs, and suggests creating a repository of case studies, activities, partnerships and best practices as a resource for understanding how different stakeholders are solving Grand challenges using AI (International Telecommunication Union, 2017). While inclusive governance is seen as important for using AI to address societal issues, insights from practice suggest that deliberative forums can be captured by the vested interests of the most resourceful actors (Ulnicane et al., 2021a, 2021b).
The concept of Grand challenges in AI policy documents is used not only to describe issues of broad social relevance but also in a narrower sense. In the UK Industrial Strategy, AI is identified as one of four Grand challenges (the other three being the future of mobility, clean growth and the ageing society) in which the UK can lead the world in the years to come (HM Government, 2018). This approach resembles a traditional sectoral policy rather than directing AI towards actually solving specific societal challenges. Occasions of understanding a Grand challenge in AI policy as a technological rather than a societal challenge include describing the creation of a computer that could win at Go as an uncompleted Grand challenge in AI (The Royal Society, 2017), or mentions of initiatives such as DARPA’s Cyber Grand Challenge, which involved AI agents autonomously analysing and countering cyberattacks, and the Camelyon Grand Challenge for metastatic cancer detection (Executive Office of the President, 2016c).
To sum up, the recent policy frame focussing on the contribution of technology to addressing Grand societal challenges and the Sustainable Development Goals can be found in the optimistic statements in AI policy documents about the potential of AI to address the most pressing social issues today. However, in these documents, AI is typically presented as a simple technological fix to social issues, largely ignoring the uncertainty and complexity of such ‘wicked’ problems.
3.4 Can economic and societal frames be combined?
In AI policy documents, the two policy frames of economic and social goals are mentioned next to each other (see, e.g., European Commission, 2018b; HM Government, 2018), suggesting that they are seen as complementary and compatible rather than as mutually exclusive alternatives. For example, the US National AI Research and Development Strategic Plan states that ‘AI advancements are providing many positive benefits to society and are increasing US national competitiveness’ (Executive Office of the President, 2016c), while the European Group on Ethics in Science and New Technologies highlights that ‘Artificial intelligence, robotics and ‘autonomous’ systems can bring prosperity, contribute to well-being and help to achieve European moral ideals and socio-economic goals if designed and deployed wisely’ (European Group on Ethics in Science and New Technologies, 2018: 20).
Some documents suggest paradigm shifts combining growth and energy efficiency, as can be seen in this quote from a French document:
A truly ambitious vision for AI should therefore go beyond mere rhetoric concerning the efficient use of resources; it needs to incorporate a paradigm shift toward a more energy-efficient collective growth which requires an understanding of the dynamics of the ecosystems for which this will be a key tool. We should take the opportunity to think of new uses for AI in terms of sharing and collaboration that will allow us to come up with more frugal models for technology and economics. (Villani, 2018: 102)
In the quote above, the idea of a paradigm shift and new models for technology and economics is mentioned rather briefly, without much elaboration of what it would entail. It is a typical feature of policy documents that intentions and objectives are merely mentioned, without further discussion and reflection on if and how economic growth is compatible with societal challenges, when and under what conditions the two are complementary or in tension, and what potential conflicts exist between them. The question of the compatibility of the two frames is an important omission in the AI policy documents. Thus, crucial AI policy controversies remain implicit and silent: does a focus on economic growth imply neglect of societal challenges? Is a focus on societal challenges compatible with current economic growth models? Is it possible for AI to address both economic growth and societal challenges, and what kind of measures and trade-offs would that require? AI policy documents are largely silent about the diversity of values, norms and interests behind each of these frames, thus ignoring crucial questions about their desirability and feasibility.
4 Conclusions
This study examined the articulation of the purpose of developing and using an emerging technology by looking at the policy frames surrounding AI as one of the key emerging technologies today. Using the two stylized technology policy frames—a traditional frame focussing on the contribution of technology to economic growth and competitiveness, and a more recent one prioritizing the contribution of technology to addressing societal challenges and the Sustainable Development Goals—this research reveals a layering of the two frames in AI policy, where both economic growth and the tackling of societal challenges are discussed.
The insights from the policy documents demonstrate that, while AI is a novel technology, its policy includes many ideas from the traditional frame that perceives an emerging technology as a source of economic growth, productivity and competitiveness, to be further enhanced by such well-known measures as investment in research and a skilled workforce. These measures are seen as important to avoid lagging behind other countries and missing out on the opportunities offered by an emerging technology. Thus, recent AI policy largely draws on a traditional policy frame about the need and measures to reap the economic benefits of emerging technologies.
In addition to traditional economic ideas, AI policy documents also include elements from the recent technology policy frame highlighting the importance of addressing societal challenges and the Sustainable Development Goals in areas such as energy, climate change and health, and of having participatory and inclusive governance to address them. However, in policy documents, AI is depicted as a simple technological solution to complex ‘wicked problems’, ignoring the uncertainties involved and overstating the role of technology as the main or even the only solution to societal issues that require a broader range of political, economic, social and other measures.
While AI policy documents are optimistic that AI can address both economic and societal objectives, they are largely silent about the compatibility of the two. Although the initial idea for this research was to examine controversies between the two frames, the examination of AI policy documents revealed that there is no open controversy. In the documents, the two frames are mentioned rather superficially, without much reflection on the diversity of norms, values and interests they involve. Examining the conceptual and practical synergies, trade-offs, conflicts and requirements of the well-intended but complex idea of combining economic and social objectives in AI development and use remains an important question for future research.
To summarize, this paper demonstrates a certain convergence in framing the purpose of AI development and use in terms of contribution to economic growth and societal challenges in the initial AI policy documents from Europe and the US. Future studies would benefit from extending the empirical scope to AI policy documents from other regions such as Asia, Latin America, the Middle East and Africa (see, e.g., Adams, 2021; Filgueiras, 2022; Kim, 2021; Lee, 2018; Tan & Taeihagh, 2021) and from looking not only at converging features but also at divergences. Furthermore, an important avenue for future research would be to analyse how the rhetoric in AI policy documents is followed up and implemented through specific AI policy actions and instruments. Additionally, it would be interesting to compare the framings found in AI policy to the discourses about other emerging technologies such as neurotechnology, biotechnology or quantum computing.
This research on AI policy frames contributes to an emerging research agenda on AI governance (see, e.g., Köstler & Ossewaarde, 2022; Radu, 2021; Taeihagh, 2021) that takes a critical lens to interrogate and demystify popular discourses, such as governing AI for growth, efficiency and competitiveness, that present AI governance as technocratic and value-neutral. Instead, this research agenda highlights the normative, social, political and power aspects of AI governance and of the discourses that support it. It reinvigorates some well-known and long-standing problematic issues in technology governance, such as focussing on technological fixes and solutions while struggling to deal with complex societal (‘wicked’) problems, as highlighted by David Collingridge already in his 1980 book on the social control of technology:
Ask technologists to build gadgets which explode with enormous power or to get men to the moon, and success can be expected, given sufficient resources, enthusiasm and organization. But ask them to get food for the poor; to develop transport systems for the journeys which people want; to provide machines which will work efficiently without alienating the men who work them; to provide security from war, liberation from mental stress, or anything else where the technological hardware can fulfil its function only through interaction with people and their societies, and success is far from guaranteed (Collingridge, 1980:15).
References
Adams, R. (2021). Can artificial intelligence be decolonized? Interdisciplinary Science Reviews, 46(1–2), 176–197. https://doi.org/10.1080/03080188.2020.1840225
af Malmborg, F., & Trondal, J. (2021). Discursive framing and organizational venues: mechanisms of artificial intelligence policy adoption. International Review of Administrative Sciences. https://doi.org/10.1177/00208523211007533 Advance online publication.
Bacchi, C. (2000). Policy as discourse: what does it mean? Where does it get us? Discourse: Studies in the Cultural Politics of Education, 21(1), 45–57. https://doi.org/10.1080/01596300050005493
Bareis, J., & Katzenbach, C. (2022). Talking AI into Being: The narratives and imaginaries of national AI strategies and their performative politics. Science, Technology, & Human Values, 47(5), 855–881. https://doi.org/10.1177/01622439211030007
Bernal, J. D. (1939). The social function of science. The MIT Press. 1967.
Boon, W., & Edler, J. (2018). Demand, challenges, and innovation making sense of new trends in innovation policy. Science and Public Policy, 45(4), 435–447. https://doi.org/10.1093/scipol/scy014
Bryson, J., & Malikova, H. (2021). Is there an AI cold war? Global Perspectives, 2(1), 24803. https://doi.org/10.1525/gp.2021.24803
Coad, A., Nightingale, P., Stilgoe, J., & Vezzani, A. (2021). The dark side of innovation. Industry and Innovation, 28(1), 102–112. https://doi.org/10.1080/13662716.2020.1818555
Collingridge, D. (1980). The social control of technology. The Open University Press.
Crawford, K. (2021). The atlas of AI. Yale University Press.
De Saille, S. (2015). Innovating innovation policy: The emergence of ‘responsible research and innovation.’ Journal of Responsible Innovation, 2(2), 152–168. https://doi.org/10.1080/23299460.2015.1045280
De Saille, S., Medvecky, F., van Oudheusden, M., Albertson, K., Amanatudou, E., Birabi, T., & Pansera, M. (2020). Responsibility beyond growth. Bristol University Press.
Dexe, J., & Franke, U. (2020). Nordic lights? National AI policies for doing well by doing good. Journal of Cyber Policy, 5(3), 332–349. https://doi.org/10.1080/23738871.2020.1856160
Diercks, G., Larsen, H., & Steward, F. (2019). Transformative innovation policy: Addressing variety in an emerging policy paradigm. Research Policy, 48(4), 880–894. https://doi.org/10.1016/j.respol.2018.10.028
Djeffal, C., Siewert, M. B., & Wurster, S. (2022). Role of the state and responsibility in governing artificial intelligence: a comparative analysis of AI strategies. Journal of European Public Policy. https://doi.org/10.1080/13501763.2022.2094987 Advance online publication.
Dolowitz, D. P., & Marsh, D. (2000). Learning from abroad: the role of policy transfer in contemporary policy-making. Governance: an International Journal of Policy and Administration, 13(1), 5–24. https://doi.org/10.1111/0952-1895.00121
Edgerton, D. (2019). The shock of the old. Technology & global history since 1900. Profile Books.
Eubanks, V. (2019). Automating inequality. How high-tech tools profile, police and punish the poor. Picador.
European Commission (2019). A Definition of Artificial Intelligence: Main Capabilities and Scientific Disciplines. Independent High-level Expert Group on Artificial Intelligence set up by the European Commission. Retrieved September 2, 2022, from https://digital-strategy.ec.europa.eu/en/library/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines
Filgueiras, F. (2022). Artificial intelligence policy regimes: comparing politics and policy to national strategies for artificial intelligence. Global Perspectives, 3(1), 32362. https://doi.org/10.1525/gp.2022.32362
Freeman, R., & Maybin, J. (2011). Documents, practices and policy. Evidence & Policy, 7(2), 155–170. https://doi.org/10.1332/174426411X579207
Godin, B. (2004). The new economy: What the concept owes to the OECD. Research Policy, 33(5), 679–690. https://doi.org/10.1016/j.respol.2003.10.006
Guenduez, A. A., & Mettler, T. (2022). Strategically constructed narratives on artificial intelligence: What stories are told in governmental artificial intelligence policies? Government Information Quarterly. https://doi.org/10.1016/j.giq.2022.101719
Hare, S. (2022). Technology is not neutral: A short guide to technology ethics. Publishing Partnership London.
Head, B. W. (2019). Forty years of wicked problems literature: Forging closer links to policy studies. Policy and Society, 38(2), 180–197. https://doi.org/10.1080/14494035.2018.1488797
Head, B. W. (2022). Wicked problems in public policy. Palgrave Macmillan.
Henriques, L., & Larédo, P. (2013). Policy-making in science policy: The ‘OECD model’ unveiled. Research Policy, 42(3), 801–816. https://doi.org/10.1016/j.respol.2012.09.004
Jasanoff, S. (2003). Technologies of humility: Citizen participation in governing science. Minerva, 41(3), 223–244. https://doi.org/10.1023/A:1025557512320
Jasanoff, S. (2016). The ethics of invention: Technology and the human future. WW Norton & Company.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(2019), 389–399. https://doi.org/10.1038/s42256-019-0088-2
Justo-Hanani, R. (2022). The politics of Artificial Intelligence regulation and governance reform in the European Union. Policy Sciences, 55(1), 137–159. https://doi.org/10.1007/s11077-022-09452-8
Kaldewey, D. (2018). The grand challenges discourse: Transforming identity work in science and science policy. Minerva, 56(2), 161–182. https://doi.org/10.1007/s11024-017-9332-2
Kaltenbrunner, W. (2020). Managing budgetary uncertainty, interpreting policy. How researchers integrate “grand challenges” funding programs into their research agendas. Journal of Responsible Innovation, 7(3), 320–341. https://doi.org/10.1080/23299460.2020.1744401
Kim, J. (2021). Promoting the ICT Industry for the future with fears from the past. Science and Public Policy, 48(6), 889–899. https://doi.org/10.1093/scipol/scab056
Köstler, L., & Ossewaarde, R. (2022). The making of AI society: AI futures frames in German political and media discourses. AI & Society, 37(1), 249–263. https://doi.org/10.1007/s00146-021-01161-9
Krugman, P. (1994). Competitiveness: A dangerous obsession. Foreign Affairs, 73(2), 28–44.
Kuhlmann, S., Stegmaier, P., & Konrad, K. (2019). The tentative governance of emerging science and technology—A conceptual introduction. Research Policy, 48(5), 1091–1097. https://doi.org/10.1016/j.respol.2019.01.006
Lee, K. F. (2018). AI superpowers China, Silicon Valley and the new world order. Houghton Mifflin Harcourt.
Ludwig, D., Blok, V., Garnier, M., Macnaghten, P., & Pols, A. (2022). What’s wrong with global challenges? Journal of Responsible Innovation, 9(1), 6–27. https://doi.org/10.1080/23299460.2021.2000130
Mazzucato, M. (2021). Mission economy: A moonshot guide to changing capitalism. Allen Lane.
Mitchell, M. (2019). Artificial intelligence: A guide for thinking humans. Penguin Books.
Mitzner, V. (2020). European Union Research Policy. Contested origins. Palgrave Macmillan.
Nordström, M. (2021). AI under great uncertainty: Implications and decision strategies for public policy. AI & Society. Advance Online Publication. https://doi.org/10.1007/s00146-021-01263-4
O’Mara, M. (2019). The code: Silicon Valley and the remaking of America. Penguin.
Ossewaarde, M., & Gulenc, E. (2020). National varieties of Artificial Intelligence discourses: Myth, utopianism, and solutionism in West European policy expectations. Computer, 53(11), 53–61. https://doi.org/10.1109/MC.2020.2992290
Owen, R., von Schomberg, R., & Macnaghten, P. (2021). An unfinished journey? Reflections on a decade of responsible research and innovation. Journal of Responsible Innovation, 8(2), 217–233. https://doi.org/10.1080/23299460.2021.1948789
Paltieli, G. (2021). The political imaginary of National AI strategies. AI & Society. Advance Online Publication. https://doi.org/10.1007/s00146-021-01258-1
Pasquale, F. (2015). The black box society. Harvard University Press.
Peters, B. G. (2017). What is so wicked about wicked problems? A conceptual analysis and a research program. Policy and Society, 36(3), 385–396. https://doi.org/10.1080/14494035.2017.1361633
Radu, R. (2021). Steering the governance of artificial intelligence: National strategies in perspective. Policy and Society, 40(2), 178–193. https://doi.org/10.1080/14494035.2021.1929728
Rein, M., & Schon, D. (1993). Reframing policy discourse. In F. Fischer & J. Forester (Eds.), The argumentative turn in policy analysis and planning (pp. 145–166). UCL Press.
Rein, M., & Schon, D. (1996). Frame-critical policy analysis and frame-reflective policy practice. Knowledge and Policy: The International Journal of Knowledge Transfer and Utilization, 9(1), 85–104. https://doi.org/10.1007/BF02832235
Rittel, H. W., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4(2), 155–169. https://doi.org/10.1007/BF01405730
Roberts, H., Cowls, J., Hine, E., Mazzi, F., Tsamados, A., Taddeo, M., & Floridi, L. (2021). Achieving a ‘Good AI Society’: Comparing the aims and progress of the EU and the US. Science and Engineering Ethics, 27(6), 1–25. https://doi.org/10.1007/s11948-021-00340-7
Rotolo, D., Hicks, D., & Martin, B. (2015). What is an emerging technology? Research Policy, 44(10), 1827–1843. https://doi.org/10.1016/j.respol.2015.06.006
Sætra, H. S. (2021). AI in context and the sustainable development goals: Factoring in the unsustainability of the sociotechnical system. Sustainability, 13(4), 1738. https://doi.org/10.3390/su13041738
Schatzberg, E. (2018). Technology: Critical history of a concept. The University of Chicago Press.
Schiff, D., Borenstein, J., Biddle, J., & Laas, K. (2021). AI ethics in the public, private, and NGO sectors: A review of a global document collection. IEEE Transactions on Technology and Society, 2(1), 31–42. https://doi.org/10.1109/TTS.2021.3052127
Schon, D., & Rein, M. (1994). Frame reflection: Toward the resolution of intractable policy controversies. Basic Books.
Schot, J., & Steinmueller, W. E. (2018). Three frames for innovation policy: R&D, systems of innovation and transformative change. Research Policy, 47(9), 1554–1567. https://doi.org/10.1016/j.respol.2018.08.011
Stilgoe, J. (2020). Who’s driving innovation? Palgrave Macmillan.
Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568–1580. https://doi.org/10.1016/j.respol.2013.05.008
Taeihagh, A. (2021). Governance of artificial intelligence. Policy and Society, 40(2), 137–157. https://doi.org/10.1080/14494035.2021.1928377
Taeihagh, A., Ramesh, M., & Howlett, M. (2021). Assessing the regulatory challenges of emerging disruptive technologies. Regulation & Governance, 15(4), 1009–1019. https://doi.org/10.1111/rego.12392
Tan, S. Y., & Taeihagh, A. (2021). Governing the adoption of robotics and autonomous systems in long-term care in Singapore. Policy and Society, 40(2), 211–231. https://doi.org/10.1080/14494035.2020.1782627
Ulnicane, I. (2016). ‘Grand challenges’ concept: A return of the ‘Big ideas’ in science, technology and innovation policy? International Journal of Foresight and Innovation Policy, 11(1–3), 5–21. https://doi.org/10.1504/IJFIP.2016.078378
Ulnicane, I. (2022). Against the new space race: Global AI competition and cooperation for people. AI & Society. https://doi.org/10.1007/s00146-022-01423-0 Advance online publication.
Ulnicane, I., Knight, W., Leach, T., Stahl, B. C., & Wanjiku, W.-G. (2021a). Framing governance for a contested emerging technology: Insights from AI policy. Policy and Society, 40(2), 158–177. https://doi.org/10.1080/14494035.2020.1855800
Ulnicane, I., Eke, D. O., Knight, W., Ogoh, G., & Stahl, B. C. (2021b). Good governance as a response to discontents? Déjà vu, or lessons for AI from other emerging technologies. Interdisciplinary Science Reviews, 46(1–2), 71–93. https://doi.org/10.1080/03080188.2020.1840220
Ulnicane, I., Knight, W., Leach, T., Stahl, B. C., & Wanjiku, W.-G. (2022). Governance of Artificial Intelligence: Emerging international trends and policy frames. In M. Tinnirello (Ed.), The global politics of Artificial Intelligence (pp. 29–55). CRC Press.
Van Lente, H., Spitters, C., & Peine, A. (2013). Comparing technological hype cycles: Towards a theory. Technological Forecasting and Social Change, 80(8), 1615–1628. https://doi.org/10.1016/j.techfore.2012.12.004
van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI and Ethics, 1(3), 213–218. https://doi.org/10.1007/s43681-021-00043-6
Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., Felländer, A., Langhans, S. D., Tegmark, M., & Fuso Nerini, F. (2020). The role of artificial intelligence in achieving the sustainable development goals. Nature Communications, 11(1), 1–10. https://doi.org/10.1038/s41467-019-14108-y
Wanzenbock, I., Wesseling, J., Frenken, K., Hekkert, M., & Weber, M. (2020). A framework for mission-oriented innovation policy: Alternative pathways through the problem-solution space. Science and Public Policy, 47(4), 474–489. https://doi.org/10.1093/scipol/scaa027
Winner, L. (2020). The Whale and the Reactor. A search for limits in an age of high technology (2nd ed.). The University of Chicago Press.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile books.
Acknowledgements
This study has benefitted from the feedback on earlier versions presented virtually in June 2022 at the workshop for the special issue Global Governance of Emerging Technologies (organized by Fudan University, China) and at the Global Transformations and Governance Challenges conference (organized by Leiden University, the Netherlands). This research has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under the Grant Agreement No.945539 (HBP SGA3).
Ethics declarations
Conflict of interest
The author has no competing interests to declare that are relevant to the content of this article.
Annex 1
Dataset of AI policy documents analysed (in alphabetical order).
1. Accenture (2017) Embracing artificial intelligence. Enabling strong and inclusive AI driven growth.
2. Big Innovation Centre/All-Party Parliamentary Group on Artificial Intelligence (2017a) APPG AI Findings 2017.
3. Big Innovation Centre/All-Party Parliamentary Group on Artificial Intelligence (2017b) Governance, Social and Organisational Perspective for AI. 11 September 2017.
4. Big Innovation Centre/All-Party Parliamentary Group on Artificial Intelligence (2017c) Inequality, Education, Skills, and Jobs. 16 October 2017.
5. Big Innovation Centre/All-Party Parliamentary Group on Artificial Intelligence (2017d) International Perspective and Exemplars. 30 October 2017.
6. Big Innovation Centre/All-Party Parliamentary Group on Artificial Intelligence (2017e) What is AI? A theme report based on the 1st meeting of the All-Party Parliamentary Group on Artificial Intelligence. 20 March 2017.
7. Bowser, A., M. Sloan, P. Michelucci and E. Pauwels (2017) Artificial Intelligence: A Policy-Oriented Introduction. Wilson Briefs. Wilson Center.
8. Campolo, A., M. Sanfilippo, M. Whittaker and K. Crawford (2017) AI Now 2017 Report. AI Now Institute, New York University.
9. CNIL (2017) Algorithms and artificial intelligence: CNIL's report on the ethical issues.
10. Crawford, K. and M. Whittaker (2016) The AI Now Report. The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term. AI Now Institute.
11. EDPS (2016) Artificial Intelligence, Robotics, Privacy and Data Protection. Room document for the 38th International Conference of Data Protection and Privacy Commissioners.
12. European Commission (2017) AI Policy Seminar: Towards an EU strategic plan for AI. Digital Transformation Monitor.
13. European Commission (2018a) Artificial Intelligence: A European Perspective.
14. European Commission (2018b) Artificial Intelligence for Europe. Communication.
15. European Commission (2018c) Coordinated Plan on Artificial Intelligence. Communication.
16. European Economic and Social Committee (2017) Artificial Intelligence—The consequences of Artificial intelligence on the (digital) single market, production, consumption, employment and society. Opinion.
17. European Group on Ethics in Science and New Technologies (2018) Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems.
18. European Parliament (2016) European Civil Law Rules in Robotics. Study for the JURI Committee.
19. European Parliament (2017) Report with recommendations to the Commission on Civil Law Rules on Robotics.
20. European Parliament (2018) Understanding Artificial Intelligence. Briefing EPRS.
21. Executive Office of the President (2016a) Artificial Intelligence, Automation, and Economy. Report.
22. Executive Office of the President (2016b) Preparing for the future of artificial intelligence. National Science and Technology Council Committee on Technology.
23. Executive Office of the President (2016c) The National Artificial Intelligence Research and Development Strategic Plan. National Science and Technology Council. Networking and Information Technology Research and Development Subcommittee.
24. Future of Humanity Institute et al. (2018) The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation.
25. Government Office for Science (2016) Artificial Intelligence: opportunities and implications for the future of decision making.
26. Hall, W. and J. Pesenti (2017) Growing the Artificial Intelligence Industry in the UK.
27. HM Government (2018) Artificial Intelligence Sector Deal. 26 April 2018.
28. House of Commons Science and Technology Committee (2016) Robotics and artificial intelligence. Fifth report of session 2016–17.
29. House of Lords (2018) AI in the UK: ready, willing and able?
30. IEEE (2017) Ethically aligned design. A vision for prioritizing human well-being with autonomous and intelligent systems. Version 2 – for public discussion.
31. IEEE European Public Policy Initiative (2017) Artificial Intelligence: Calling on Policy Makers to Take a Leading Role in Setting a Long-Term AI Strategy. Position Statement.
32. IEEE-USA (2017) Artificial Intelligence Research, Development & Regulation. Position Statement.
33. Information Commissioner’s Office (2017) Big data, artificial intelligence, machine learning and data protection. Data Protection Act and General Data Protection Regulation.
34. International Telecommunication Union (2017) AI for Good Global Summit Report 2017, Geneva, 7–9 June 2017.
35. IPPR (2017) Managing automation: Employment, inequality and ethics in the digital age. Discussion Paper.
36. Ministry of Economic Affairs and Employment (2017) Finland’s Age of Artificial Intelligence.
37. Ponce Del Castillo, A. (2017) A Law on Robotics and Artificial Intelligence in the EU? Foresight Brief. European Trade Union Institute ETUI.
38. Rathenau Institute (2017) Human Rights in the Robot Age. Challenges arising from the use of robotics, artificial intelligence, and virtual and augmented reality. Report for the Parliamentary Assembly of the Council of Europe.
39. SGPAC (2017) Governance, Risk & Control: Artificial Intelligence. Effective Deployment, Management and Oversight of Artificial Intelligence (AI). Version 1.0. 22 March 2017. SGPAC Consulting & Advisory.
40. Tata Leading the Way with Artificial Intelligence: The Next Big Opportunity for Europe. TCS Global Trend Study—Europe. Tata Consultancy Services.
41. The 2015 panel (2016) Artificial Intelligence and life in 2030. One hundred year study on artificial intelligence. Report of the 2015 study panel.
42. The Federal Government (2018) Artificial Intelligence Strategy. November 2018.
43. The Royal Society (2017) Machine learning: the power and promise of computers that learn by example.
44. Thierer, A., A. Castillo O’Sullivan, and R. Russell (2017) Artificial Intelligence and Public Policy. Report. Mercatus Center, George Mason University.
45. UNI Global Union (2017) Top 10 Principles for ethical artificial intelligence. The future world of work.
46. Villani, C. (2018) For a meaningful artificial intelligence. Towards a French and European Strategy.
47. Vinnova (2018) Artificial Intelligence in Swedish business and society.
48. Whittaker, M., K. Crawford, R. Dobbe, G. Fried, E. Kaziunas, V. Mathur, S. Myers West, R. Richardson, J. Schultz, O. Schwartz (2018) AI Now Report 2018.
49. World Economic Forum (2018) Artificial Intelligence for the Common Good. Sustainable, Inclusive and Trustworthy. White Paper for attendees of the WEF 2018 Annual Meeting.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Ulnicane, I. Emerging technology for economic competitiveness or societal challenges? Framing purpose in Artificial Intelligence policy. GPPG 2, 326–345 (2022). https://doi.org/10.1007/s43508-022-00049-8