1 Introduction

Artificial Intelligence (AI) governance has become a top priority for policymakers worldwide (Gasser & Almeida, 2017; Marchant et al., 2020). As countries develop their national AI strategies and governance systems, there is a growing realization that the transnational nature of the challenges posed by AI requires governments and other actors to work together across borders. Hence, the international arena has seen a variety of cross-border collaborations on AI governance. Consequently, studies in the AI governance literature have investigated and proposed global AI governance mechanisms, i.e., governance models, structures, tools, and related instruments for governing AI at a level beyond and across national governments. Proposals include a Group of 20 (G20) coordinating committee for the governance of artificial intelligence (CCGAI) (Jelinek et al., 2021); a new informal intergovernmental organization, the International Artificial Intelligence Organization (IAIO) (Erdélyi & Goldsmith, 2018); and multistakeholder institutions and standards (Johnson & Bowman, 2021).

However, the lack of cooperation between three key players—the United States (US), China, and the European Union (EU)—is one major obstacle to establishing a global AI governance mechanism. The three actors lead AI developments globally, pursue competing goals for AI governance, and differ in the values and principles that shape their governance approaches. As a result, they approach AI governance differently and are unable to agree on common goals, and this inability poses risks for humanity. This is a collective action problem. Any proposal for global AI governance must factor in this situation, given the influence these three entities wield, or it is likely doomed to failure. Therefore, this paper analyzes the challenges to establishing a global governance mechanism for AI, focusing on the AI governance activities of the US, China, and the EU, and proposes solutions from a collective action perspective.

Collective action—the action of a group working towards a common objective—has been a growing area of research since Mancur Olson, an American political economist, questioned the ability of groups to advance the collective interests of their members (Sandler, 2015). Researchers have since examined how collective action problems—situations that arise when members of a group act in ways that put their individual interests above the group’s shared interests, resulting in unfavorable outcomes for all—can be resolved and under what conditions (Gardner et al., 1990; Jagers et al., 2020; Ostrom, 2010; Sandler, 2015). Findings from the collective action literature can therefore provide useful insights for evaluating how the actions of the US, China, and the EU can be coordinated in global AI governance endeavors.

In this conceptual paper, I review the literature on AI governance and collective action and apply insights from them to answer the question: How can a collective action perspective inform global AI governance efforts? Studies on global AI governance and collective action were selected for review based on their relevance and the distinct value each contributed toward a comprehensive overview of the AI governance landscape and how collective action applies.

Findings revealed that Ostrom’s design principles for common-pool resources (CPRs) and Jagers et al.’s analytical framework for large-scale collective action offer important considerations for designing effective institutions for global AI governance, factoring in the non-cooperative approaches of the US, China, and the EU. A multilevel polycentric arrangement of AI governance mechanisms is more likely to succeed than a single centralized global governance mechanism. Enforcement of AI governance rules is critical for implementation success and should be operationalized through monitoring and sanctioning mechanisms, supported by conflict-resolution and information-provision mechanisms.

The rest of the paper proceeds as follows. Section 2 gives a synoptic view of the AI governance landscape. Section 3 focuses on the divergent approaches of the three key players—the US, China, and the EU. Section 4 identifies the barriers to cooperation on global AI governance resulting from their actions. Section 5 highlights the risks of their non-cooperation. Section 6 summarizes studies in the collective action literature relevant to global AI governance. Section 7 applies findings from these studies to propose solutions for global AI governance, factoring in the divergent approaches of the US, China, and the EU. Section 8 concludes with recommendations for global AI governance efforts and areas for further research.

2 Understanding the AI Governance Landscape

AI governance is generally defined as the “mechanisms and processes that shape and govern AI” (Butcher & Beridze, 2019, p. 89). This is achieved through a variety of means for influencing the development and application of AI, ranging from hard approaches like governance institutions, frameworks, regulations, and legislative instruments to softer approaches like ethical guidelines, societal norms, and industry standards and practices (Butcher & Beridze, 2019; ÓhÉigeartaigh et al., 2020).

The rapid development of AI has prompted various actors to explore how this transformative technology can be harnessed for their benefit, leading to diverse actors and levels of involvement. As a result, the AI governance literature often describes the landscape as “fragmented” (Schmitt, 2022), “underdeveloped” (Naidoo, 2021), “unorganized,” and “immature” (Butcher & Beridze, 2019). This lack of coordination and AI's transnational impact have driven international collaborations among several actors aiming to establish a global AI governance regime.

The AI governance literature groups these actors into three categories. Actors in the private sector include large multinational corporations that are heavily involved in the technical development and application of AI systems, e.g., Google, OpenAI, Microsoft, IBM, and Meta. These actors often canvass for a self-regulated or collective-industry regulation approach to AI governance that does not inhibit the development prospects of AI. They have been responsible for developing industry guidelines and standards on AI, AI ethics documents, research and development programs, governance frameworks, and AI strategies (Butcher & Beridze, 2019; Schiff et al., 2020).

The second category comprises actors in the public sector. It includes national governments and their agencies, whose activities have included publishing national AI strategies and policies that define their countries’ direction of AI development, as well as concluding international agreements on AI applications. There are also intergovernmental partnerships on AI facilitated by bodies such as the EU, G20, Organisation for Economic Co-operation and Development (OECD), and the United Nations (UN) (Naidoo, 2021; Schiff et al., 2020).

Lastly, non-governmental organizations (NGOs) such as professional organizations (e.g., The Institute of Electrical and Electronics Engineers [IEEE], The Royal Society), think tanks, advocacy groups, civil society organizations, and research institutes have also been very active on the international scene, organizing international workshops, setting standards, calling attention to issues, and drawing up recommendations on AI development, deployment, and use (Schiff et al., 2020; Schmitt, 2022).

There is disagreement in the literature about the impact of each category of actors on AI governance. Radu (2021), reporting on an evaluation by AlgorithmWatch of documents on AI ethical principles and guidelines, noted that the “majority of binding agreements and voluntary commitments that exist are proposed by the private sector” (p. 181). In contrast, Schiff et al. (2020) observed that public sector actors like government agencies have been the most active, producing more than half of the documents in their reviewed sample (p. 154). Others, like Cihon et al. (2020) and Schmitt (2022), have emphasized the important role played by NGOs in serving as watchdogs on AI issues and publishing documents that serve as reference points for international organizations. However, Schiff et al. (2020) questioned the level of influence NGOs can exert, given that they are often positioned as outsiders dependent on public and private sector actors to implement recommendations that are usually abstract and difficult to operationalize.

The role of international organizations has also been hotly debated. Schmitt (2022) noted that international organizations like the UN, OECD, European Commission, Partnership on AI (PAI), and standard-setting bodies like IEEE Standards Association, International Organization for Standardization (ISO) together with the International Electrotechnical Commission (IEC) have been exercising a high level of agency in bringing together an otherwise fragmented AI governance landscape within their existing governance architecture. He particularly observed that “the nascent AI regime that emerges is polycentric and fragmented but gravitates around the OECD, which holds considerable epistemic authority and norm-setting power” (p. 311). However, the limitations of international organizations in achieving the global governance of AI are also well documented. Some of these include the inability of international organizations to enforce the implementation of their AI rules by members; the different interpretations of such rules by members based on their diverse cultural contexts; the absence of the membership of influential actors; the interference of geopolitical interests of powerful members in the activities of international organizations; and the varying degrees of priority given to issues on the agenda of international organizations by each country as determined by their political will and availability of resources (Cihon et al., 2020; Johnson & Bowman, 2021; Schmitt, 2022).

Despite efforts by various actors, there is still no generally recognized governance mechanism for AI at a global level (Schmitt, 2022, p. 305). While several authors have proposed frameworks, there is no consensus on what an adequate governance structure should entail. There are different opinions on the scope of governance for such a mechanism, with authors like Butcher and Beridze (2019) arguing for a governance mechanism that focuses on narrow and specific AI-application areas like lethal autonomous weapons systems (LAWS), health care, and transportation, rather than a broader global framework for general AI applications. Arguments also exist on the type of governing instruments to adopt, ranging from hard instruments like binding regulations applied on a risk-based approach to soft instruments like voluntary standards, codes of conduct, norms, and ethical principles (Ala-Pietilä & Smuha, 2021; United Nations University, 2018).

The main debate in the literature centers on centralization versus decentralization in AI governance. Some authors advocate for a central mechanism led by a single authority, citing efficiency, reduced competition, and greater political power to effect changes (Cihon et al., 2020). However, centralization has also been observed to lead to inflexibility, limited scope, and low stakeholder participation. In contrast, a decentralized approach, with multiple governing bodies at different levels, is seen as offering greater agility, sensitivity to contextual issues, and improved stakeholder inclusion (Jelinek et al., 2021). The debate persists, with newer proposals reflecting both approaches.

As the search for a global AI governance mechanism continues, authors have proposed various alternatives. Jelinek et al. (2021) proposed an international organization, the CCGAI, as a multilateral mechanism. Gill and Germann (2022) suggested a globally coordinated digital commons approach to AI governance that integrates a normative approach and the distribution of governance capabilities across different layers, from the global to the local. Feijóo et al. (2020) advocated for a broad-ranging collaborative and dialogic outlook called ‘technology diplomacy’ between nations that leverages existing communication channels and may also require new ones. Erdélyi and Goldsmith (2018) proposed the establishment of a new informal intergovernmental organization, the IAIO, to serve as a multistakeholder forum and standard-setting body. In like fashion, United Nations University (2018) put forward an Intergovernmental Panel on AI (IPAI) modeled on the Intergovernmental Panel on Climate Change (IPCC), a large multistakeholder platform for climate change. After surveying different alternatives, Johnson and Bowman (2021) concluded that multistakeholder bodies as institutions and transnational standards as instruments would be the most effective approach to the global governance of AI, both functionally and politically.

These proposals have some merit, but almost all of them are prescriptive in their approach without taking into consideration a major obstacle to achieving global coordination on AI governance. The divergent approaches of three key players—the US, China, and the EU—stand in the way of global AI governance. Any global AI governance proposal that does not factor this in is likely to fail upon implementation. Therefore, in the following section, I focus on the divergent AI governance approaches of the US, China, and the EU.

3 Understanding the divergent approaches of the US, China, and the EU

The AI governance literature identifies the US and China as the two countries leading AI developments globally (Bard & Armstrong, 2019; Cheng & Zeng, 2022; Daly et al., 2019; Feijóo et al., 2020; Stix, 2021). However, the EU has also emerged as a third force to be reckoned with (Feijóo et al., 2020; Minkkinen et al., 2021; Stix, 2021). These three key players are driven by divergent, sometimes conflicting, agendas and motivations for global leadership in AI governance. Below, I provide a high-level summary of the approaches of each actor to AI governance.

3.1 US

The US approach to AI governance reflects a drive to maintain its competitive advantage as a global technological and economic superpower. Since the first Trump administration (which championed an undisguised devotion to ‘America First’), the US has published the most AI strategies and policy reports of any country (Ding, 2018). One such document, the American Artificial Intelligence Initiative, prioritized US leadership in five areas, namely investment in AI research and development; AI resources; AI governance standards; the AI workforce; and maintaining America’s advantage in AI through international collaborations (Parker, 2019). This vision has seen the US rank highly among the countries with the most technical competencies and commercial innovations in AI (Fu, 2021). In line with such an agenda, the US has adopted a market-driven and less restrictive approach to AI governance (Chun et al., 2024), prioritizing sector-specific governance, granting autonomy to several federal agencies, and allowing industry actors to influence the direction of governance through legislative consultations, self-regulation, and voluntary commitments (Luna et al., 2024; Mokry & Gurol, 2024).

At the time of writing, there is no federal law on AI; the most prominent policy action at the federal level is the Executive Order on Safe, Secure, and Trustworthy Development and Use of AI issued by the Biden Administration (Luna et al., 2024). However, a decentralized approach has emerged at the sub-national level, with different state legislatures and municipalities passing sector-specific regulations on AI application areas such as elections, child pornography, government use, education, and healthcare (Artificial Intelligence, 2024). Nonetheless, Parinandi et al. (2024) observed heterogeneity in the actions of states, finding that economic concerns such as unemployment and inflation, as well as the dominant party and political orientation of individual legislators, determine whether and which type of AI policies a state adopts. Despite the highly active sub-national scene on AI governance, there have been calls from different quarters for a more centralized and restrictive policy regime, with some actors advocating for a newly created AI regulatory structure instead of relying on existing mechanisms (Chun et al., 2024; Safe and Secure Innovation for Frontier AI Models Act, S.B. 1047, 2024; “Sen. Chuck Schumer,” 2023). These dynamics highlight the formative stage of the institutionalization of AI governance in the US.

3.2 China

China’s approach to AI governance complements, or rather extends, its authoritarian political system and centrally planned state. The Chinese Communist Party (CCP) has proactively embraced AI as a strategic means of governing digitally (Qiao-Franco & Bode, 2023; Zeng, 2020) and as a key element of its economic and social infrastructure (Wu et al., 2024). In line with this, the CCP strives to keep AI developments aligned with its national and core socialist values through a rule-based approach (Luna et al., 2024), ensuring that the private sector remains under state control (Wu et al., 2024). Chun et al. (2024), however, noted that despite the People’s Republic of China (PRC) having a seemingly centralized approach to AI governance, a great deal of regional competition and decentralized innovation occurs at the local level to foster economic development. The central government, while using general guidelines to ensure top-down control, allows selective application and lax enforcement of regulations for small and medium-sized enterprises (SMEs) and emerging tech startups seen as engines of innovation and economic growth. Experts from academia, think tanks, and startups are also involved in formulating, clarifying, and interpreting Chinese AI regulations, while local officials mainly ensure alignment with the state's ideological positions (Sheehan, 2024; Zhang, 2022).

In addition to leveraging AI to realize its local governance and economic development goals, the PRC has also identified AI as a ‘strategic technology’ to realize its global ambition of achieving geopolitical dominance and rivaling the US in its position as the foremost global economic and political powerhouse (Mokry & Gurol, 2024; Olugbade, 2024). In its New Generation Artificial Intelligence Development Plan released in 2017, China categorically expressed its ambition to become the world leader in AI by 2030 by transforming its AI industry into a trillion-yuan one and leading the pack in setting norms and ethical standards for AI use (Roberts et al., 2021). Hence, AI is governed as the new frontier for great power competition (GPC) with the US.

3.3 EU

Spotting a gap in the AI narratives created by the two major players, the EU has sought to differentiate itself by advancing a narrative of its own. The EU has strategically woven the discourse of ‘trustworthy AI’ into the global conversation on AI governance (Stix, 2021). By building on the success and international reach of the General Data Protection Regulation (GDPR), the EU’s focus on human-centered AI that prioritizes ethical concerns, data privacy, and fundamental human rights has positioned it as a third force in influencing the space for international governance of AI (Minkkinen et al., 2021). The EU’s policy efforts have drawn on four themes, namely trust; the complementarity of ethics and competitiveness; a European value-based approach; and Europe’s global leadership in Responsible AI, to support its cause as a global leader in AI governance (Minkkinen et al., 2021, p. 221).

More recently, the EU AI Act was released as the “world’s first comprehensive AI law” (“EU AI Act,” 2023). The AI Act takes a risk-based approach to regulating AI systems, classifying them into four risk categories, namely unacceptable risk, high risk, limited risk, and minimal risk, with each category subject to varying levels of regulatory requirements. The EU hopes the AI Act will have the same ‘Brussels Effect’ (Bradford, 2020) that the GDPR had in engendering compliance by actors outside the EU. Thus, AI governance is another means of exporting European values (Tyrangiel, 2024) and furthering the EU’s economic and foreign policy interests at the international level (Chun et al., 2024).
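As a rough illustration of this risk-based logic, the sketch below maps a few commonly cited example use cases to the four categories. It is a deliberate simplification for exposition, not the Act’s actual legal criteria; the tier mapping, names, and default behavior are my own assumptions.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk (prohibited)"
    HIGH = "high risk (strict requirements)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk (largely unregulated)"

# Simplified, illustrative mapping of commonly cited example use cases to tiers;
# the Act's real classification rests on detailed legal criteria and annexes.
EXAMPLE_TIERS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def regulatory_burden(use_case: str) -> str:
    """Look up an illustrative tier; unknown cases default to minimal risk here,
    which is itself a simplification made for this sketch."""
    tier = EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.value}"

for case in EXAMPLE_TIERS:
    print(regulatory_burden(case))
```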

The above summary highlights these three actors’ divergent approaches to AI governance as well as their varying stages of developing and implementing regulatory frameworks for AI systems (Dixon, 2022). The US AI regulatory landscape is still forming, characterized by ongoing debates about which direction to take, while the EU and China appear to have attained relative stability in their respective regulatory directions (Chun et al., 2024). These differences stand in the way of collaboration and have implications for global AI governance. Below, I explore the barriers to cooperation stemming from these differences.

4 Barriers to cooperation

Russia’s President, Vladimir Putin, is often cited in AI governance discussions for his view on AI’s global impact, famously stating, “Artificial intelligence is the future, not only for Russia but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world” (Aldana, 2017). This kind of reasoning, regardless of its validity, appears to be shared by decision-makers on AI governance in the US, China, and the EU. These three are not content to govern AI developments only in their respective jurisdictions but actively jostle for global leadership in AI governance.

Despite all three actors recognizing the need for some common standards, rules, and norms for AI governance at an international level, they are unable to cooperate because each wants to assume global leadership, and they differ in their proposals for how global regulation and international cooperation should happen (Mokry & Gurol, 2024). Through the AI Act, the EU has sought to strategically establish itself globally as the leading rule maker on AI (Krasodomski & Buchser, 2024). However, some of the EU’s positions on AI governance are perceived as innovation-stifling (Suominen, 2020) and viewed less favorably by the US, which prefers self-regulated industries and voluntary codes of conduct (Feijóo et al., 2020). These positions also sit uneasily with the current geopolitical dynamics between the US and China, characterized by increased decoupling of global technology supply chains, an ongoing chip war, and growing trade tensions (Chun et al., 2024; Mokry & Gurol, 2024). As the US seeks to maintain its techno-economic leadership and China pushes to assert its technological independence while eyeing geopolitical power, both have used tariffs, export controls, and sanctions to control strategic technologies such as advanced semiconductors, batteries, and electric vehicles. Faster innovation has thus become ever more instrumental to their national security and economic competitiveness, in a geopolitical landscape where the EU also pursues open strategic autonomy. Hence, the centrality of AI to the ambitions of all three entities has made AI governance the new frontier for navigating global competition, making cooperation difficult.

The lack of cooperation due to geopolitical ambitions is further influenced by a divide along ideological lines and modes of international cooperation. The EU and the US are more sympathetic to collaborating with allies and democratic partners on AI governance, as their approaches advocate democratic values and alignment with human rights principles (Mokry & Gurol, 2024). In contrast, China’s divergent social values and authoritarian government-citizen relationship have positioned it against most Western countries. For instance, China’s state-led approach to AI development has seen it use AI technologies for mass surveillance and social scoring (Feijóo et al., 2020). Such acts, which undermine democratic values, individual privacy, and fundamental rights, have stood in the way of China’s endeavors to be a pacesetter in setting global standards for AI ethics and governance (Cheng & Zeng, 2022). Hence, as the EU and the US deepen their transatlantic and democratic partnerships on AI governance—for instance, through the recent convening of the International Network of AI Safety Institutes—China has resorted to seeking new partners in authoritarian regimes like Russia and in developing countries through its Belt and Road Initiative (Mokry & Gurol, 2024). For instance, China has led the formation of an AI Study Group among BRICS (an intergovernmental grouping of Brazil, Russia, India, China, South Africa, and other member countries); emphasizes equality in AI development and use across countries; and advocates for the increased representation of developing countries in global AI governance, hoping to woo them to its side. Aside from their ideological differences, the modes of international cooperation also vary among the three entities. While the US and the EU leverage existing regional and international organizations, China prefers creating new alternative institutions (Mokry & Gurol, 2024). These dynamics may hasten bloc formation, worsening an already fragmented global AI governance landscape.

The competitive approach of these entities to AI governance has made it challenging to arrive at a commonly agreed global AI governance mechanism, at least one to which all three actors subscribe (Naidoo, 2021; ÓhÉigeartaigh et al., 2020). However, it is not all doom and gloom. Mokry and Gurol (2024) observed that the US, China, and the EU share a similar perspective on applying AI to solve global challenges and on working with international organizations (IOs). China and the EU emphasize addressing sustainability issues with AI technologies, while the US is less precise. Also, China has shown a willingness to work with the UN and has been more vocal about contributing to the formulation of international standards, while the EU and the US support the efforts of the OECD and other IOs, although both always aim to lead the standard-setting processes. These glimmers indicate some pathways for coordinating global AI governance, but they must be carefully cultivated.

5 The risks of non-cooperation

The risks facilitated by AI systems are well documented in the AI governance literature. Scholars have explored the wide-ranging impacts of AI systems on privacy (Dhirani et al., 2023); bias and discrimination (Corrêa & Fernandes de Oliveira, 2021); economic and environmental health (Garvey, 2018); democratic processes and political institutions (Erman & Furendal, 2022); inequalities between Global North and Global South countries (Gehl Sampath, 2021; Sinanan & McNamara, 2021); spirituality and psycho-physiological wellbeing (Garvey, 2018); military applications and global security (Kolade, 2024; Sepasspour, 2023); large-scale social systems (Gruetzemacher et al., 2023); and a host of other emerging issues.

However, some risks are particularly amplified or generated by the non-cooperation of the three key actors. As non-cooperative actions between the US, EU, and China grow, there is a possibility of a heightened nationalist approach to AI developments, with each prioritizing national interests over global concerns. This may lead to more geopolitical fragmentation, diversion of attention from global issues like sustainability to short-term national economic interests, and protectionist policies that embrace mercantilism, disrupt global markets, and worsen inequalities among nations (Gerlich, 2024). Sepasspour (2023) noted that the US-China AI governance rivalry could spill over into other international conflicts as more countries are encouraged to pursue AI developments solely for their own advantage in an ultra-competitive landscape. This would lead to power concentration among leading nations as AI benefits are hoarded by a few even though the costs could be widely dispersed.

Another impact would be the weakening of an increasingly hollowed-out multilateral system (Hale et al., 2013). The lack of coordination between the US, China, and the EU on AI governance poses a barrier to a common global approach and could foster a multipolar world with different countries lining up behind each key actor in diverse ‘minilateral’ groupings and regional arrangements (Haqqani & Janardhan, 2023; Kahler, 1992; Sepasspour, 2023). This would stand in the way of multilateral deliberations and agreements on global AI governance and enable ‘forum shopping’ as countries strategically align with the least restrictive regime for realizing their interests (Hofmann, 2019).

The lack of cooperation between the three actors also impacts their ability to address cross-border risks from advanced AI systems. Kolade (2024) observed that joint research initiatives could help countries mitigate AI-driven cyberattacks. Without coordination, it becomes difficult to manage risks from open-source AI models and nefarious applications with individuals acting across national borders (Government Office for Science, 2023). Wider societal-scale risks from frontier AI—highly capable general-purpose AI models—could also be mitigated through coordinated international efforts. Gruetzemacher et al. (2023) proposed that an international consortium would help to address such risks by bringing diversity to frontier AI risk evaluation and plugging gaps resulting from siloed efforts; ensuring representation and pooling together expertise and regulatory resources; allocating efforts optimally and reducing duplication; and sharing knowledge and information for policymaking and standardization. All these could be challenging without cooperation between the leading three actors with the most capabilities to achieve them.

Finally, a lack of cooperation could lead to a race-to-the-bottom dynamic where each actor prioritizes speed over safety, increasingly taking risks and cutting corners to be the first to achieve advancements in AI systems (Bengio et al., 2024; Cave & ÓhÉigeartaigh, 2018). Coordination would help develop AI safety standards and agree on red lines that should not be crossed in AI developments (International Dialogues on AI Safety, 2023). International coordination among these three actors with the most advanced AI capabilities is essential to avoid an arms race and address military and dangerous AI applications such as LAWS, bioweapons, and other capability issues such as value alignment and control problems (Bengio et al., 2024). A competitive race among the US, China, and the EU to achieve Artificial General Intelligence (AGI) could expose humanity to existential and global catastrophic risks (Bucknall & Dori-Hacohen, 2022; Government Office for Science, 2023). Such long-term risks must be collectively managed while also addressing immediate concerns.

All these risks make an urgent case for global cooperation in the governance of AI developments, especially the need for collective action by the US, China, and the EU.

6 Collective action

Mancur Olson’s book The Logic of Collective Action (1965) redefined the general understanding of public choice when it was published (Sandler, 2015). According to Sandler, Olson’s questioning of the ability of groups to advance the collective interests of members spurred further research into group behavior by scholars in the field of public choice. Others have since advanced Olson’s work in this area. One influential scholar is Elinor Ostrom whose work conceptualized structural variables for fostering collective action in a group (Ostrom, 2010).

Collective action situations require two or more people to act in concert to achieve a shared goal. However, as Olson emphasized, this could be challenging. Therefore, collective action problems arise when group members decide to pursue their short-term individual interests at the expense of the long-term shared interests of the group, yielding suboptimal benefits and leaving everyone worse off than if they all had acted cooperatively. The occurrence of collective action is dependent on the nature of goods at stake for the group. Goods can be classified using the two features of excludability and rivalry (Ostrom & Ostrom, 1978). Excludability of goods refers to the possibility of limiting the enjoyment of the goods to some while denying others the same. Rivalry of goods refers to whether the goods are reduced for others as they are being used by another. Where the goods are excludable and rivalrous, these are known as private goods, e.g., clothes. Where the goods are excludable but not rivalrous, these are known as club goods, e.g., cable television. Where the goods are not excludable but are rivalrous, these are known as common goods, e.g., open-access fish ponds. Where the goods are neither excludable nor rivalrous, these are known as public goods, e.g., public parks. According to public choice theory, common goods and public goods are the main candidates for collective action problems because of their propensity for accommodating divergent interests (de Neufville & Baum, 2021).
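The typology above reduces to a simple two-feature decision rule. The sketch below is a minimal, illustrative Python rendering of that rule (the class and function names are my own); it reuses the examples from the text and anticipates the classification of global AI governance argued for in Section 7.

```python
from dataclasses import dataclass

@dataclass
class Good:
    """A good characterized by the two features used in the typology above."""
    name: str
    excludable: bool  # can some actors be denied enjoyment of the good?
    rivalrous: bool   # does one actor's use reduce what is left for others?

def classify(good: Good) -> str:
    """Map the two features onto the four categories of goods."""
    if good.excludable and good.rivalrous:
        return "private good"
    if good.excludable and not good.rivalrous:
        return "club good"
    if not good.excludable and good.rivalrous:
        return "common good"
    return "public good"

# Examples drawn from the text; global AI governance is classified in Section 7.
for g in [
    Good("clothes", excludable=True, rivalrous=True),
    Good("cable television", excludable=True, rivalrous=False),
    Good("open-access fish pond", excludable=False, rivalrous=True),
    Good("public park", excludable=False, rivalrous=False),
    Good("global AI governance (as argued in Section 7)", excludable=False, rivalrous=True),
]:
    print(f"{g.name}: {classify(g)}")
```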

Elinor Ostrom took a special interest in common-pool resources (CPRs)—common goods often existing in natural settings e.g., water resources, forests, and fisheries—and studied different cases of how collective action problems were solved. Synthesizing the fundamental similarities between the robust institutions constituting successful cases, she developed seven design principles and an eighth principle for larger, more complex cases. Ostrom defined a design principle as “an essential element or condition that helps to account for the success of these institutions in sustaining the CPRs and gaining the compliance of generation after generation of appropriators to the rules in use” (1990, p. 90). Due to space constraints, I will briefly list the design principles in Table 1 below; see Ostrom (1990) for a deeper explanation of each principle.

Table 1 Design Principles from long-enduring CPR institutions, adapted from Ostrom (1990)

1. Clearly defined boundaries
2. Congruence between appropriation and provision rules and local conditions
3. Collective-choice arrangements
4. Monitoring
5. Graduated sanctions
6. Conflict-resolution mechanisms
7. Minimal recognition of rights to organize
8. Nested enterprises (for CPRs that are parts of larger systems)

Jagers et al. (2020) expanded the concept of collective action to large-scale situations such as climate change, biodiversity loss, and nuclear proliferation. They defined the factors that promote collective action as facilitators, e.g., trust, reciprocity, reputational stake, punishment, and willingness to accept the cost of punishing free riders. Facilitators are often interconnected and mutually reinforcing. They also identified the defining characteristics of large-scale collective action situations, such as group size, spatial distance, temporal distance, and complexity, and asserted that the interaction of these characteristics produces factors that inhibit collective action, called stressors. These include anonymity, lack of accountability, heterogeneity, risk, and uncertainty. Considering these two sets of factors, facilitators and stressors, Jagers et al. developed three premises of large-scale collective action, listed in Table 2.

Table 2 Premises for large-scale collective action as defined by Jagers et al. (2020)

Based on these premises, they developed an analytical framework in which third-party interventions with sufficient capacity and legitimacy are introduced to solve the large-scale collective action problem. However, they recognized that the creation and maintenance of such third-party interventions can constitute a second-order collective action problem.

These insights from the collective action literature are relevant to the AI governance situation between the US, China, and the EU and can inform global AI governance efforts, as shown below.

7 Global AI governance and collective action

The non-cooperation of the US, China, and the EU constitutes a collective action problem. While studies like de Neufville and Baum (2021) and Grace (2016) have examined AI developments from a collective action perspective, no study, to the best of my knowledge, has examined AI governance as influenced by these three actors from a collective action perspective. I frame this situation as a collective action problem because cooperation between the three in global AI governance is required to yield an optimal outcome for humanity, advancing AI developments along beneficial paths while avoiding the risks of non-cooperation. With these actors pursuing conflicting agendas, the likelihood of collaboration decreases, leading to undesirable outcomes for all. There is, therefore, a need for shared AI governance goals to be realized through the collective action that global AI governance fosters.

Focusing on the dynamics between the US, China, and the EU, global AI governance as currently constituted can be conceptualized as a common good. Since there are no formal restrictions, and influencing the governance of AI developments at a global level is open to any of the three actors, global AI governance is non-excludable. However, once one of the actors becomes dominant in governing AI developments globally, the other two are necessarily precluded from enjoying the same dominant influence, which makes global AI governance rivalrous. On this basis, global AI governance can be classified as a common good, and insights from CPR governance are therefore useful in proffering solutions to the collective action problem of global AI governance.

Jagers et al.’s (2020) analytical framework for large-scale collective action can also contribute to defining suitable interventions for the lack of cooperation between the US, China, and the EU on global AI governance. Although I focus on the three leading actors, the numerous actors involved in international AI governance activities across the globe and the complexity of the issues qualify the situation as a large-scale collective action problem. This, in conjunction with the current state of affairs between the US, China, and the EU on AI governance, leads to the conclusion of the third premise in Table 2, i.e., ‘third-party’ interventions would have to be generated to produce collective action. Insights from the framework can inform how such interventions could be designed. Table 3 below shows the stressors and facilitators that third-party interventions would have to prioritize for global AI governance. These are not exhaustive but are based only on the information in Section 4.

Table 3 Stressors and facilitators of the collective action problem between the US, China, and the EU in global AI governance

Stressors: (1) competing leadership ambitions; (2) conflicting approaches to AI governance; (3) geopolitical tensions; (4) ideological differences; (5) divergent preferences for international cooperation.
Facilitators: (1) common perspective on AI for solving global challenges; (2) willingness to work with international organizations.

While the exact forms that third-party interventions should take are not specified in this paper, Ostrom’s design principles can inform the considerations. In applying the principles to design effective institutions for global AI governance, the stressors must be targeted and the facilitators cultivated. I expound on the application of each design principle (DP) below.


DP #1: Clearly defined boundaries. Global AI governance is a common good; hence, it is non-excludable. Therefore, boundary definition should be less about keeping out participants and more about identifying who has a right to do what. For instance, multistakeholder governance mechanisms should be encouraged to ensure the representation and participation of various AI governance stakeholders from nations, industry, academia, civil society, the public sector, citizens, etc. However, considering stressor #1 ‘competing leadership ambitions’ of the US, China, and the EU, these three could be incentivized to cooperate and contribute to the governance process by granting them special privileges for influence in recognition of their leadership in AI developments. Their participation could be further motivated by leveraging facilitator #1 ‘common perspective on AI for solving global challenges.’ This could mean exploring issues that all three agree on as an entry point around which global AI governance can be organized.


DP #2: Congruence between appropriation and provision rules and local conditions. In establishing rules for governing AI globally, the local contexts of AI governance stakeholders and the levels at which governance issues manifest must be considered. Centrally determined rules (policies, regulations, agreements, principles, etc.) expected to apply universally are less likely to be effective (Ostrom, 2009a, 2009b). Stressor #4 ‘ideological differences’ between the US, China, and the EU particularly stands in the way of the successful implementation of such rules. Hence, a global governance mechanism must be complemented by regional and local governance mechanisms that account for variations in culture, values, political systems, economic development, priorities, and other factors relevant to AI governance. Further, the burden of continued maintenance of the governance mechanisms should be distributed according to the level of influence on AI governance enjoyed by each stakeholder.


DP #3: Collective-choice arrangements. Even if certain AI governance rules are decided at a global level, provisions should be made for stakeholders at different levels and in different contexts to be able to collectively modify them and also set new rules. This facilitates the evolution of institutions for governance—an essential success factor for governing commons (Dietz et al., 2003)—to cope with the changing technological landscape accentuated by rapid AI developments and to enhance sensitivity to the settings in which the rules are operationalized. This is particularly important considering stressor #2 ‘conflicting approaches to AI governance’ between the US, China, and the EU, which would render these actors’ implementation of rigid centrally-determined rules infeasible.


DP #4: Monitoring and DP #5: Graduated sanctions. These two DPs are intertwined and should be considered together. The lack of enforcement has been the weakness of many AI governance rules proposed by different authorities. To facilitate enforcement, AI governance mechanisms should incorporate compliance monitoring and sanctions for non-compliance. Monitoring and sanctioning, however, are not desirable activities for many stakeholders, firstly because of their costs, and secondly because of the ‘free rider problem’—certain stakeholders will not contribute to these activities since everyone benefits as long as some take responsibility. This can constitute a second-order collective action problem. To resolve this, stakeholders’ interests should be leveraged as incentives and nudges. For instance, the US and China would gladly monitor each other since each is interested in knowing what the other is doing, although stressor #3 ‘geopolitical tensions’ may pose challenges that would have to be navigated. Mutual monitoring can be further motivated by agreeing on rewards for effective monitoring, but monitors should be assessed to prevent abuse and should be accountable to the collective to prevent gaming of the system.

Sanctions should be commensurate with the circumstances of non-compliance to avoid resistance and evasion (Dietz et al., 2003). First-time violators of AI governance rules may be penalized leniently, and sanctions adjusted progressively based on compliance history. Any mechanism adopted for monitoring and sanctioning should be collectively agreed upon as capable and legitimate.
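As a rough illustration of how a graduated sanctioning rule might be operationalized, the sketch below escalates sanction severity with an actor’s compliance history. The tiers, record structure, and function names are hypothetical placeholders; the actual instruments and thresholds would have to be agreed collectively by stakeholders.

```python
from dataclasses import dataclass

# Hypothetical sanction tiers, ordered from lenient to severe; the real
# instruments (warnings, fines, suspension of privileges, etc.) would be
# collectively agreed, not fixed in advance like this.
SANCTION_TIERS = ["warning", "minor penalty", "major penalty", "suspension of privileges"]

@dataclass
class ComplianceRecord:
    actor: str
    prior_violations: int = 0

def graduated_sanction(record: ComplianceRecord, breach_severity: int = 0) -> str:
    """Pick a sanction tier that escalates with compliance history.

    First-time violators are treated leniently; repeat offenders move up the
    tiers, and `breach_severity` lets an egregious offense start higher.
    """
    tier = min(record.prior_violations + breach_severity, len(SANCTION_TIERS) - 1)
    return SANCTION_TIERS[tier]

# Usage: a first-time violator receives a warning; a third offense escalates.
record = ComplianceRecord(actor="hypothetical_member_state")
print(graduated_sanction(record))   # warning
record.prior_violations = 2
print(graduated_sanction(record))   # major penalty
```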

Information obtained from monitoring and sanctioning activities should be disseminated to all stakeholders to foster confidence in the system, publicly track compliance, and assure those complying that they are not suckers. Information provision to stakeholders is an essential component of effective governance mechanisms because it informs learning for the evolution of rules, the reduction of uncertainty and risk, and the assessment of reputation, trust, and reciprocity—all of which foster cooperation (Dietz et al., 2003; Jagers et al., 2020; Poteete et al., 2010). Therefore, considerable resources should be devoted to infrastructure for providing trustworthy, empirical, scientific, representative, and context-relevant information on AI governance to stakeholders. Facilitator #2 ‘willingness to work with international organizations’, common to the US, China, and the EU, makes entities like the OECD and the newly proposed International Scientific Panel on AI by the UN (United Nations AI Advisory Board, 2024) potential instruments for information provision, and other opportunities for their contribution to collective action should be cultivated.


DP #6: Conflict-resolution mechanisms. Given all the stressors, particularly stressor #4 ‘ideological differences’ and #5 ‘divergent preferences for international cooperation’ between the US, China, and the EU, conflict is inevitable when operationalizing the governance mechanisms in a collective action approach. However, conflicts can be productive for learning and change (Ostrom, 1993; Stern, 1991). Hence, the mechanisms adopted for AI governance should include easily accessible mechanisms for deliberating and resolving conflicts and infractions. These should be supported by inclusive avenues for dialogue and analytic deliberation (Dietz et al., 2003) informed by the information-providing infrastructure to build trust and sustain consensus while allowing for adaptation (Dietz et al., 2003; Ostrom, 2009a, 2009b). Influential stakeholders among the collective may lead conflict resolution processes.


DP #7: Minimal recognition of rights to organize. To ensure AI governance mechanisms designed through collective action are effective, they must be supported by relevant jurisdictions. Otherwise, they risk being undermined or overridden by external authorities (Ostrom, 1990). This means the legitimacy of agreed-upon institutions for AI governance at different levels must be recognized and upheld by prevailing external authorities, including national and subnational governments, to enable self-enforcement by the stakeholders.


DP #8: Nested enterprises. Global AI governance arrangements that rely on a single centralized governance mechanism at the global level will most likely not work (Dietz et al., 2003; Ostrom, 2009a, 2009b), especially considering the collective action problems already discussed. Instead, multilevel governance arrangements corresponding to the different scales at which AI governance issues manifest should be adopted. For this purpose, a polycentric order in which “many elements are capable of making mutual adjustments for ordering their relationships with one another within a general system of rules where each element acts with independence of other elements” seems more suitable (Ostrom, 1999, p. 57). Interconnected but independent institutions operating at different levels and nested within one another (e.g., supranational-national-subnational or global-regional-local) would prove more effective for governing AI developments collectively than a single centralized global AI governance institution. Such a system allows learning from experimentation and comparative assessment, builds trust and responsiveness to context, and reduces opportunistic behavior (Ostrom, 2009a, 2009b). Governance mechanisms, as appropriate to a level, should be collectively determined by stakeholders at that level. The principle of subsidiarity, i.e., decentralizing issues to the lowest level of governance capable of handling them satisfactorily (Marshall, 2008), should be embraced while maintaining cross-level interactions.
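The sketch below illustrates, in a minimal way, how a nested, subsidiarity-based arrangement could route issues. The level names, competencies, and escalation rule are illustrative assumptions rather than a proposal for specific institutions; the point is only that an issue is handled at the lowest level capable of addressing it and escalates upward otherwise.

```python
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class GovernanceLevel:
    """One level in a nested (e.g., global-regional-local) arrangement."""
    name: str
    competencies: Set[str]                      # issue areas this level can handle satisfactorily
    parent: Optional["GovernanceLevel"] = None  # next level up; None at the top

def assign_issue(issue: str, lowest: GovernanceLevel) -> str:
    """Subsidiarity: start at the lowest level and escalate only if needed."""
    level: Optional[GovernanceLevel] = lowest
    while level is not None:
        if issue in level.competencies:
            return level.name
        level = level.parent
    return "unresolved: requires a new cross-level agreement"

# Hypothetical nesting and competencies, for illustration only.
global_level = GovernanceLevel("global", {"frontier AI red lines", "cross-border model misuse"})
regional = GovernanceLevel("regional", {"risk-based market rules"}, parent=global_level)
national = GovernanceLevel("national/local", {"sector-specific deployment rules"}, parent=regional)

print(assign_issue("sector-specific deployment rules", national))  # handled at national/local
print(assign_issue("frontier AI red lines", national))             # escalates to global
```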

I have outlined considerations for governing AI globally and collectively based on Ostrom’s design principles. There is no hard requirement that all these principles be applied in designing institutions for global AI governance for them to be successful (Ostrom, 2009a, 2009b). However, global AI governance endeavors that adhere to most of the principles have a higher chance of being robust, enduring, and effective in resolving the collective action problem between the US, China, and the EU. Hence, ongoing global AI governance efforts by international organizations like the OECD, UN, Global Partnership on AI (GPAI), and other multilateral initiatives should consider incorporating most of these principles.

8 Conclusion

This paper reviewed the state of global AI governance, focusing on the non-cooperation between the three key players—the US, China, and the EU—which poses barriers to achieving global coordination on AI governance and various risks to humanity. The collective action literature holds useful insights for addressing this situation. Framing the situation as a collective action problem, global AI governance can be classified as a common good. I have applied insights from Ostrom’s (1990) study of common-pool resources (CPRs) and Jagers et al.’s (2020) study on large-scale collective action to outline considerations for the design of institutions for the global governance of AI. Applying Ostrom’s eight design principles for CPRs, combined with Jagers et al.’s evaluation of facilitators and stressors to inform third-party interventions for large-scale collective action, yields three major recommendations for current global AI governance efforts. First, while common agreements at a global level are important to set the goals for collective action, they will not be successfully realized by a single centralized global AI governance mechanism. Rather, a polycentric multilevel arrangement in which several AI governance mechanisms interact but are operationalized independently would make for a more robust, enduring, and effective solution to global AI governance, given the situation between the US, China, and the EU.

Second, enforcement is a key challenge of existing efforts targeting global coordination of AI governance, such as the UNESCO Recommendation on the Ethics of AI, OECD AI Principles, and Universal Guidelines for AI. Monitoring and sanctioning mechanisms collectively agreed upon by, mutually enforced by, and accountable to AI governance stakeholders should be adopted. Information provision is an essential function that could be supported by international organizations like the UN, OECD, and GPAI.

Third, easily accessible mechanisms for inclusive deliberation and conflict resolution should be made available to foster trust and sustain consensus, leverage conflict for learning, and inform the adaptation of governance mechanisms.

This paper is necessarily limited in terms of the specificity of governance mechanisms since I have focused more on the considerations for institutional design rather than specifying the institutions themselves. Future studies could build on these considerations by exploring how a polycentric multilevel arrangement of governance mechanisms could be implemented for global AI governance to address the non-cooperation between the US, China, and the EU. Others could determine which governance instruments and approaches are most suitable for operationalizing effective monitoring and sanctioning mechanisms, as described in this paper. Finally, further research could be conducted on how information provision should be structured in AI governance mechanisms for collective action and the potential roles of existing international organizations and multilateral initiatives.