
4.1 A Constitutional ‘Voice’ and ‘Bridge’

The rise and diffusion of social media have generated novel communicative spaces that blur the boundaries between public and private. On the one hand, social media represent inherently private spaces insofar as they are owned by private companies, and interactions within them are mainly regulated through private governance instruments such as contracts, terms of service, community standards or internal policies. On the other hand, some structural characteristics of social media, from ease of access to interactivity and the horizontal flow of communications, have led many scholars to conceive of them as “an infrastructure capable of revitalizing and extending the public sphere” (Santaniello et al. 2016) after its decline under consumerism and the rise of mass media, as depicted by Habermas (1992). Some researchers saw social networks as ‘third places’ that, like the Habermasian ‘coffee shops’ of eighteenth-century England, serve as a new, easily accessible forum for public life, promoting social interaction and political debate (Chadwick 2009; Farrell 2012).

Leaving aside the question of whether social media play a positive role in democracy by fostering participation, civic engagement and people’s empowerment against political and elite structures (Bimber et al. 2012), or whether they instead endanger the democratic process by favouring manipulation, extremism and polarisation (O’Connor and Weatherall 2019; Benkler et al. 2018), there is no question that they are increasingly relevant in forming public opinion. Consequently, social media companies’ rules end up shaping the limits of what can be considered an acceptable exercise of freedom of speech for billions of people, thereby carrying out a de facto intrinsically public function (Jorgensen and Zuleta 2020; Celeste 2021a). Additionally, social media platforms have the capacity to blur the boundaries of the dichotomy between the public and private dimensions. As transnational companies, these platforms facilitate cross-border communication and contribute to softening frontiers and demarcations within and outside nation states as well as between jurisdictions and territories, thus making sovereignty claims more complex and uncertain (Celeste 2021b; Celeste and Fabbrini 2020). As Grimm (2016) pointed out, this twofold erosion of state authority caused by transnational modes of governance poses a serious challenge to the constitutional order and its guarantees. Constitutionalism, in its traditional sense, requires the “concentration and monopoly of public power that allows a comprehensive regulation” on a territory and the identification of a polity acting as “pouvoir constituant” and establishing forms of self-limitation in the exercise of public power (Santaniello et al. 2018).

The “constitutionalisation” of international law (De Wet 2006) has been proposed as the remedy to the pitfalls caused by transnational models of governance. This perspective, rather than looking for a “legitimatory monism and an unilateral form of law-production by a political subject” (Moller 2004, 335), focuses on “continuity, legitimatory pluralism and the spontaneous evolution of a legal order” (Idem). In such a view, some norms of international law may fulfil constitutional functions and then acquire a constitutional quality, integrating and verticalising the international order (Gardbaum 2008; De Wet 2006), in a kind of “compensatory constitutionalism” (Peters 2006) that completes and fills the gaps created by globalisation in domestic constitutional systems (Santaniello et al. 2018; Celeste 2022a).

While theories about the constitutionalisation of international law identify interesting tendencies and solutions to counterbalance the erosion of nation-state authority, they have little to say about how to safeguard constitutional guarantees and fundamental rights within transnational private (or mainly private) regimes carrying out public functions. International human rights law is far from constitutionalising the international order and, as seen in the previous chapter, due to its state-centred design, it does not directly apply to the private actors that own and rule social media platforms. Furthermore, its generic formulation of principles and norms appears unfit to regulate a complex socio-technical environment such as platform content moderation, which requires a rather granular and dynamic system of rules.

Against this backdrop, the recent proliferation of civil society initiatives advocating for human rights standards on social media platforms may not be a mere coincidence. These efforts are part of a larger movement to articulate rights and principles for the digital age. The output of these initiatives often takes the form of non-binding declarations that intentionally adopt a constitutional tone and are thus referred to as “Internet bills of rights” (Celeste 2022b). These declarations can be seen as expressions of the voice of communities seeking to redefine core constitutional principles in light of the challenges posed by digital society, resulting in a new form of “digital constitutionalism” (Santaniello and Palladino 2022). The growing number of civil society digital constitutionalism initiatives in the content governance field can be conceived as a reaction, on the one side, to the increasing power of social media platforms in shaping public opinion and, on the other side, to the impracticability of directly applying international human rights law standards within platforms’ transnational private governance regimes. These efforts may be seen as an attempt to bridge constitutional thinking with the everyday governance of social media platforms (Palladino 2021b).

In this regard, Gunther Teubner’s theory of societal constitutionalism can provide a sound conceptual framework for understanding how civil society’s bills of rights can play this role. Building on Luhmann’s theory of social systems (1975) and its subsequent developments by Sciulli (1992) and Thornhill (2011), the German scholar takes the dynamics of social differentiation as his starting point. From this point of view, the more a social subsystem becomes autonomous, the more it develops ‘its own systemic logic based on a specific means of communication’ that makes interaction within the subsystem possible and meaningful (such as money in the economic subsystem and the law in the legal subsystem). As the activities of a subsystem become relevant to the social system as a whole, they give rise to what Teubner calls “expansionist” and “totalizing” tendencies (Teubner 2011, 2012), meaning that the subsystem can impose its logic on the other social spheres in order to reproduce itself, threatening the integrity and autonomy of individuals and communities.

According to this perspective, the rise of the Internet and digital technologies in our societies can be conceived as a process of autonomisation of an emerging digital subsystem. In the wake of Lessig (2006), we can identify code as the communicative means of the digital subsystem, meaning by this not a particular programming language but rather the socio-technical architecture which, by combining software, hardware and human components, makes interaction between different social actors in the digital world possible, shaping their experience and disciplining their behaviour. While code constitutes the means of communication of the digital subsystem, digitisation or datafication (George 2019) can be interpreted as its logic: an incessant process of converting social reality into digital information so that it can be further processed and elaborated by computational systems to extract new information with added value.

The constitutionalisation of a subsystem occurs when frictions with other social spheres bring out “fundamental rights” understood as “social and legal counter-institutions” (Teubner 2011, 210). This makes it possible, on the one hand, to unleash the “potential of highly specialized dynamics” of the subsystem and, on the other hand, to institutionalise self-limitation mechanisms that preserve the integrity and autonomy of individuals and other social spheres (Teubner 2004, 12). From this point of view, fundamental rights perform both an inclusive function, guaranteeing universal access to the specific ‘means of communication’ of the subsystem and therefore to the related rule-making processes, and an exclusive function, in the sense of defining the boundaries of the subsystem’s sphere of action. A qualifying aspect of Teubner’s theory is the idea that fundamental rights can be constituted within a social subsystem only through a process of generalisation and re-specification, which means that their functions, to be effective, must take place in the ‘communication medium’ of the subsystem and be inscribed in its operating logic. Furthermore, Teubner’s societal constitutionalism appears as a hybrid constitutionalisation process, in which the self-limitation of a subsystem is the result of the pressures, resistances and constraints posed by other social spheres.

These considerations indicate, firstly, that for fundamental rights to be truly effective in the social media environment, they must be translated and incorporated into platforms’ socio-technical architecture, including programming, algorithms, internal policies and operational routines (Palladino 2021a, 2022). Secondly, they point out that limiting mechanisms for platforms cannot be based solely on forms of self-regulation or on state regulation. Certainly, states can impose constraints on Big Tech, both through ordinary legislation and by exercising a “shadow of hierarchy” over self-regulatory processes, threatening to impose heavy regulation on a sector if certain standards are not met (Héritier and Lehmkuhl 2008). However, for these mechanisms to be effective and to overcome the obstacles posed by the private, transnational and infrastructural nature of digital processes, they must be complemented and accompanied by the joint action of a plurality of actors (Palladino 2021b).

Among the actors involved in this process of hybrid constitutionalisation, civil society organisations and their Internet Bills of Rights play a crucial role (Celeste 2019). In the first place, civil society carries out a fundamental ‘watchdog’ function, documenting human rights violations by both states and corporations, giving a voice to ordinary users, vulnerable groups and minorities, and shedding light on the human rights implications of platform policies and functionalities as well as of new legislation. Secondly, civil society organisations can exert pressure on both states and companies to adopt proper instruments and mechanisms to comply with human rights standards, thus starting the above-mentioned process of generalisation and re-specification of fundamental rights for the digital world. Indeed, by drafting Internet Bills of Rights, NGOs and activists can draw on a consolidated corpus of norms and reflections elaborated in the international human rights law ecosystem and apply them to the concrete reality of social media platforms. Drawing on their expertise in human rights violations, civil society organisations can identify what kinds of practices and operations need to be banned, fixed or introduced, defining the rules and operational practices for this purpose. Of course, this is just a first step in the process of translating human rights standards into the socio-technical architecture of platforms, which requires further phases of elaboration by legislators, technical communities and platform owners themselves before becoming fully implemented arrangements. Nevertheless, it is a crucial step to foster state intervention and push companies to take their responsibilities into account, creating a convergence of expectations around a common normative framework. The more civil society organisations engage in global conversations and networking, the more likely they are to converge on a series of norms and practices for social media platforms. Insofar as this happens, civil society can facilitate the emergence of a global standard, influencing both national legislation and companies’ practices. Of course, this kind of outcome cannot be taken for granted. Differences in cultural and political backgrounds or in the social contexts in which they operate may lead different human rights defenders to pay more attention to some specific issues rather than others, to conceptualise the same problems differently or to prefer alternative approaches.

This chapter investigates to what extent civil society’s Internet Bills of Rights have so far been able to bridge international human rights law and platform governance, translating human rights standards into more granular norms for the social media platform environment. The examination also considers the extent to which global civil society efforts converge on a shared normative framework, one with the potential to shape both state regulations and corporate policies and to contribute to the development of a global standard.

4.2 Civil Society and Internet Bills of Rights

In order to investigate how civil society is contributing to the constitutionalisation of social media content governance, we performed a content analysis on a corpus of Internet Bills of Rights extracted from the Digital Constitutionalism Database. The Digital Constitutionalism Database is an openly accessible and interactive online resource resulting from the joint efforts of researchers taking part in the Digital Constitutionalism Network based at the Center for Advanced Internet Studies (Bochum, Germany).Footnote 1 The database collects more than 200 documents (Internet Bills of Rights; declarations of digital rights; resolutions, reports and policy briefs containing recommendations on digital rights) drafted by different kinds of actors (civil society organisations, parliaments, governments, international organisations, business companies, multistakeholder initiatives) from 1996 onwards, engaging with the broad theme of the exercise and limitation of power on the Internet and seeking to advance a set of rights, principles and governance norms for the digital society.

The Digital Constitutionalism Database was screened in order to select documents drafted by civil society groups discussing online content governance, conceived as the set of rules and practices through which decisions are made about the hosting, distribution and display of user-generated content by Internet service providers. Since social media platform content moderation is a relatively recent issue, the broader concept of content governance has been used in order to draw lessons from the general principles of older documents and to monitor trends over time.

A total of 40 documents were identified based on the established selection criteria. The geographic and temporal distributions of the selected documents are presented in Figs. 4.1 and 4.2, respectively. As shown, attention towards the relationship between content governance and digital rights has, not surprisingly, grown together with the rise of social media platforms from the second half of the 2000s. The majority of the documents in our corpus were produced by organisations that assert a transregional or global reach. These entities comprise coalitions of civil society groups from across the globe, such as the Association for Progressive Communications and the Just Net Coalition, or individual civil society organisations that maintain offices on various continents, with personnel and governing structures reflecting a variety of backgrounds, including ARTICLE 19, Access Now and Amnesty International. This circumstance may facilitate the emergence of a cohesive framework at the global level, given that these global civil society associations constitute an exercise in global networking that can synthesise diverse experiences, concerns and claims from various contexts. However, upon examination of the organisations that are more closely associated with a particular national or regional background, we observe that very few cases in our corpus originate from Africa and Asia. This may be attributed to challenges in collecting documents drafted in non-European languages, to the resource constraints of these organisations and to the obstacles they face when operating in non-democratic countries.

Fig. 4.1 Geographical distribution of the analysed documents (global organisations: 60%; Europe: 12.5%; North America: 7.5%; Oceania: 7.5%; Asia: 5%; Africa: 2.5%; approximate values)

Fig. 4.2 Distribution over time of the analysed documents (1997–2000: 3; 2001–2005: 1; 2006–2010: 8; 2011–2015: 16; 2016–2020: 13)

With regard to the content of the documents in our corpus, constitutional and governance principles have been hand-coded with the NVivo software using an inductive methodology (Bazeley and Jackson 2013; Kaefer et al. 2015). In the first stage, principles were coded as closely as possible to how they appeared in the text. At a later stage, synonymous items were merged and principles were aggregated into a hierarchical system of broader categories. It is worth noting that this exercise of coding and categorisation involves an unavoidable degree of overlap and redundancy. On the one hand, several principles detected in the texts frame the same issue in slightly different ways, the same principle is sometimes employed to highlight different features of the same concept, and some principles cover part of the semantic area of a broader one. On the other hand, the categories created to aggregate the close-to-text codes reflect the authors’ interpretative framework and are designed to emphasise distinctions and aspects deemed relevant by the researchers. Beyond the limits of the qualitative approach, redundancy appears to be a characteristic feature of digital rights themselves, since “these rights and principles are more often than not interconnected, interdependent, mutually reinforcing, and in some cases even in conflict with one another” (Gill et al. 2015).
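
By way of illustration, the merging-and-aggregation step can be thought of as a mapping from close-to-text codes to broader categories, followed by a per-document count. The Python sketch below is a minimal illustration of that logic; the code names and category labels are invented for the example and do not reproduce the actual NVivo coding scheme.

```python
from collections import Counter

# Hypothetical close-to-text codes, mapped onto broader categories
# analogous to those used in Table 4.1 (labels are illustrative).
CODE_TO_CATEGORY = {
    "free speech": "freedom of expression",
    "freedom of expression": "freedom of expression",
    "right to appeal": "procedural principles",
    "due process": "procedural principles",
    "child protection": "substantive principles",
}

def aggregate(document_codes):
    """Merge synonymous codes and count documents per broader category."""
    counts = Counter()
    for codes in document_codes:
        # A document counts once per category, even if several of its
        # close-to-text codes map onto the same broader category.
        categories = {CODE_TO_CATEGORY[c] for c in codes if c in CODE_TO_CATEGORY}
        counts.update(categories)
    return counts

docs = [["free speech", "due process"], ["freedom of expression", "child protection"]]
print(aggregate(docs))  # number of documents per broader category
```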

Table 4.1 provides a synthetic overview of the more than 90 principles detected in the corpus, organised and summarised into broader categories. The first category collects all the provisions explicitly concerned with compliance with international human rights law. The other two categories distinguish between substantive and procedural principles, drawing on the distinction between substantive and procedural law. In this context, ‘substantive principles’ refer to people’s expected behaviour according to accepted social norms as well as to their basic human rights, such as life and liberty. More specifically, substantive standards for content governance indicate people’s rights and responsibilities related to the creation and publication of online content. By contrast, ‘procedural principles’ indicate the formal rules and procedures through which substantive rights are created, exercised and enforced. More specifically, procedural standards indicate the rules through which decisions about users’ content are made, including the rulemaking process itself (Main 2010; Alexander 1998; Grey 1977).

Table 4.1 Civil society initiatives

In the first instance, the data seem to outline a common framework, suggesting a remarkable degree of consensus among our sample of civil society initiatives on a shared set of principles to be applied to content governance. The civil society initiatives analysed in the corpus rely strongly on human rights law. Half of our sample refers explicitly to one of the international human rights law instruments discussed in the previous section (especially the ICCPR, the UDHR and the Ruggie principles) or, more generally, calls for respect of international human rights standards. Even when these instruments are not quoted explicitly, the documents we analysed refer to rights, principles and standards drawn from the international human rights literature. Almost all the civil society charters (39 out of 40) deal with some substantive principles, mostly freedom of expression (38 out of 40). Three other categories of substantive principles, namely ‘prevention of harm’, ‘protection of social groups’ and ‘public interest’, mostly set the borders of the acceptable exceptions to freedom of expression that justify content removal. Moreover, 34 out of the 40 documents analysed mention some procedural principles, in particular those related to the rule of law (24), good governance principles (19) or procedural principles specifically tailored to social media platforms (21). Taken as a whole, procedural principles specify a series of conditions and requirements for exercising content moderation in a legitimate and rightful manner.

The convergence of civil society around the same framework appears more evident if we look at trends over time. For this purpose, we grouped the detected principles into five categories: (1) freedom of expression; (2) freedom of expression limitations, including ‘prevention of harm’, ‘protection of social groups’ and ‘public interest’; (3) intermediary liability; (4) rule of law; (5) other procedural principles, for the most part related to social media platform governance. Figure 4.3 shows that, while freedom of expression consistently remains the primary concern of civil society, when competing issues arise that may potentially curtail freedom of expression (e.g. hate speech, discrimination and child protection), such concerns are typically accompanied by demands for procedural rules to govern content moderation and prevent its use as a tool for undue censorship. It is noteworthy that in recent years there has been a shift in focus from requesting that states adopt a legal framework to establishing rules and procedures directly targeted at social media platforms and private companies. This last step seems to indicate that civil society organisations have matured enough experience and knowledge to start the process of translating international human rights standards into a complex of norms and mechanisms to be embedded within the socio-technical architecture of social media platforms.

Fig. 4.3 Trends over time (detected principles per period, grouped into the five categories above; freedom of expression peaks in 2011–2015 and other procedural principles in 2016–2020, each at around 13 documents; approximate values)
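
The aggregation behind a trends figure of this kind simply counts, for each time period, how many detected principles fall into each of the five categories. The following Python sketch illustrates that grouping with invented example records; the period boundaries and values are illustrative, not the actual dataset.

```python
from collections import defaultdict

# Hypothetical records: (publication year of the document, category of a detected principle)
records = [
    (2012, "freedom of expression"),
    (2013, "rule of law"),
    (2017, "other procedural principles"),
    (2018, "freedom of expression limitations"),
]

def period(year: int) -> str:
    """Map a year onto a five-year bucket such as 2011-2015 or 2016-2020."""
    # The first bucket in Figs. 4.2 and 4.3 (1997-2000) would need special-casing.
    start = 2001 + ((year - 2001) // 5) * 5
    return f"{start}-{start + 4}"

trends = defaultdict(lambda: defaultdict(int))
for year, category in records:
    trends[period(year)][category] += 1

for p in sorted(trends):
    print(p, dict(trends[p]))
```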

4.3 Defining Substantive Rights and Obligations

4.3.1 Avoiding the Traps of Intermediary Liability

Substantive law constitutes the segment of the legal system that outlines the rights and obligations of individuals and organisations. It also pertains to the rules establishing the legal outcomes that arise when such rights and obligations are breached. A preliminary question to address when reasoning about substantive principles for social media content governance concerns the responsibilities of intermediaries and Internet service providers dealing with user-generated content. This question arose in the 1990s with the massification and commercialisation of the Internet and found a first answer in Section 230 of Title 47 of the United States Code, introduced by the 1996 US Telecommunications Act, which states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. Section 230 conceived service providers not involved in content production as mere conduits or passive intermediaries, and therefore established a liability regime different from that of traditional media carrying out editorial tasks. The decision aimed at safeguarding the then-nascent Internet service provider market against legal risks and external interferences that could have discouraged innovation and investment in this new sector (Bowers and Zittrain 2020).

However, in the following paragraph, the act affirms that:

No provider or user of an interactive computer service shall be held liable on account of—any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.

As Gillespie observed, in so doing the US legislator allowed providers to intervene “on the terms they choose, while proclaiming their neutrality as a way to avoid obligations they prefer not to meet”, “without even being held to account as publishers, or for meeting any particular standards” of effective policing for how they do so, and ultimately ensuring them with “the most discretionary power” on content governance (Gillespie 2018, 30–33).

This “broad immunity” model has been a cornerstone of content moderation in the Western world. However, not even the US “broad immunity” model can be considered full immunity, due to the existence of particular exceptions, most notably copyright infringements under the Digital Millennium Copyright Act (DMCA). Other Western countries seem to be converging towards a “conditional liability” model, largely influenced by the European Directive on e-Commerce,Footnote 2 according to which immunity is provided to intermediaries insofar as they have no “actual knowledge” of illegal content and they comply with the so-called notice and takedown procedure, removing illicit third-party content in a timely manner in compliance with state authority or court requests (MacKinnon et al. 2015). In some contexts, user notifications too are deemed to establish the intermediary’s actual knowledge in cases of “manifestly illegal content” (CoE 2018).

However, civil society organisations claim that conditional liability, in the absence of a formal legal framework specifying clear rules and safeguards, will not limit the degree of arbitrariness with which platforms police users’ content, but could instead lead to ‘voluntary’ proactive measures and over-removals. In particular, the recent German Network Enforcement Act, or NetzDG, has been heavily criticised for its overbroad definition of unlawful content and the disproportionate sanctions imposed on platform administrators in case of non-compliance, resulting in a delegation of censorship responsibilities to social media companies induced to err on the side of caution and undermine due process guarantees.Footnote 3 Over time, civil society has developed a nuanced understanding of intermediary liability, seeking to avoid both the traps of platforms’ arbitrariness stemming from broad immunity and the incentives for unlawful over-removal associated with conditional liability regimes.

Internet Bills of Rights define what intermediaries’ rights and obligations are, focusing on three main points. First, they reiterate the principle according to which “no one should be held liable for content on the Internet of which they are not the author” (African Declaration Coalition 2014), and intermediaries should therefore be protected by a safe harbour regime against state pressures to undertake censorship on their behalf by imposing, de jure or de facto, a general monitoring obligation. Second, they “oppose full immunity for intermediaries because it prevents them from holding any kind of responsibility, leaving victims of infringement with no support, access to justice, or appeal mechanisms” (Access Now 2020), and thus they propose exceptions to the safe harbour regime for cases in which intermediaries fail to comply with an order to remove content issued by a court or other adjudicatory body, or take no action after being properly notified about potentially illegal and harmful content. In this regard, advocacy groups have also proposed alternatives to the usual notice and takedown procedure. They suggested implementing a ‘notice-wait-and-takedown’ procedure, which would require intermediaries to forward notices about allegedly harmful or illegal content to users, giving them the opportunity to modify or remove the content themselves or to object to the notice before the content is removed. Another proposed alternative is a ‘notice-and-notice’ procedure, under which Internet service providers are only legally required to forward notifications to alleged infringers. Third, in order to limit the arbitrary nature of platforms’ content moderation even where the latter police users’ content on their own initiative, civil society organisations state that platforms should adhere to clear rules and standards grounded in international human rights law. This issue is addressed in further detail in the section below on procedural principles. At the same time, in a couple of cases (Access Now 2020; APC 2018), civil society organisations contested the assumption at the basis of the current intermediary liability regime, arguing that the content curation practices through which platforms foster user engagement call into question their role as passive intermediaries. However, in the words of the Association for Progressive Communications, this does not mean making “platforms such as Facebook legally liable for the content carried on the platform, but there is a clear need for more transparency and accountability in how they manage and manipulate content and user data” (APC 2018).
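
The differences between these notice regimes can be summarised schematically. The following Python sketch is a deliberately simplified, hypothetical model of the three procedures discussed above; the function names, parameters and the seven-day grace period are illustrative assumptions, not provisions drawn from any specific charter or statute.

```python
from dataclasses import dataclass

@dataclass
class Notice:
    content_id: str
    claim: str  # alleged ground for removal, e.g. defamation or copyright

def notice_and_takedown(notice, remove, notify_user):
    # The intermediary removes the content promptly upon a valid notice
    # in order to preserve its conditional immunity, then informs the user.
    remove(notice.content_id)
    notify_user(notice)

def notice_wait_and_takedown(notice, remove, notify_user, wait_for_objection):
    # The notice is forwarded first; the user may edit, remove or object
    # within a grace period before any takedown occurs.
    notify_user(notice)
    objection = wait_for_objection(notice, days=7)  # illustrative grace period
    if objection is None:
        remove(notice.content_id)

def notice_and_notice(notice, notify_user):
    # The intermediary's only legal obligation is to pass the notice on;
    # removal, if any, is left to the user or to a court decision.
    notify_user(notice)
```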

4.3.2 The Centrality of Freedom of Expression

Reasoning about content governance in terms of substantive principles means questioning the fundamental values that must be promoted and protected when we communicate through social media platforms. In this regard, civil society organisations show a clear stance: they consistently prioritise freedom of expression as the primary concern when addressing issues related to content governance in most of the documents we analysed.Footnote 4 This is not surprising considering that, since the beginning, freedom of expression has been a cornerstone of any attempt to safeguard human rights and establish constitutional principles for the digital sphere (Gill et al. 2015; Kuleska 2008). Freedom of expression constitutes for civil society the ‘lens’ through which content moderation is framed, meaning that the vast majority of the other detected content moderation principles merely set the boundaries of permissible limitations on freedom of expression. This approach differs from the one taken by many nation states and incorporated into most platform terms and conditions or community standards, which often prioritise the definition and prohibition of non-acceptable content and behaviours.Footnote 5

The difference could appear trivial, but it is of crucial importance. In the former case, the focus is entirely on the protection of freedom of expression, which may be constrained only exceptionally, under narrowly defined conditions grounded in international human rights law standards. In the latter case, the focus is instead on the content to be removed in the name of a “public health” interest, “establishing accountability for concrete harms arising from online content, even where addressing those harms would mean limiting speech” (Bowers and Zittrain 2020). Not by chance, in the civil society charters, statements on freedom of expression are often coupled with a call for compliance with international human rights standards. Furthermore, most of the time freedom of expression is placed at the centre of a human rights and democratic value system, as exemplified in this excerpt from the Principles on Freedom of Expression and Privacy drafted by the Global Network Initiative (GNI 2018):

Freedom of opinion and expression is a human right and guarantor of human dignity. The right to freedom of opinion and expression includes the freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers. Freedom of opinion and expression supports an informed citizenry and is vital to ensuring public and private sector accountability. Broad public access to information and the freedom to create and communicate ideas are critical to the advancement of knowledge, economic opportunity and human potential.

The right to freedom of expression should not be restricted by governments, except in narrowly defined circumstances based on internationally recognized laws or standards. These restrictions should be consistent with international human rights laws and standards, the rule of law and be necessary and proportionate for the relevant purpose.

4.3.3 Setting the Boundaries of Freedom of Expression

As previously mentioned, the other substantive principles outlined in the civil society charters largely define the limits of acceptable exceptions to freedom of expression, providing justification for the removal of specific content. The first set of principles refers to the prevention of harm and has been detected in 16 documents. According to these texts, “certain very specific limitations to the right to freedom of expression may be undertaken on the grounds that they cause serious injury to the human rights of others” (IRPC 2010, 2015), as well as to their reputation and dignity, or when they “involve imminent danger to human beings” (EDRi 2014), including cases of harassment, cyberbullying and incitement to violence. Other principles can be gathered under the label of ‘protection of social groups’, since they aim to protect vulnerable or marginalised groups and ensure the full enjoyment of their rights. This kind of principle has been coded in 13 documents.

From a conceptual perspective, the protection of particular social groups can be broken down into three different modalities. First, we have principles providing that Internet operators should not discriminate against content on the basis of users’ race, colour, sex, language, religion or other status. This principle is particularly relevant in content moderation since platform policies and automated tools end up disproportionately impacting more vulnerable and marginalised groups (APC 2018). A second group of principles aims to guarantee a safe environment for particularly vulnerable groups, especially children, who should be protected from exploitation and from troubling or upsetting scenarios online. Third, we have principles calling for the removal of content that discriminates against, or incites hostility and violence towards, minorities and vulnerable and marginalised groups, namely hate speech. A last cluster of cases (six documents) refers to restrictions of freedom of expression based on some idea of the public interest, often recalling the ICCPR, which mentions in this regard “the protection of national security or of public order, or of public health or morals”. More recent civil society charters also consider fake news and disinformation. It is worth noting that civil society organisations usually refer to the above-mentioned freedom of expression exceptions in very general terms. They appear reluctant to provide criteria identifying the cases requiring content moderation and call on states to carry out this duty in accordance with human rights standards. As discussed in more detail in the next section, once possible freedom of expression exceptions have been identified, civil society organisations immediately specify that those exemptions must follow international human rights standards and procedural rules. Moreover, looking at their frequency, it seems that civil society is more likely to recognise legitimate freedom of expression exceptions when they concern other individual rights, or behaviours capable of concretely impacting individual integrity and dignity, rather than in cases involving more abstract and collective values that could more easily be employed to allow undue censorship.

4.4 Limiting Platforms’ Arbitrariness Through Procedural Principles

4.4.1 A Rule of Law Regime

From the perspective of civil society, procedural principles have a crucial relevance because they guarantee that substantive rights competing with freedom of expression, whatever they may be, are not misused or abused, resulting in undue censorship practices affecting people’s fundamental rights. Most of the civil society efforts to provide social media content governance with procedural safeguards and guarantees can be collected under the ‘rule of law’ label. It would be overly simplistic to define the rule of law as a single principle. It is better understood as a multifaceted concept encompassing both a political philosophy and a series of mechanisms and practices aimed at preventing the arbitrary exercise of power by subordinating it to well-defined and established rules and at affirming the equality before the law of all members of a political community, including, first and foremost, the decision-makers (Walker 1988; Choi 2019). The rule of law “comprises a number of principles of a formal and procedural character, addressing the way in which a community is governed” (Waldron 2020), and entails basic requirements about the characteristics of law and how it should be created and enforced. Laws should be accessible to all, general in form and universal in application. Legal standards should be stable, and legal responsibilities should not be imposed retrospectively. Moreover, laws should be internally consistent and provide for legal mechanisms to resolve potential conflicts between different norms.

The rule of law also implies an institutional separation between those who establish and those who enforce the law. Laws should be created or modified according to pre-established rules and procedures by bodies that are representative of those who will be affected by them. Furthermore, the law should be applied impartially by independent adjudicatory bodies. According to Article 14 ICCPR, everyone charged with a criminal offence shall be entitled to due process and a fair trial, entailing minimum guarantees, including among others “to be informed promptly and in detail in a language which he understands of the nature and cause of the charge against him; to have adequate time and facilities for the preparation of his defence and to communicate with counsel of his own choosing; to be tried without undue delay”. Finally, public decisions and the law itself must be subject to judicial review to ensure that decision-makers act in accordance with the law, first and foremost constitutional and human rights law.

Civil society’s attempts to establish a ‘rule of law’ regime for Internet content governance focus on the request for a proper legal framework. The adoption of an accessible legal framework with clear and precise rules provides both Internet intermediaries and online users with legal certainty and predictability, ensuring that everyone is fully aware of their obligations and rights and is able to regulate their conduct accordingly. Above all, a sound legal framework is also a guarantee against the eventuality that constitutional safeguards are circumvented by outsourcing online content moderation adjudication and enforcement to private entities through opaque and non-human-rights-compliant terms of service, secretive agreements or codes of conduct. Furthermore, civil society groups insist that states must not impose a ‘general monitoring obligation’ on intermediaries, conceived as a “mandate to undertake active monitoring of the content and information that users share […] applied indiscriminately and for an unlimited period of time” (Access Now 2020, 24). Human rights defenders fear that encouraging ‘proactive’ content moderation will lead to “over-removal of content or outright censorship” (APC 2018; ARTICLE 19 2017; EDRi 2014). In doing so, civil society groups recall the warnings advanced by the UN Special Rapporteur, David Kaye, in his 2018 Report on the promotion and protection of the right to freedom of opinion and expression (Kaye 2018).

According to civil society organisations, the legal framework for content moderation should, at a minimum:

1. Provide a clear definition of harmful and illegal content and of the conditions under which freedom of expression may be limited by law, through democratic processes and according to international human rights law standards.

2. Clearly establish under which conditions intermediaries are deemed responsible for user-generated content and which kinds of action they must undertake. This also includes the conditions under which an intermediary is supposed to acquire ‘actual knowledge’ of any infringing content. A legal framework should clarify which different duties, obligations and procedures stem from court orders, government requests, private notifications and flagging.

3. Encompass the content removal procedures, including the timeframe for the different phases of the process and the obligation to notify users about content takedown.

4. Guarantee appropriate judicial oversight over content removal and the right to legal remedy, including the obligation to notify users about content takedown and to provide them with all the necessary information to object to the removal decision.

The demand that states establish a legal framework for content moderation corresponds to a mirror request addressed to social media companies: not to remove content unless prescribed by law and, in any case, to strive to protect and promote human rights. For example, where requested by governments to take actions that may result in a violation of human rights, companies should “interpret government demands as narrowly as possible, seek clarification of the scope and legal foundation for such demands, require a court order before meeting government requests, and communicate transparently with users about risks and compliance with government demands” (African Declaration Coalition 2014).

Another commonly invoked procedural principle (mentioned in 15 documents) is the test of necessity and proportionality, which, together with prescription by law and the pursuit of a legitimate aim, is part of the international human rights standards for permissible limitations on freedom of expression. According to ARTICLE 19, necessity requires “to demonstrate in a specific and individualised fashion the precise nature of the threat to a legitimate aim, […] in particular by establishing a direct and immediate connection between the expression and the threat identified”, while proportionality means that “the least restrictive measure capable of achieving a given legitimate objective should be imposed” (ARTICLE 19 2018).

4.4.2 Good Governance Principles

Besides the rule of law, civil society organisations have proposed other procedural principles, which do not necessarily relate to the legal system itself but which could be considered ‘good governance’ principles.

Transparency is invoked in 16 charters and is deemed crucial to achieving good content governance standards. According to the Association for Progressive Communications, “increased transparency is needed in a number of areas in order to better safeguard freedom of expression against arbitrary content removals and to better understand how the content viewed online is being moderated” (ARTICLE 19 2018), while Access Now points out that “transparency is a precondition for gathering evidence about the implementation and the impact of existing laws. It enables legislators and judiciaries to understand the regulatory field better and to learn from past mistakes” (Access Now 2020). This consideration can also be extended to private policies. If the adoption of an accessible legal framework is considered a basic transparency requirement for states, private companies are similarly called upon to make their internal content moderation rules and procedures public in order to make content decisions predictable and understandable to users. Especially in the case of automated systems of content moderation and curation, full transparency is required in order to allow independent assessment, monitoring and evaluation.

Furthermore, civil society requires companies to provide users in a timely manner with all the information about the content moderation processes in which they are involved. Both states and companies are required to report on content removal activities regularly and publicly. Governments are asked to disclose information about all their requests to intermediaries that result in restrictions of freedom of expression. Companies are called on to publish data about content removal, covering both removals following governmental requests and those based on their own terms of service. Moreover, accountability is frequently mentioned among good governance principles. However, it tends to overlap with transparency or with appeal and remedy procedures, while references to oversight mechanisms through which external independent bodies could review platforms’ rules and procedures are almost entirely lacking.
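
As a purely illustrative sketch of what such regular reporting could disaggregate, the following Python data structure distinguishes removals triggered by state requests from those based on a company's own terms of service; the field names and example values are assumptions made for this example, not a format prescribed by any charter.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RemovalBasis(Enum):
    GOVERNMENT_REQUEST = "government_request"
    COURT_ORDER = "court_order"
    TERMS_OF_SERVICE = "terms_of_service"

@dataclass
class RemovalReportEntry:
    period: str                          # reporting period, e.g. "2020-Q1"
    basis: RemovalBasis                  # ground invoked for the removals
    requesting_authority: Optional[str]  # set only for state-driven removals
    items_removed: int
    appeals_received: int
    items_restored: int

# Example entry for removals based on the company's own terms of service.
entry = RemovalReportEntry("2020-Q1", RemovalBasis.TERMS_OF_SERVICE, None, 1200, 90, 35)
```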

A fair number of documents (13) call for participatory rule-making and decision-making in both the public and private spheres. Some of them express this concept in very general terms that encompass content governance even if not specifically tailored to it. According to these documents, both public and private governance processes should be open, inclusive and accountable, allowing for the meaningful participation of everyone affected and “expand[ing] human rights to the fullest extent possible”.Footnote 6 Among those that directly address content moderation regulation, some differences can be observed. Some actors, such as EDRi and the “Community input on Christchurch call”, place greater emphasis on ‘democratic’ governance, meaning that responsibilities for speech regulation rest with democratically elected bodies and must not be outsourced to companies, in order to ensure legal and constitutional safeguards. Others, such as ARTICLE 19, worry that state intervention will pressure companies towards forms of over-removal, and are more favourable to self-governance arrangements, provided that these are informed by international human rights standards and open to stakeholder participation.

4.5 Embedding Human Rights Standards into Platform Socio-Technical Design

In the last few years, civil society organisations have seemed to move forward on the road of digital constitutionalism by contextualising and adapting very general international human rights standards into more granular norms and rules to be implemented in the platform environment.

4.5.1 Transposing the Rule of Law

Most of civil society’s efforts in this regard have been devoted to generalising and re-specifying a ‘rule of law’ regime for the social media platform context. In the first place, platforms are asked to provide, through accessible terms of service and community standards, a degree of certainty and predictability for their content moderation rules and procedures comparable to that ensured by law, in order “to enable individuals to understand their implications and regulate their conduct accordingly” (ARTICLE 19 2017). Content moderation rules and procedures must be publicly available, easily accessible, delivered in the official language of the users’ country and written in plain language, avoiding obscure references to technical or legal jargon. Moreover, social media companies should be transparent about the laws and regulations they follow, and they should inform individuals about situations in which the company may be required to comply with state requests or demands that could affect users’ rights and freedoms (ARTICLE 19 2017; APC 2018).

An important element in the attempt to establish a ‘rule of law’ regime within the social media platform ecosystem relates to appeal and remedy procedures, which should reproduce some of the guarantees attached to due process and fair trial rights. It is worth noting that, according to these civil society organisations, the establishment of such procedures does not prevent or limit users from resorting to traditional legal means; rather, it introduces an additional, faster, more affordable and more immediate channel through which to claim their rights. According to the Santa Clara Principles (ACLU 2018), “companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension”, whose minimum standard includes: “i) human review by a person or panel of persons that was not involved in the initial decision; ii) an opportunity to present additional information that will be considered in the review; iii) notification of the results of the review, and a statement of the reasoning sufficient to allow the user to understand the decision”.

Companies are also requested to provide remedies, such as “restoring eliminated content in case of an illegitimate or erroneous removal; providing a right to reply with the same reach as the content that originated the complaint; offering an explanation of the measure; making information temporarily unavailable; providing notice to third parties; issuing apologies or corrections; providing economic compensation” (Access Now 2020). In particular, social media companies are expected to provide notice to each user whose content has been subject to moderation decisions, recalling the well-known due process principle according to which courts cannot hear a case unless the interested party has been given proper notice. This notification must include, at a minimum:

1. The indication of the alleged harmful or illegal content, which must be made accessible to the content provider by reporting it in the notification entirely, or including at least relevant excerpts, or by providing the URL or other information allowing for its localisation.

2. The specific clause of the terms of service, guidelines, community standards or law that has allegedly been violated. If the content has been removed as a result of a legal order or at the request of a public authority, the notification should also include the allegedly infringed law, the issuing authority and the identifier of the related act.

3. Details about the methods used to detect and remove the content, such as user flags, government reports, trusted flaggers, automated systems or external legal complaints.

4. An explanation of the content provider’s rights and the procedures for appealing the decision or seeking legal review and redress.

Notifications should be accompanied by the ability for content providers to revise their posts in order to prevent or overturn content removal decisions, and to submit a counter-notification when they believe that their content was removed in error, explaining their reasoning and requesting that the content be restored. Counter-notifications should be considered a key element of appeal procedures and of a broader ‘right to defence’ in the content moderation context.
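
Putting together the minimum contents listed above, a notification and the corresponding counter-notification could be modelled as simple data structures. The Python sketch below is purely illustrative; the field names are assumptions introduced for this example rather than terms drawn from any specific charter.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModerationNotice:
    content_excerpt_or_url: str        # item 1: the allegedly harmful or illegal content
    violated_clause: str               # item 2: ToS clause, community standard or law invoked
    issuing_authority: Optional[str]   # item 2: set only when removal follows a legal order
    detection_method: str              # item 3: user flag, trusted flagger, automated system...
    appeal_instructions: str           # item 4: how to appeal or seek legal review and redress

@dataclass
class CounterNotice:
    notice: ModerationNotice
    reasoning: str                     # why the user believes the removal was an error
    restore_requested: bool = True
```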

However, as stated in the previous section, the rule of law also implies that rules are established or modified through a democratic decision-making process, in turn shaped by well-defined and pre-established rules. Furthermore, according to the rule of law, there should be some kind of institutional separation between those who create, execute and adjudicate rules and decisions. None of this exists in the social media platform environment. Content moderation rules are typically created by social media platforms’ legal and policy teams, which are accountable to their top management and shareholders rather than to the affected communities, and when experts and stakeholders are involved, this occurs in a merely consultative role. Internal policies can easily be modified or dismissed according to companies’ interests and leadership views. Content removal decisions are taken by platforms’ employees with no guarantee of independent judgement, appeal or review.

It appears evident that this degree of arbitrariness in content governance significantly undermines efforts to transpose a rule of law regime into platforms’ governance structures. For this reason, some civil society organisations have proposed creating independent self-regulatory bodies entrusted with the duty of defining content moderation criteria and overseeing their application. In particular, ARTICLE 19 (2018) suggested establishing an ad hoc ‘Social Media Council’, following the example of previously successful experiences such as press councils. Its independence from any specific platform, as well as its accountability and representativeness, should be guaranteed by a multistakeholder governance structure. The Council should adopt a Charter of Ethics for social media consistent with international human rights standards; draft recommendations clarifying the interpretation and application of ethical standards; and review platforms’ decisions at the request of individual users, with the power to impose sanctions in cases of unethical behaviour violating the Charter. As part of their membership in a Social Media Council, platforms would have to commit to making their content moderation practices auditable by the Council, to providing it with economic resources on a long-term basis and to accepting Council decisions as binding. This would help to ensure that the Council is able to effectively oversee and regulate platforms’ content moderation practices and that platforms are accountable for their actions.

4.5.2 Human Rights by Design

At the beginning of this chapter, we stated that any attempt to constitutionalise online content governance, in order to be effective, needs to embed fundamental rights into the socio-technical design of social media platforms. In the last few years, civil society organisations have become increasingly aware of this need and, not by chance, are asking more frequently that platforms adopt a human rights by design approach. This consists of incorporating human rights considerations into the design and development of a platform (or of one of its tools, applications or other components) from the very beginning, rather than trying to address fundamental rights violations and abuses at a later stage, when the platform has already been launched and scaled up (Access Now 2020; Reporters Sans Frontiers 2018; ARTICLE 19 2017). Besides developing clear and specific policies and guidelines for content moderation based on human rights standards, this approach also implies their integration into the platform’s user experience and underlying technical infrastructure. It includes providing training and support to the platform’s users and moderators on the use of the platform itself and its content moderation policies, monitoring their effectiveness and making adjustments as needed to ensure that they are achieving their intended goals.

Embedding human rights standards into platforms’ socio-technical design means translating and implementing them into organisational arrangements, management systems and technical specifications. This task is entrusted to the platforms themselves and to the technical community; however, civil society organisations can play a key role by pressing for the adoption of a human rights by design approach and by monitoring the effectiveness of the implemented arrangements. Moreover, civil society can make a specific contribution to constitutionalising social media by developing and promoting the adoption of instruments such as human rights impact assessments and human rights due diligence (Access Now 2020; APC 2018; Reporters Sans Frontiers 2018), through which social media platforms can scrutinise their policies, products and services on an ongoing basis, in consultation with third-party human rights experts, in order to evaluate their impact on human rights. Companies are also called on to share information and data with researchers and civil society organisations and to support independent research. In so doing, companies can gain a better understanding of the potential impacts of their practices on human rights, develop strategies for addressing any negative externalities and ensure that their content moderation practices are consistent with their human rights obligations.

4.5.3 Automated Content Moderation

Most of the discussion on how to embed constitutional principles for content governance within the socio-technical infrastructure of social media platforms has focused on the specific topic of automated content moderation. The latter can be defined as the employment of algorithms and artificial intelligence to “classify user-generated content based on either matching or prediction, leading to a decision and governance outcome (e.g. removal, geo-blocking, account takedown)” (Gorwa et al. 2020, 3). Although the use of automated moderation systems is seen as essential for addressing increasing public demands that social media platforms take greater responsibility for the content they host, it introduces a further dilemma for content governance. On the one hand, automated systems make it possible to cope with the scale and pace of communication flows on social media platforms. On the other hand, ensuring the rule of law and human rights requires that decisions be taken on an individual basis according to a series of procedures and guarantees. Civil society’s main concern here is that automated content moderation may result in “general monitoring” practices (Access Now 2020), raising serious human rights concerns, both for freedom of expression and for privacy. Additional concerns have been raised regarding the accuracy, fairness, transparency and accountability of the process, due to a number of technical factors.

Automated content moderation systems can be divided into matching and classifying methods for identifying harmful and illegal content. The former generate a unique identifier, or ‘hash’, for each piece of digital content uploaded to the platform and then compare it against a database of known hashes, related, for example, to copyrighted materials, to content ordered to be removed by a public authority or to content previously classified as harmful or illegal. The latter use machine-learning algorithms to analyse digital content in order to automatically detect and classify content that violates certain rules or policies. To this end, machine-learning algorithms are trained on large datasets of digital content previously labelled as harmful or illegal. The algorithm then uses these training data to learn the characteristics of content that is likely to be inappropriate. Once trained, it can be applied to new content to automatically classify it as appropriate or inappropriate. Both methods have proven unable to grasp contextualised uses of language or linguistic nuance, such as irony, sarcasm, content shared in order to denounce its inappropriateness or, conversely, covert threats, leading to systematic false positive and false negative classifications. Furthermore, civil society organisations have pointed out that filtering techniques such as hash-matching, which remove content before it is uploaded, may deprive civil society, academics and law enforcement of a precious trove of evidence with which to identify and prosecute human rights abuses (APC 2018).
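To make the distinction concrete, the following minimal sketch contrasts a hash-matching check with a toy machine-learning classifier, assuming a simple text-moderation scenario and the scikit-learn library. All data, names and labels are hypothetical, and real platforms rely on perceptual hashing and far larger, curated training corpora.

```python
# Illustrative sketch only: data and names are hypothetical.
import hashlib

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# --- Matching: compare the hash of uploaded content against known hashes ---
KNOWN_HASHES = {hashlib.sha256(b"previously removed content").hexdigest()}

def matches_known_content(content: bytes) -> bool:
    """Return True if the content's hash is already in the blocklist."""
    return hashlib.sha256(content).hexdigest() in KNOWN_HASHES

# --- Classifying: train a text classifier on labelled examples ---
train_texts = [
    "I will hurt you",             # labelled as policy-violating
    "you deserve to be attacked",  # labelled as policy-violating
    "have a nice day",             # labelled as acceptable
    "great picture, thanks",       # labelled as acceptable
]
train_labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

def classify(text: str) -> int:
    """Predict whether a new post violates the (hypothetical) policy."""
    return int(classifier.predict([text])[0])

print(matches_known_content(b"previously removed content"))  # True
print(classify("have a wonderful day"))                      # likely 0
```

The sketch also makes the limits discussed above visible: the matcher only recognises exact copies of known content, and the classifier generalises from whatever patterns happen to be in its small training set, with no grasp of irony or context.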

The most relevant issues, however, are posed by machine-learning classification. One of the major challenges is the potential for bias at the various stages of the machine-learning pipeline. These biases can manifest in a number of ways, such as through the over- or under-representation of certain types of content in training datasets, the reflection of cultural, linguistic or political prejudices in labelling, or the introduction of bias by developers during data processing, feature engineering or the setting of model hyperparameters. External factors, such as adversarial or poisoning attacks, can also introduce bias into the system. These sources of error can lead to a systematic disparate impact that disproportionately affects certain social groups or types of content. Machine-learning content moderation also poses relevant issues in terms of accountability and transparency (Smith 2020; Pasquale 2015). Especially when deep learning algorithms are employed, understanding and explaining how moderation decisions are made may be challenging or impossible, even for the very people who created the systems (Palladino 2022). Deep learning algorithms are complex and hierarchical, with multiple layers of interconnected nodes that transform the initial input data through increasingly complex mathematical functions that are unintelligible to the human mind. Furthermore, these systems evolve and adapt over time as they analyse new cases, undermining the possibility of providing certainty and predictability in content moderation decisions.
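As an illustration of how such disparate impact can be made measurable, the following sketch, using entirely hypothetical data and group labels, compares the false positive rates of a moderation classifier across two language groups; a marked gap between the rates would signal the kind of systematic bias discussed above.

```python
# Hypothetical audit sketch: compare false positive rates across groups.
from collections import defaultdict

# Each record: (group, true_label, predicted_label); 1 = flagged as violating.
predictions = [
    ("lang_A", 0, 0), ("lang_A", 0, 1), ("lang_A", 1, 1),
    ("lang_B", 0, 1), ("lang_B", 0, 1), ("lang_B", 1, 1),
]

def false_positive_rates(records):
    """False positive rate per group: flagged benign posts / all benign posts."""
    fp = defaultdict(int)
    negatives = defaultdict(int)
    for group, truth, pred in records:
        if truth == 0:
            negatives[group] += 1
            if pred == 1:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives}

print(false_positive_rates(predictions))
# e.g. {'lang_A': 0.5, 'lang_B': 1.0} -> benign posts in group B are over-flagged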

The aforementioned concerns raise doubts as to whether automated content moderation is compatible with the rule of law. The Global Forum for Media Development, in its Statement on the Christchurch Call, stated that automated content removal “cannot currently be done in a rights-respecting way” and advocated for the rejection of “unaccountable removal of content” and “incentives for over-removal of content” (Global Forum for Media Development 2019). Similarly, the Zeit foundation, in its Charter of Digital Fundamental Rights of the European Union, affirmed, “Everyone has the right not to be the subject of computerised decisions which have significant consequences for their lives”, and added, “Decisions which have ethical implications or which set a precedent may only be taken by a person”. Even where admitted, automated content governance should be subject to specific limits and conditions. According to Access Now, “the use of automated measures should be accepted only in limited cases of manifestly illegal content that is not context-dependant, and should never be imposed as a legal obligation on platforms” (Access Now 2020, 26), for example, sexual abuse against minors; in all other cases, algorithms may be used to flag suspicious content, but the final decision should be taken by human operators. Even then, one should consider automation bias, that is, the tendency of humans to over-rely on automated systems even when these are not the best decision-making tool.
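A minimal sketch of the division of labour recommended by Access Now might look as follows. The category names, confidence threshold and decision structure are all hypothetical; the point is simply that automated removal is confined to a narrow, context-independent category, while everything else is escalated to human moderators.

```python
# Hypothetical routing logic: auto-remove only a narrow category, else escalate.
from dataclasses import dataclass

AUTO_REMOVABLE = {"child_sexual_abuse"}  # manifestly illegal, not context-dependent

@dataclass
class ModerationDecision:
    action: str      # "remove", "human_review" or "keep"
    decided_by: str  # "automated" or "pending_human"

def route(category: str, confidence: float, threshold: float = 0.9) -> ModerationDecision:
    """Route a flagged item: automate only the narrow category, escalate the rest."""
    if category in AUTO_REMOVABLE and confidence >= threshold:
        return ModerationDecision("remove", "automated")
    if confidence >= threshold:
        # Flag the content, but leave the final decision to a human operator.
        return ModerationDecision("human_review", "pending_human")
    return ModerationDecision("keep", "automated")

print(route("hate_speech", 0.95))  # -> escalated to a human moderator
```

Automation bias would, of course, undercut this safeguard if human reviewers simply rubber-stamped the machine’s suggestion, which is why the quality of human review matters as much as its formal presence.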

In any case, individuals should be notified when automated systems are being used to police their content, and they must be afforded the opportunity to request a human review of such decisions. Furthermore, companies should be required to explain how automated detection is used across different categories of content, as well as the reasoning behind any decision to remove said content. By and large, civil society associations ask companies to tackle the well-known accuracy, transparency, accountability and fairness issues raised by automated content governance by adopting a human rights by design approach that puts constitutional standards at the centre of the design, deployment and implementation of artificial intelligence systems (Palladino 2021a, 2022). Automated systems should comply with transparency requirements, providing, as far as possible, accessible explanations of their functioning and of the criteria employed in their decisions, as well as information about the procedures behind and beyond the application, including appeal and remedy mechanisms.
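By way of illustration, a user-facing notice embodying these transparency requirements could be modelled as a simple record such as the hypothetical sketch below; the field names and values are assumptions rather than any platform’s actual schema.

```python
# Hypothetical structure of a notice sent to a user affected by automation.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModerationNotice:
    content_id: str
    action: str        # e.g. "removed", "geo-blocked"
    automated: bool    # whether an automated system was involved
    policy_cited: str  # the rule the content allegedly violates
    explanation: str   # human-readable reasoning for the decision
    appeal_url: str    # where to request human review
    issued_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

notice = ModerationNotice(
    content_id="post-123",
    action="removed",
    automated=True,
    policy_cited="Community Standard on harassment",
    explanation="Classifier matched patterns associated with targeted harassment.",
    appeal_url="https://example.invalid/appeals/post-123",
)
```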

According to the Santa Clara Principles, companies should ensure that their content moderation systems “work reliably and effectively”, pursuing “accuracy and non-discrimination in detection methods, submitting to regular assessments”, and should “actively monitor the quality of their decision-making to assure high confidence levels, and are encouraged to publicly share data about the accuracy of their systems”. Civil society organisations recommend that the quality and accuracy of automated content moderation systems be assessed through third-party oversight and independent auditing. Such systems must therefore be designed to allow external scrutiny by means of proper traceability measures and documentation.
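The traceability measures mentioned here could, for instance, take the form of an append-only decision log that an independent auditor can later use to recompute accuracy. The following sketch, with a hypothetical schema and a simple overturn-rate metric, illustrates the idea.

```python
# Hypothetical audit trail: log every automated decision for later external review.
import csv
from datetime import datetime, timezone

AUDIT_LOG = "moderation_audit_log.csv"
FIELDS = ["timestamp", "content_id", "model_version", "score", "decision", "human_override"]

def log_decision(content_id, model_version, score, decision, human_override=""):
    """Append an auditable record of an automated moderation decision."""
    with open(AUDIT_LOG, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "content_id": content_id,
            "model_version": model_version,
            "score": round(score, 4),
            "decision": decision,
            "human_override": human_override,
        })

def overturn_rate(path=AUDIT_LOG):
    """Share of automated decisions later overturned by humans: a crude accuracy proxy."""
    with open(path) as f:
        rows = list(csv.DictReader(f))
    overturned = sum(1 for r in rows if r["human_override"] and r["human_override"] != r["decision"])
    return overturned / len(rows) if rows else 0.0
```

A log of this kind does not by itself guarantee accountability, but without such documentation the third-party oversight and independent auditing that civil society organisations call for would have nothing to work with.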