4.1 Introduction

By early 2019, social media platforms had started to make some tentative changes to their content moderation policies around vaccine-related health misinformation in response to the measles outbreaks in the USA (DiResta and Wardle 2019). However, the COVID-19 pandemic created an unprecedented situation where stronger action was required. As a result, many of the platforms instituted a range of new policy changes designed to mitigate the impact of COVID-19-related misinformation. Although these policy changes have resulted in key anti-vaccine misinformation accounts being de-platformed, as well as egregious falsehoods being labelled or removed, health misinformation remains a problem on all platforms (Krishnan et al. 2021).

Health misinformation is not only a platform issue. The past two years have demonstrated the impact of low-quality research, as well as the spreading of conspiracy theories by political elites, particularly when these are amplified through newspapers, television networks, and radio stations. In parallel, health misinformation continues to proliferate through conversations around the dinner table and at the school gate. In this chapter, we will focus on explaining the complexity of the current information environment and the challenges that have been exposed during the COVID-19 global public health crisis for those working to mitigate the impact of the infodemic.

4.2 The Information Environment

‘The information environment’ is a term frequently used in discussions of the infodemic, but it has no clear or agreed definition. There are a number of characteristics within the concept, however, that are critical to an understanding of the current crisis. Firstly, information is transferred through communication, which can be understood through answering five questions: Who? Says what? In which channel? To whom? With what effect? (Lasswell 1948). Certainly, the final question is very difficult to measure (as we discuss below) and, as a result, sweeping generalisations are too often made about the impact of different messages. In order to capture these five questions, Neil Postman (1970) used the metaphor of a media ecology, focusing on understanding the relationship between people and their communications technologies through the study of media structures, content, and impact. More recently, Luciano Floridi (2010) attempted to emphasise the ways in which the information environment constitutes ‘all informational processes, services, and entities, thus including informational agents as well as their properties, interactions, and mutual relations’ (p. 9, emphasis in original).

Terminology around the problem of misinformation has also multiplied and two major analogies are frequently used: that of information warfare (Schwartau 1993), using militarised language and metaphors; and that of information pollution (Phillips and Milner 2021). Wardle and Derakhshan (2017) coined the term ‘information disorder’ as a way of capturing the different characteristics of the current information environment, with an emphasis on types, elements, and phases.

The modern information environment is complex. As of 2022, 62.5% of the world’s 7.9 billion people were reported to be internet users and 58.4% of the world population were reported to be using social media (We Are Social 2022). Media outlets are increasingly relying on paywalls for their business models, and on artificial intelligence and advertising to grow their audiences and attract traffic to their content (Reuters Institute 2022). Audiences are also increasingly using closed messaging apps to consume and share news. In Brazil alone, 38% of the population uses WhatsApp to share news (Kalogeropoulos 2021).
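To make these proportions concrete, the implied absolute numbers can be computed directly. This is a simple back-of-the-envelope calculation based on the figures quoted above, not part of the We Are Social report itself:

```python
# Back-of-the-envelope check on the reported shares (We Are Social 2022).
world_population = 7.9e9   # approximate world population in 2022
internet_share = 0.625     # reported share who are internet users
social_share = 0.584       # reported share using social media

internet_users = world_population * internet_share
social_users = world_population * social_share

print(f"{internet_users / 1e9:.2f} billion internet users")    # ≈ 4.94 billion
print(f"{social_users / 1e9:.2f} billion social media users")  # ≈ 4.61 billion
```

In other words, roughly 4.9 billion people were online and around 4.6 billion were on social media, which is the scale at which any infodemic response must operate.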

The COVID-19 pandemic struck in this complex information environment and, as a result, created an infodemic (WHO 2020). With the uncertainty that came with the pandemic, alongside the increasing public demand for information, conspiracy theories and misinformation found fertile ground in which to flourish. One data point for the scale of misinformation is the work done by fact-checkers during this period. The Coronavirus Facts Alliance, coordinated by the International Fact Checking Network, produced more than 16,000 fact-checks in over 40 languages, covering more than 86 countries since the onset of the pandemic (Poynter 2022).

The real-world harm of these rumours and falsehoods quickly became clear when claims started to lead to property damage, serious injury, and loss of life. In Iran, misinformation directly led to the death of a number of people who drank toxic methanol thinking it would protect them from COVID-19 (AlJazeera 2020). In Nigeria, the USA, and a number of countries in South America, cases of chloroquine poisoning were reported, linked to former US president Donald Trump’s statement that the drug could treat COVID-19 (Busari and Adebayo 2020). In the UK, the Republic of Ireland, and the Netherlands, numerous 5G towers were torched because vigilantes believed they were spreading the coronavirus (AP News 2020).

It is important to remember that misinformation has impeded public health responses in the past. For example, in 2003, there was a boycott of the polio vaccine in five northern Nigerian states because it was perceived by some religious leaders to be a plot to sterilise Muslim children (Ghinai et al. 2013). That action led to one of the worst polio outbreaks on the continent and set back wild polio eradication in Africa by nearly two decades. The information environment back then, however, was different from the one we are living in today. Information now travels far faster, and its real-life impact is correspondingly more acute.

4.3 Challenges Posed by the Modern Information Ecosystem

The networked information ecosystem provides innumerable benefits, most notably giving previously unheard voices a platform and a mechanism to connect (Shirky 2008). However, as has been witnessed over the past few years, this is also leading to a number of serious unintended consequences, particularly in terms of false or misleading information resulting in confusion and dangerous behaviours (Office of the Surgeon General 2021). Over the past few years, it has become increasingly clear that there is no quick fix.

As we consider the long-term work necessary for understanding and responding to these consequences, we face a number of significant challenges, with three in particular that require consideration within the specific infodemic context: (1) the asynchronous nature of information environments; (2) the difficulties associated with researching these issues due to the complexity of the information environment; and finally (3) the fact that disinformation flows across borders seamlessly, whereas responses are too often organised by nation states.

4.3.1 Asynchronous Nature of Information Environments

The pre-internet design of official communications was top-down, linear, and hierarchical. Limited numbers of news outlets played an inflated role in shaping the way people understood the world. It was designed so that a few trusted messengers – spokespeople, politicians, and news anchors – had the authority to disseminate messages to audiences. While communication theorists in the 1970s (Morley 1974) and 1980s (Hall 1980; Hartley 1987; Katz 1980) challenged the idea that this was a purely passive relationship, emphasising that audiences were active and able to read texts in an oppositional way, the restricted number of outlets, channels, and spaces where people could access information significantly limited the amount of information conveyed and, consequently, the opportunities for false information to spread widely.

The advent of the internet transformed this status quo, allowing audiences to become active participants in the creation and dissemination of information. Critically, however, those in official positions today still rely heavily on the traditional model of communication, thinking of the internet as a way to distribute messages more quickly, and to more people, rather than as an opportunity to truly take advantage of the participatory nature of the technology. So, while a news outlet or health authority will use Facebook, Twitter, or Instagram to reach audiences, too often the platform is used simply as a ‘broadcast’ mechanism (Dotto et al. 2020).

In contrast, disinformation actors fundamentally understand the mechanics of the internet and the characteristics that make people feel part of something (Starbird et al. 2021). The most effective disinformation actors have understood that community is at the heart of effective communications. Therefore, they have spent time cultivating communities, often by infiltrating existing ones (Dodson et al. 2021), and creating content designed to appeal to people’s emotions (Freelon and Lokot 2020). They also provide opportunities for people to manifest their identification within that community by creating authentic content and messaging. Through that process, they become trusted messengers to recruit and build up the community further. The result is engaging, authentic, dynamic communication spaces, where people feel heard and experience a sense of agency.

Comparing official information environments with communities where disinformation flourishes provides a stark contrast. Official environments are ostensibly more traditional in the sense of being built on facts, science and reason, and rely heavily on text. They are also often structured top-down and rely on people continuing to trust official messengers. Disinformation spaces, in contrast, are built on community, emotion, anecdotes and personal stories, and tend to be far more visual and aural. The characteristics of these spaces align perfectly with the ways in which communities connect offline. They also align closely with the design of social platforms where algorithms privilege emotion and engagement (Schreiner et al. 2021).

Perhaps what is most critical to recognise here is that disinformation actors continue to find vulnerabilities in the traditional information environment. They are also aware that there is less understanding of the dynamics of a networked environment by official messengers who, unfortunately, still prepare as if it was 1992 rather than 2022. For example, disinformation actors will search for statistics or headlines that can be shared without context to tell a worrying or dangerous story, knowing that, although the content is accurate in its full context, when only a visual or headline is shown (which is often all that appears on social platforms), it is the misleading framing that takes hold (Yin et al. 2018).

Disinformation actors instigate dialogues in order to create opportunities to advance their opinions; for example, they pose a simple question on Facebook, such as asking whether people are concerned about vaccines impacting their fertility, and then utilise the comments by pushing bogus or misleading research that can lead people to reach false conclusions (DiResta 2021). Alternatively, disinformation actors can target journalists by pretending to be trusted sources but push false anecdotes or content in the hope that it will be covered by an outlet with a larger audience than that which they personally have access to (McFarland and Somerville 2020).

4.3.2 Difficulties of Researching the Information Environment

As already stated in Sect. 4.2, the information environment today is incredibly complex. Those studying media effects have continued to struggle with the challenges of measuring audience consumption of different media products (Allen 1981). While there have been ways of measuring television and radio exposure, understanding levels of engagement has always been problematic. For example, if someone has the television news on in the background all day, does it have the same impact as someone sitting down to watch their favourite hour-long soap opera in the evening? More challenging, of course, is an understanding of the intersection between traditional media content and offline conversations with peers. Back in the 1950s, Paul Lazarsfeld and Elihu Katz (Lazarsfeld et al. 1944; Katz and Lazarsfeld 1955) described a two-step flow theory, which incorporated the concept that ideas were rarely transmitted directly to audiences and, instead, people were persuaded when those same ideas were passed through opinion leaders.

The problems emphasised by communication scholars for decades are now complicated further by the intersection between off-line communications and professional broadcast media with online spaces, whether they are websites accessed via search engines, posts on social networks or closed groups on Facebook, or messaging apps such as WhatsApp, Telegram, or WeChat (de Vreese and Neijens 2016).

Globally, people are spending, on average, 170 min online every day, with an additional 145 min on social media (Statista 2021). For the majority, this time is being spent on smartphones rather than desktops. In addition, everyone’s daily diet of online activity and consumption is different. No two people’s search histories, newsfeeds, or chat histories look the same. As such, there is no effective method for collecting an accurate picture of what people are consuming, and from where, which makes measuring the direct impact of messages a seemingly impossible task.

Researchers are doing their best to unpick these dynamics, but they face serious challenges. It is incredibly difficult to access data from social media platforms. The one exception is Twitter, where the platform either releases particular datasets or researchers are able to access the ‘firehose’ of tweets relatively easily (Tornes 2021). As a result, the vast majority of research on misinformation focuses on Twitter. While better than nothing, Twitter is, however, not the most popular platform and is rarely used in many countries (Mejova et al. 2015).
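For most researchers, ‘relatively easy access’ has in practice meant Twitter’s public API rather than the full firehose. As a rough illustration (not a description of any particular study’s method), the sketch below composes a Twitter API v2 ‘recent search’ request URL. The endpoint and parameter names follow Twitter’s public v2 documentation; the keywords, language filter, and exclusion of retweets are illustrative assumptions:

```python
from urllib.parse import urlencode

# Twitter API v2 "recent search" endpoint (per the public v2 documentation).
ENDPOINT = "https://api.twitter.com/2/tweets/search/recent"

def build_search_url(keywords, lang="en", max_results=100):
    """Compose a recent-search URL for a set of misinformation-related
    keywords. Retweets are excluded so only original posts are returned;
    sending the request would also require a bearer token header."""
    query = f"({' OR '.join(keywords)}) -is:retweet lang:{lang}"
    params = {
        "query": query,
        "max_results": max_results,
        "tweet.fields": "created_at,public_metrics",  # engagement metadata
    }
    return f"{ENDPOINT}?{urlencode(params)}"

# Hypothetical query for two rumour topics discussed earlier in the chapter.
url = build_search_url(["vaccine fertility", "5G coronavirus"])
print(url)
```

The very fact that a sketch like this is only straightforward for Twitter underlines the point: comparable programmatic access simply does not exist for most other platforms.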

While it is possible to conduct some research with Facebook and Instagram data, it is limited to what is available via CrowdTangle, a tool owned by Facebook. However, researchers and journalists attempting to use it have documented a number of its limitations. YouTube research is also possible, but again not easy. Those who have studied the platform have focused more on the impact of the algorithm on search results.

In many parts of the world, the most popular digital platform is WhatsApp (Statista 2022). However, the encrypted nature of the platform means research is seriously limited and reliant on tiplines or joining groups, both of which have significant limitations in terms of sampling. More importantly, the absence of engagement data means it is impossible to see how many people have viewed a particular post.

Much work has been done in terms of attempting to pressure platforms into sharing data (EDMO 2022). Certainly, there are very significant issues around privacy that have to be addressed. The ability to identify someone via the information they search for or consume is disturbingly easy. As such, platforms have pushed back on ethical grounds with regards to sharing data without the required protections in place. Social Science One, a project in partnership with Facebook, is one example of a comprehensive and sophisticated attempt at providing necessary protections. However, although the data was shared after a complex de-identification platform was built, problems with the data were revealed in 2021 that undermined the whole exercise (Timberg 2021).

There have also been interesting attempts at citizen science approaches to studying the platforms. For example, ProPublica and The Markup, two US-based non-profit newsrooms, built browser extensions, the ‘Political Ad Collector’ (Merrill 2018) and the Citizen Browser (The Markup 2020), respectively. These browser plugins require user agreement to share the results of the content that appears on platforms via their browsers. It is a potentially promising avenue, but building an acceptance of ‘donating your data’ to science seems to be a long way off.

4.3.3 Cross-Border Disinformation Flows

The networked information environment is borderless. While language differences act as something of a barrier, diaspora communities encourage the flow of information across borders (Longoria et al. 2021). In a world of visuals, memes, diagrams, and videos (with automatically translated closed captions), a rumour can travel from São Paulo to Istanbul to Manila in seconds. For example, researchers have been able to track the transnational flow of rumours between Francophone countries (Smith et al. 2020). Genuine information can also travel, but without context, or with mistakes in translation, it can turn into a rumour or piece of misleading information just as fast.

Disinformation actors use this situation to their advantage. The anti-vaccine movement, in particular, has been seen to build momentum in one place before taking advantage of personal connections in other countries via closed groups and large accounts. For example, research by First Draft analysed the ways in which anti-vaccine disinformation narratives flowed from the USA to western African countries (Dotto and Cubbon 2021). That such a process was taking place also became clear during the measles outbreak in Samoa in spring 2019. US-based anti-vaccine activists were infiltrating Facebook groups in the island nation to push rumours and falsehoods about the efficacy of vaccines against the disease. This activity was judged to have directly impacted subsequent vaccine uptake (BBC News 2019).

Over the past two years, there has been significant evidence of anti-mask and anti-vaccine activists based in the USA pushing narratives in western Europe and Australia. The conspiracy theory QAnon, which started as a specifically US phenomenon, has also been transported to many locations around the world, with different countries and cultures focusing on the parts of the conspiracy that resonate most strongly. Unfortunately, while disinformation flows across borders, this is much less common in terms of accurate information. Anti-disinformation initiatives such as fact-checking groups or media literacy programs, government regulation, and even funding mechanisms are almost entirely organised around nation-states.

Finally, while platform content moderation is starting to catch problematic content in English, we are aware that it falls short in other languages and cultures (Horwitz 2021; Wong 2021). Other than those headquartered in China, most major social media platforms are based in Silicon Valley in the USA. As such, most of the research is being undertaken in the USA, and many of the initiatives are US based and funded by US philanthropists. This disproportionate response around one language, and one country, means the complexity of this truly global, networked problem is being overlooked and misunderstood.

4.4 Conclusion

We need to build an information environment in which those responsible for disseminating accurate information recognise the networked, dynamic nature of today’s communication infrastructure. There need to be new ways of making communication peer-to-peer, engaging, and participatory, where people feel they are being heard and have a part to play. Content needs to be much more visual, engaging, and authentic to different communities, rather than designed top-down for mass broadcast and dissemination.

All those working in the information environment, from journalists, to health authority spokespeople, to healthcare practitioners, need to be trained in the mechanics of the modern communication environment so they are prepared for all the mechanisms that are being utilised.

While social media platforms should continue to be pressured to build systems for independent research that protect the privacy of users, there also need to be more creative mechanisms for building research questions with impacted communities so that consent can be built in from the very beginning. Bringing people into the research process not only allows for more innovative research to take place, but asking people to be involved in the collection and sharing of their data will play an important role in terms of educating people about the ways in which algorithms impact what they see. This should also help kick-start a conversation about the type of information people are seeing on their social media feeds, what they think is appropriate, and what is not.

Disinformation actors generally think globally, either from the start of their campaigns, or by taking advantage once they see that disinformation has taken off and crossed borders. Platforms, too, are globally focused, potentially avoiding individual jurisdictions. Yet our responses to disinformation are too often at the national level and have a disproportionate focus on the USA. The response needs to be as global as the problem.