Background

In the last few years, a growing and thriving AI ecosystem has emerged in Africa. Within this ecosystem, local tech spaces, together with a number of internationally driven technology hubs and centres established by big tech companies such as Twitter, Google, Facebook, Alibaba Group, Huawei, Amazon and Microsoft, have significantly increased the development and deployment of AI systems in Africa. While these tech spaces and hubs are focused on using AI to meet local challenges (e.g. poverty, illiteracy, famine, corruption, environmental disasters, terrorism and health crises), the ethical, legal and socio-cultural implications of AI in Africa have largely been ignored. To ensure that Africans benefit from the attendant gains of AI, its ethical, legal and socio-cultural impacts need to be robustly considered and mitigated.

On the global level, a number of national, regional and international bodies, think-tanks, research institutions and private companies have developed or are in the process of developing ethical principles and guidelines for AI (Jobin et al., 2019; Ulnicane et al., 2021). These emerging principles, such as transparency, justice and fairness, non-maleficence, responsibility and privacy, that shape the global AI ethics discourse are informed by ethical perspectives and traditions from Western Europe, North America and East Asia (Gupta and Heath, 2020). Ethical narratives, perceptions and principles from the Global South, particularly Africa, are glaringly missing from the global discussion of AI ethics. There is a general belief that socio-cultural and political contexts shape expectations of AI and the challenges and risks it poses. It is therefore safe to assume, as Hagerty and Rubinov (2019) suggest, that AI ethics concepts such as ‘bias’, ‘human rights’, ‘privacy’, ‘justice’, ‘solidarity’, ‘trust’, ‘transparency’, ‘openness’ and ‘fairness’ mean different things to different people. The meaning and scope of these concepts emerge from the cultural contexts in which they are discussed. Citing the example of Nordic AI policies, Robinson (2020) notes the fundamental influence cultural values have on the way these concepts are conceptualised in national and regional policies. As he points out, cultural values contribute to value-laden technology policies in ways that can address societal concerns and interests that differ from place to place. This is at the heart of responsible AI: the idea of developing AI systems that not only comply with laws (including human rights provisions) but are also socially and culturally sensitive, acceptable and ethically responsible.

Indeed, embedding cultural values and beliefs into the development and implementation of AI policies and strategies is an imperative for both AI developers and policymakers. People’s contextual understanding of reality must be represented in the design and implementation of the technology to improve acceptability. AI development and use in Africa needs to be sensitive to African cultural values, beliefs and ethical principles, which are currently missing from the global discussion on AI ethics and guidelines. As AI continues to grow in Africa, there is a risk of alienating those for whom these services are meant if principles and values from the Global North are imposed on them. While African societies cannot be described as monolithic, African people share common values, steeped in rich ethical traditions and described differently yet similarly across many communities, that can shape AI development and governance.

To contribute to this discussion, this book presents cutting-edge research and insights on the current challenges and prospects of developing Responsible AI in Africa from both African and world-leading scholars in AI ethics. The contributions evaluate the importance of contextual values and principles for the development of effective AI and its ethics, governance and strategy for Africa. The book offers a much-needed African AI narrative that is missing in the global AI ethics and governance discourse. It contains original contributions on the current state of the art, challenges, prospects, and the meaning and scope of Responsible AI in Africa. The book advances our understanding of some of the specific challenges and concerns AI raises in Africa and provides insights on the African ethical foundations that can help mitigate those concerns as well as ensure that AI is developed to meet societal hopes, expectations and needs.

AI in Africa

As noted by Schwab (2016), AI is a significant component of the fourth industrial revolution that will lead to fundamental changes in the way we live, work and relate to one another. PricewaterhouseCoopers’ ‘Sizing the Prize’ report estimates that by 2030, AI technologies could add $15.7 trillion (14%) to the global economy, comprising productivity gains of about $6.6 trillion and consumption-side effects of $9.1 trillion (PricewaterhouseCoopers, 2017). The report also shows that although AI is at an early stage of development, the AI markets in Europe, North America and China are more advanced than those in other regions. To put this into perspective, the estimated gains for Africa, Oceania and low-income Asian markets are around $1.2 trillion, compared with about $7.0 trillion for China, $3.7 trillion for North America and $1.8 trillion for Northern Europe. These figures indicate that in Africa, AI development and deployment are still at an early stage and face a number of challenges on the way to becoming a transformative force in society.

However, AI promises to bring about fundamental socio-cultural changes in Africa, including in areas such as political activity, poverty, environmental sustainability, transportation, agriculture, health care, education, financial transactions and religious and traditional belief systems. Many of these AI systems are no longer distant dreams but are becoming a reality in Africa, albeit mainly driven by companies with roots in the Global North. In addition to the big technology companies establishing operations in Africa, home-grown experts are increasingly establishing technology spaces similar to Silicon Valley in the US and Silicon Wadi in Israel. These tech spaces are aptly named: ‘Silicon Savannah’ in Kenya, ‘Sheba Valley’ in Ethiopia and ‘Yabacon Valley’ in Nigeria. Together with African networks (such as the Deep Learning Indaba and the Responsible AI Network—Africa), local AI start-ups and local stakeholders (including centres of higher education, governments and the broader AI community), these tech spaces are fostering a growing ecosystem aimed at developing AI systems that are sensitive to African interests, concerns and culture.

Therefore, AI as a tool or system that performs a specific intelligent task (otherwise known as artificial narrow intelligence, ANI) is growing and thriving in Africa. However, despite the great benefits these AI systems promise for Africa, there is an appreciation that it is critical to ensure the values and needs of Africa are considered in their design and implementation. There are also substantial socio-cultural and organisational challenges that undermine the adoption and implementation of AI across the continent, including a lack of digital infrastructure, inadequate education, data, public policies and funding (Kiemde and Kora, 2020, 2021). Thus, for Africa to begin to capitalise on the opportunities of AI, there needs to be cooperation between African stakeholders as well as the establishment of an enabling environment for AI to thrive. This includes structural reform to support innovation and the development of effective policies and regulations for digital growth.

Responsible AI

Responsibility denotes accountability and having control and authority for or over something. It is an important aspect that needs to be taken seriously in any technology’s design, development, implementation and eventual mainstream buy-in. As such, adoption, adaptation, access and use should also be accounted for in Responsible AI. To this end, there ought to be responsibility in AI, more so when it comes to how it is contextualised and applied in an African context. The question therefore needs to be asked what Responsible AI means in the Global South, with particular respect to the African context. This is especially so given that AI, like many other technologies, is a Western import, designed and developed mainly with Western values, yet the technology is expected to be adopted and used in much the same way as in the Global North. This is despite research showing that AI is not neutral, suggesting that its use, as well as the attendant social and ethical considerations, will differ depending on geographical locale, cultural, social and political norms, and economic standing. Because technologies like AI bring their own challenges that call for a considerable amount of responsibility, it becomes imperative to understand how Africa is addressing the social and ethical challenges brought to light by the application of AI, or the desire to apply it. To this end, it becomes necessary to understand what Responsible AI means for Africa, how it is considered within the context of the ethical challenges that result from its potential adoption and use, and indeed whether Responsible AI is considered at all. In the first instance, Wakunuma et al. (2021) call for a reconceptualisation of the notion of responsible innovation (RI), which covers terms like Responsible AI, because these notions have been developed in the Global North with little reference to what they may mean in the Global South. The authors argue that RI should take into account diverse RI practices that may depend on community initiatives as well as indigenous knowledge and cultural values. In similar terms, Carman and Rosman (2021) posit that AI, and therefore its ethical considerations, should be compatible with the societal values within which it operates. This is a clear recognition of the fact that as societal value systems differ, so too will the ethical concerns, and the solutions to them, that pertain to different societies.

However, solutions to ethical concerns will be challenging to come by if, as Gwagwa et al. (2021) have noted, Africa still scores very low in AI readiness and will therefore need to depend on continued support from international partners and technology firms. This dependency does not foster confidence in finding ethical solutions when technology accessibility and implementation are premised on support from others. It can also hinder the development of appropriate policies that speak to Responsible African AI, and risks embedding values developed by the very international technology firms on which Africa depends. Perhaps that is why there are very few African countries with AI policies that robustly address ethical concerns. Further, the scarcity of African AI experts on the international AI stage points to a shortage of AI skills, a lack of diversity among those with such skills and a lack of the financial resources needed to accelerate AI development; none of this helps build the robust understanding and awareness of AI’s ethical concerns that can enable Responsible AI in Africa.

Global AI Ethics: African Perspectives

In their comprehensive and robust review of 84 guidelines on ethical AI published around the world, Jobin et al. (2019) identified 11 overarching ethical principles: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity and solidarity. However, the authors admitted that a further thematic analysis revealed ‘‘significant semantic and conceptual divergences in both how the eleven ethical principles are interpreted and the specific recommendations or areas of concern derived from each’’ (ibid., p. 7). It is important to note that none of the guidelines reviewed was developed in or for African contexts. This points to the fact that the global AI ethics debate is being shaped without Africa in mind. The underlying moral traditions behind the ethical principles shaping this debate therefore emerge from non-African contexts, while the AI applications and tools will potentially be used in African contexts. Since Africa does not lack moral traditions or ethical principles worthy of being considered in the global AI ethics debate, the continued global discourse on Responsible or Ethical AI without perspectives from Africa amounts to epistemic injustice. Epistemic injustice refers to unfair discrimination against a person’s capacity as a knower (Byskov, 2021). Africa has well-established philosophical and cultural traditions that can provide unique perspectives on the identified ethical principles for the design, development and application of AI.

While the rationale behind the lack of African perspectives in the global AI ethics debate is not the focus here, this book serves as a counter to this epistemic injustice and makes a case for why and how African voices, ethics, interests, visions, concerns, expectations and fears should become part of the increasing global discussion on Responsible AI. For AI to be sensitive to African socio-cultural contexts, it is important to consider African perspectives. AI systems are designed to solve problems within contexts. The values, interests and moral traditions of these contexts need to be factored into the design and deployment of any AI technology. Therefore, global Responsible AI frameworks that can make AI align with diverse societal needs, concerns and interests are needed. African perspectives are and should be critical components of these frameworks. There are two major implications of this. First, it will mean the development of AI applications that respond to African needs, expectations, interests, values and beliefs. Second, it will contribute to epistemic justice in the global AI ethics discourse.

Structure of Volume

In eight main chapters, this book explores the concept of Responsible AI in Africa. In this introductory chapter, the editors introduce the concept of Responsible AI in the African context, highlight the lack of African contexts, voices, interests and values in the global discussion of AI ethics and call for a reconsideration of the Responsible AI landscape. Following this, Ruttkamp-Bloem starts the book off by making a case for actionable AI ethics in Africa driven by dynamic and epistemically just ethical systems. She focuses on the AI ethics policy environment in Africa and concludes that the fast-changing nature of AI technology requires a dynamic AI ethics policy ecosystem characterised by engagement with diverse stakeholders from different backgrounds, interests and values. This contribution highlights the importance of considering African contexts and values (particularly the communitarian concepts of personhood and interconnectedness within a community) in developing actionable AI ethics that ensures trust, social acceptability and cultural sensitivity. It suggests that culture should be the global calculus for AI ethics, both as a source for AI ethics and as a means of translating it into contexts that communities can more readily relate to.

Responsible AI is, however, necessitated by identifiable challenges to AI design and implementation, and these challenges differ across contexts and disciplines. In the third chapter, Okolo et al. highlight the challenges and opportunities AI presents to Africa. Their contribution provides detailed information on the principles of Responsible AI and empirically sound evidence of the landscape of AI in Africa. The chapter also raises concerns about the increasingly aggressive presence of big tech companies in Africa, particularly Chinese companies, touching on the power imbalance in the AI ecosystem between the Global North and the Global South. It also provides recommendations for improving Responsible AI in Africa.

Chapter four focuses on identifying specific ethical perspectives around the possible deployment of an AI system in Kenya. Kwanya presents perspectives on ethical concerns related to the possible integration of co-bots in workplaces, exploring the perceptions of data scientists in Kenya on ethical issues that can affect the acceptability of co-bots in the workplace. Kwanya’s contribution highlights specific socio-cultural concerns, fears and expectations of AI in African societal contexts, albeit focused on Kenya.

In chapter five, Abejide Ade-Ibijola and Chinedu Okonkwo unpack the emerging challenges facing the design and adoption of AI in Africa, which stem from the continent’s unique political and socio-cultural contexts. Their contribution not only highlights challenges to the widespread design and adoption of AI in Africa, such as the lack of a structured data ecosystem, skills acquisition, relevant policies and ethics, and insufficient infrastructure; it also provides recommendations for addressing these challenges.

Chapter six touches on the issues of AI and gender. Borokini et al. present critical perspectives on the use of gendered chatbots in commercial banks in Nigeria. Through an analysis of identifiable features of chatbots used by financial institutions in Nigeria, the chapter shows that the majority of available conversational agents are gendered to appear female. The anthropomorphic projection of human features and characteristics onto these AI applications reinforces gender stereotypes, with critical implications for human behaviour. The authors also point out that the increasing use of chatbots raises crucial concerns not only about gender equality and possible biases against women, but also about the future of work in a society where the unemployment rate is high. Most importantly, this chapter provides recommendations for AI designers on how chatbots can be designed in ways that subvert stereotypes, and for policymakers on how to develop policies for gender-inclusive AI design in Nigeria in particular and Africa in general.

In chapter seven, Stahl et al. present AI policy as a response to the need for AI ethics in Africa. Their contribution is based on an analysis of the AI strategies and initiatives available in North Africa, exploring how ethical issues are framed and addressed in North African AI strategies. The chapter also highlights gaps and opportunities in the current AI strategy landscape and suggests how AI policies in Africa should address ethical issues in line with African socio-cultural values.

Further to this, Eke et al. provide critical perspectives on how the future of Responsible AI in Africa can be shaped. This chapter previews the current and future landscape of AI design and deployment in Africa and highlights the unfair neglect of African socio-cultural contexts in the global AI ethics discourse despite the concerns AI raises in African contexts. To achieve globally just, fair and transparent AI, the authors identify the need to integrate African contexts, interests, values, fears, hopes, expectations and aspirations into AI. They go further to map out what Africans need to do to achieve Responsible AI within and outside of Africa.

Finally, Virginia Dignum discusses the rationale, scope, nature and limitations of current global efforts in Responsible AI and governance. The chapter begins with a reasoned conceptual clarification and demystification of AI, following the well-documented hype that surrounds it. It provides insights into the lessons Responsible AI in Africa can learn from various national and regional AI governance initiatives. Most importantly, Dignum suggests that Responsible AI can benefit from the social perspectives embedded in African philosophies such as Ubuntu. In her words, these social perspectives can ‘‘complement the currently predominant individualistic view of AI systems, to one that acknowledges and incorporates the collective, societal’’.

Conclusion

Overall, this book highlights the need for increased discussion and application of African contextual narratives, especially ethics, in AI. It serves as a call for local and international AI stakeholders and professionals to be aware of African narratives and to consider them in the design, development and deployment of AI applications in Africa. African moral traditions can inform decision-making in AI applications (especially those deployed in Africa). To achieve fair, just and transparent AI, it is high time these moral traditions and contexts were considered in the global AI ethics discourse. Ethical AI cannot be achieved in Africa or around the world without consideration of African socio-cultural and ethical contexts or the inclusion of African voices in the global Responsible AI discourse. Failing to consider African contexts and narratives will only lead to ineffective or misguided applications that will neither benefit African societies nor promote human flourishing in Africa. This book therefore introduces the Responsible AI in Africa discourse in a way that will facilitate culturally sensitive and inclusive AI systems that can improve rather than worsen African societal situations.