1 Introduction

Our world faces a crisis as yet unperceived by those possessing the power to make great decisions for good or evil. The unleashed power of the atom has changed everything save our modes of thinking, and thus we drift toward unparalleled catastrophe. We scientists who unleashed this immense power have an overwhelming responsibility in this world life-and-death struggle to harness the atom for the benefit of mankind and not for humanity’s destruction.

Albert Einstein, as quoted in Nathan and Norden (1960, Chapter XII)

In these strong words, Albert Einstein formulated the need for scientists to take social responsibility. Digital science and technology are certainly very different from atomic science, but through the ongoing digital transformation of many sectors of society and government, their societal impacts are no less vast, deep, and fundamental. And in part these impacts are unknown, unclear, or still hidden in the (near) future. In line with the Digital Humanism Manifesto (DIGHUM, 2019, 2023; this volume, chapter by Hannes Werthner), this calls for a critical rethinking of the social responsibilities of scientists and technologists in today’s Digital Age.

This chapter undertakes to do so from a historical perspective. We first discuss the science, technology, and social responsibility issues of scientists and technologists as they historically appeared in the Atomic Age—roughly from 1938, with the discovery of nuclear fission and the possibility of the atomic bomb, to the 1980s, when the Cold War gradually came to a (temporary) end. We summarize the discussions on the social responsibility of scientists and engineers that raged in earlier times, and we show that many of them still hold relevance today in the Digital Age.

Subsequently, we briefly survey the many and diverse societal impacts of the ongoing digital transformation, using the United Nations’ Sustainable Development Goals (SDGs) as an organizing framework. The upshot is that digital technology development and research have major societal impacts, both good and bad, and that, as a consequence, there undeniably exists a social responsibility of scientists and technologists, if only because we all are also citizens of the globally connected world we have helped create. The discussions of these issues in the Digital Age are not entirely new but build on, among others, the Atomic Age discussions. There are clear historical parallels here, for example, regarding data openness versus data secrecy policies. Scientists and technologists can be a beneficial force for the betterment of society, but this requires critical thinking, societal engagement, and actively taking responsibility.

2 On the Social Responsibilities of Scientists in the Atomic Age

Einstein wrote the above in 1946, sometimes called Year 1 of the Atomic Age. The fateful consequences and the global danger of the development of the atomic bomb were becoming increasingly clear and undeniable. The United Nations (UN) was in its formative years, and hopes were placed on the UN to maintain international peace and prevent a future war. Einstein wrote in his capacity as chair of the newly formed Emergency Committee of Atomic Scientists. The passage is the opening paragraph of a telegram that was widely reprinted in the press, for example, in the New York Times of May 25, 1946. Many meetings, publications, and media events followed (see, e.g., Fig. 1), almost on a daily basis, whereby the urgency of the matter was summarized as One World or None!

Fig. 1

The world-famous “Doomsday Clock” as it first appeared in the Bulletin of the Atomic Scientists in June 1947. The source, including an interesting background article on its design by Martyl Langsdorf, is available at https://thebulletin.org/2013/04/science-art-and-the-legacy-of-martyl/. Used with permission of the Bulletin of the Atomic Scientists

Many scientists were drawn into engagement with societal issues by the profound human and moral shock that the detonation of the atomic bomb created. That shock was famously expressed much later, in a 1965 interview, by Robert Oppenheimer, the scientific director of the Manhattan Project that produced the first atomic bomb in Los Alamos, New Mexico, USA, with a reference to the Bhagavad-Gita: “Now I am become Death, the destroyer of worlds.”

As for Einstein, he seems to have viewed taking social responsibility as a scientist as rather self-evident and obvious. But that was not necessarily a mainstream view among his fellow physicists: many struggled severely with these issues (including Robert Oppenheimer), and many were influenced and even terrorized by Cold War political pressures that made it highly risky for them to speak their minds freely.

One prominent figure addressing these issues in speech and writing was Bertrand Russell, the analytic philosopher known in particular for his work on mathematics and formal logic. He was involved in the United Kingdom’s Campaign for Nuclear Disarmament (CND), the originator of the ban-the-bomb sign that acquired worldwide fame as the peace symbol (Fig. 2). It is worth quoting Russell at length on the issue of the social responsibility of the scientist (Russell, 1960):

Fig. 2

The internationally famous peace symbol (CND, UK, 1958). For the symbol and its history, see Campaign for Nuclear Disarmament, https://cnduk.org/the-symbol/.

Science, ever since it first existed, has had important effects in matters that lie outside the purview of pure science. Men of science have differed as to their responsibility for such effects. Some have said that the function of the scientist in society is to supply knowledge, and that he need not concern himself with the use to which this knowledge is put. I do not think that this view is tenable, especially in our age. The scientist is also a citizen; and citizens who have any special skill have a public duty to see, as far as they can, that their skill is utilized in accordance with the public interest.

On affecting public opinion, Russell writes: “Modern democracy and modern methods of publicity have made the problem of affecting public opinion quite different from what it used to be. The knowledge that the public possesses on any important issue is derived from vast and powerful organizations: the press, radio, and, above all, television. The knowledge that governments possess is more limited. They are too busy to search out the facts for themselves, and consequently they know only what their underlings think good for them unless there is such a powerful movement in a different sense that politicians cannot ignore it. Facts which ought to guide the decisions of statesmen—for instance, as to the possible lethal qualities of fallout—do not acquire their due importance if they remain buried in scientific journals. They acquire their due importance only when they become known to so many voters that they affect the course of the elections.”

We recall that these lines were written long ago, in the 1950s, at the time of television’s mass breakthrough. Now, in the Digital Age, there appears to be a new stage, in which social media, fake news, deepfakes, and generative artificial intelligence (AI) have exacerbated the problems of providing sound information to the public (DIGHUM, 2023; this volume, chapter by Peter Knees and Julia Neidhardt, and chapter by Ricardo Baeza-Yates).

As a further line of action, Bertrand Russell says that scientists “can suggest and urge in many ways the value of those branches of science of which the important practical uses are beneficial and not harmful. Consider what might be done if the money at present spent on armaments were spent on increasing and distributing the food supply of the world and diminishing the population pressure.”

And Russell ends with: “As the world becomes more technically unified, life in an ivory tower becomes increasingly impossible. (…) We have it in our power to make a good world; and, therefore, with whatever labor and risk, we must make it” (Russell, 1960; the article is the text of an address delivered on September 24, 1959, in London at a meeting of British scientists convened by the Campaign for Nuclear Disarmament, cf. Fig. 2).

Many other noted scientists also spoke and wrote on these matters, for example, the philosopher of science Karl Popper. In a talk delivered in 1968 at the International Congress of Philosophy in Vienna, in a special session on “Science and Ethics,” he proposed to create a modern form of the Hippocratic Oath for scientists. Popper furthermore pointed out a role for multidisciplinary scientific research regarding societal impacts: “The problem of the unintended consequences of our actions, consequences which are not only unintended but often very difficult to foresee, is the fundamental problem of the social scientist. Since the natural scientist has become inextricably involved in the application of science, he, too, should consider it one of his special responsibilities to foresee as far as possible the unintended consequences of his work and to draw attention, from the very beginning, to those we should strive to avoid” (Popper, 1971).

Einstein relentlessly continued his societal activities until his death. One week before he died (on April 18, 1955), he signed what became known as the Russell-Einstein Manifesto. It was published on July 9, 1955, and it was signed, apart from Einstein and Russell, by several prominent Nobel Prize-winning figures such as Frédéric Joliot-Curie, the husband of Irène Joliot-Curie and son-in-law of Marie Skłodowska-Curie. One of its main points was a call for a congress to be convened by “scientists of the world and the general public,” urging “governments of the world (…) to find peaceful means for the settlement of all matters of dispute between them.” Famously, the Manifesto said, “Remember your humanity and forget the rest.”

The Russell-Einstein Manifesto’s call to action did have effect. It led to a series of international conferences from 1957 onward, known as the Pugwash Conferences. A leading organizing figure was Joseph Rotblat, a nuclear physicist from Poland. A major result was to bring together leading scholars from many countries to discuss ways to temper the arms race. Importantly, Pugwash served as one of the very few lines of open communication between the United States, Europe, and the Soviet Union during the Cold War. These Pugwash Conferences turned out to be influential in their impact on policy. According to Holcomb B. Noble’s obituary of Joseph Rotblat in the New York Times of September 2, 2005, they are credited with laying the groundwork for the Partial Test Ban Treaty of 1963, the Nonproliferation Treaty of 1968, the Anti-Ballistic Missile Treaty of 1972, the Biological Weapons Convention of 1972, and the Chemical Weapons Convention of 1993. Joseph Rotblat and Pugwash received the Nobel Prize for Peace in 1995.

Rotblat’s personal story is both moving and illuminating (Veys, 2013), in particular if we view it from the angle of the social responsibilities of scientists and technologists. He revealed a small part of his personal history many years later in an article in the Bulletin of the Atomic Scientists (Rotblat, 1985). Rotblat was a nuclear physicist working on the inelastic scattering of neutrons against heavy nuclei (his PhD subject in 1950, Liverpool, UK), at first in the Radiological Laboratory in Warsaw, Poland, in 1938/1939. In late 1938, in Otto Hahn’s laboratory in Berlin, Germany, experimental phenomena were observed that were only later interpreted as evidence of nuclear fission of uranium, with excess production of neutrons. It was Lise Meitner (who had previously collaborated with Otto Hahn in Berlin but had fled to Sweden) who, with Otto Frisch (from the institute of Niels Bohr, Copenhagen, Denmark), supplied the correct interpretation of the Berlin experiments and other measurements. Thus, they discovered a new nuclear reaction: the fission of uranium as a result of neutron capture (Meitner & Frisch, 1939).

The potential consequences and societal impacts were immediately and extensively discussed internationally and openly in a variety of scientific journals, by scientists from Germany, France, Sweden, Denmark, Britain, and the United States, among others. This open international scientific discussion on the societal impacts of nuclear reaction physics remarkably took place over scarcely a year in 1938–1939 and ended when Hitler invaded Poland. In anachronistic terms, one might say that the discussion on the societal impacts of nuclear reactions went viral. For example, an article by S. Flügge from the Kaiser Wilhelm Institut für Chemie in Berlin, published on June 9, 1939, in German (translated title: Can the energy content of atomic nuclei be made technologically useful?), already pointed to the possibility of the atomic bomb as well as to potential civil uses of nuclear energy (Flügge, 1939). These discussions were picked up by many scientists in many countries, including Joseph Rotblat, and of course many scientists (including Albert Einstein) were alerted to the danger of a Hitler atomic bomb. Rotblat writes in reference to Lise Meitner’s discovery (Rotblat, 1985): “From this discovery it was a fairly simple intellectual exercise to envisage a divergent chain reaction with a vast release of energy. The logical sequel was that if this energy were released in a very short time it would result in an explosion of unprecedented power.”
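Rotblat’s “fairly simple intellectual exercise” can be made concrete with textbook round numbers (a back-of-the-envelope illustration of ours, not Rotblat’s own calculation). In a divergent chain reaction, each fission triggers on average k > 1 further fissions, so the fission count grows exponentially with the generation number n:

```latex
% Divergent chain reaction with multiplication factor k > 1:
N_n = N_0 \, k^{\,n}
% With k \approx 2, after n = 80 generations: 2^{80} \approx 1.2 \times 10^{24} fissions.
% At roughly 200 MeV (\approx 3.2 \times 10^{-11} \mathrm{J}) released per fission:
E \approx 1.2 \times 10^{24} \times 3.2 \times 10^{-11}\,\mathrm{J}
  \approx 4 \times 10^{13}\,\mathrm{J} \quad (\text{on the order of 10 kilotons of TNT})
% With a neutron generation time of about 10 ns, those 80 generations take
% well under a microsecond: energy released "in a very short time" indeed.
```

This is exactly the sequel Rotblat describes: once fission with excess neutron production was established, no further conceptual breakthrough was needed to envisage an explosion of unprecedented power.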

Joseph Rotblat then moved from Poland to the United Kingdom, to the group of James Chadwick, the discoverer of the neutron. He subsequently joined the US Manhattan Project in Los Alamos, New Mexico, in order to counter the possibility that Hitler would create an atomic bomb and use it before the United States and its allies had one. However, it became clear in 1944 that Nazi Germany had dropped its atomic bomb program and would not be able to produce one. That changed the whole picture, opening up the question of why the atomic bomb was needed in the first place.

According to Rotblat (1985), at a dinner at the house of James Chadwick (Rotblat’s Manhattan Project boss at the time), General Leslie Groves, the US military lead in charge of the Manhattan Project, said in March 1944: “You realize, of course, that the whole purpose of this project is to subdue our main enemy, the Russians.” This came as a great personal shock to Rotblat, as he describes it, since his commitment had been to ensure that Hitler would not get the atomic bomb first. He therefore decided to leave the Manhattan Project, and in fact he was the first and only one to do so. Beyond his personal story and dilemmas, Rotblat’s (1985) article is full of information on how other Los Alamos physicists looked at this situation and tried to deal with the ensuing dilemmas.

3 Fast Forward to the Digital Age

3.1 General Elements of the Social Responsibility of Scientists and Technologists

Regrettably, as current geopolitics shows, the Atomic Age is not simply history, although today the Digital Age stands in the foreground. This is a clearly different era in terms of the societal issues we have to deal with (DIGHUM, 2019). However, there are some pertinent general insights to be gained if we condense and summarize the vast historical writings on the social responsibilities of scientists and technologists.

First of all, scientists and technologists are citizens—like everyone else. Citizenship—community, local, national, regional, global—comes with a moral obligation to strive for the benefit of humankind. There is no way to escape this responsibility—even when some attempt to hide in the ivory tower.

Second, scientists and technologists possess extensive and special knowledge in their field of expertise. This expert knowledge gives them a distinct position in public societal debate, also as seen in the public eye. In many cases, this knowledge position brings some influence and, hence, responsibility—wanted or not. Accordingly, there is a responsibility to share this knowledge with society properly and in fitting ways. Many different avenues are open to scientists and technologists to do so:

(a) Education: educate the general public as well as policymakers and politicians on the societal impacts of digital technologies that need to be addressed.

(b) Research: investigate and explain to the general public and society at large what will be or might be the (also unintended) consequences or impacts of digital technologies in the (near) future.

(c) Application: urge for and, insofar as possible, work on beneficial applications of technology and counteract harmful ones.

(d) Policy: from a sound knowledge base, contribute to formulating sound and effective policies regarding the application and governance of advanced technologies.

Admittedly, the challenges of the Digital Age are very different from those of the Atomic Age. It is encouraging, however, that worldwide there is considerable activity among scientists, technologists, and writers from many different corners along the abovementioned lines (a)–(d). This activity is currently scattered, but it is there and should be brought better to light (which is obviously also a purpose of the present volume). There is, one might say, already a serious society-oriented digital scholarship-with-citizenship.

3.2 Societal Impacts of Digital Technologies and the Sustainable Development Goals

In order to get a more general picture of the societal impacts and ethical issues associated with today’s digital technologies, let us take the United Nations’ Sustainable Development Goals (SDGs) as a starting point and framework.

The SDGs were adopted by the United Nations in 2015, and they formulate goals for the betterment of the world to be achieved by 2030 (United Nations, n.d.; see Fig. 3). These goals cover many areas and issues of society and the planet, and they reflect the widest possible international consensus on values and goals for the benefit of humanity.

Fig. 3

The United Nations’ Sustainable Development Goals (SDGs)

The following brief highlights of this digital scholarship-with-citizenship just scratch the surface and are incomplete, but they are indicative (for more, see the Learning Resources and References at the end of this chapter).

It is widely agreed that digital technologies bring important (potential) benefits to society. They increase the possibilities and ease for people to communicate and to connect with each other in various ways, and they do so at a historically unprecedented global scale and speed. Digital technologies thus have the potential to be significant enablers of important and shared human values and activities. But according to many recent critical studies, they also come with (sometimes unexpected) societal impacts that are harmful to many ordinary people in the world and that are currently not properly managed, controlled, or governed. A few examples from the recent literature follow below.

Inequalities and Bias

SDG-5 and SDG-10 refer, respectively, to gender equality and to reducing inequalities in society. There are by now many carefully researched studies that detail how digital technologies such as search engines, algorithms, and AIs embody significant biases, to the effect that they perpetuate or even amplify racial and other prejudices and so reinforce racism (Noble, 2018). One of the many examples Safiya Noble gives is Google’s search engine producing images of black people tagged as “gorillas.” Virginia Eubanks (2017) discusses a long list of cases showing how forms of automated decision-making and algorithmic governance go terribly wrong in ways that make the lives of especially already poor and disadvantaged people even more miserable. As an example, the state of Indiana, USA, denied one million applications for healthcare, food stamps, and financial benefits in three years, because a new computer system interpreted any application mistake as “failure to cooperate.”

A related horror story comes from the Netherlands. Under the banner of fraud detection and prevention, the Dutch tax authorities for many years operated a very harsh and disproportionate system of repayments and large financial penalties on childcare allowances upon the slightest suspicion of fraud. The Dutch tax authorities literally chased people using various systems in ways that were downright discriminatory (non-Dutch-sounding family names, a second nationality, etc. were labeled as “fraud-risk” factors). They also engaged in what boils down to ethnic profiling and violated privacy rules. Small administrative errors by citizens were taken as evidence of fraud (note that Dutch legislation is complex even for native Dutch residents; people with an immigration background often have a limited command of the Dutch language and are therefore more prone to make such “administrative” mistakes). The effects were devastating. Tens of thousands of people were unjustly and without any real basis accused of fraud and subsequently severely harmed. Many ended up in serious financial debt and lost jobs and/or homes, marriages broke up, and mental health problems occurred. An estimated 70,000 children were the victims of this; families were even broken up, and an estimated 1675 or more children were removed from their homes and taken out of the custody of their parents. As Hadwick and Lan (2021) conclude: “The Dutch childcare allowance scandal or toeslagen affaire unveiled the ways in which the Dutch tax administration made use of artificial intelligence algorithms to discriminate against and violate the rights of welfare recipients.” In June 2020, the Dutch then-Prime Minister Mark Rutte was forced, certainly not wholeheartedly, to admit that there is “institutional racism” in the Netherlands. Quite a few politicians prefer to link this to the colonial past only (as if it were over and done with) while avoiding a look at the digital present.
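The failure pattern described in these cases can be made tangible with a deliberately oversimplified sketch (all field names, rules, and thresholds below are hypothetical illustrations, not the actual Dutch or Indiana systems):

```python
from dataclasses import dataclass

@dataclass
class Application:
    has_form_error: bool      # e.g., a missing signature or a small typo
    second_nationality: bool  # a discriminatory proxy feature

def risk_score(app: Application) -> int:
    """Toy risk score mimicking the failure pattern described above."""
    score = 0
    if app.has_form_error:
        score += 2  # any administrative mistake counts as "failure to cooperate"
    if app.second_nationality:
        score += 2  # proxy feature: this is where disparate impact enters
    return score

def decide(app: Application) -> str:
    # Hard automated cutoff: no human review, no explanation, no appeal path.
    return "DENY AND RECLAIM ALLOWANCE" if risk_score(app) >= 2 else "APPROVE"

# A single small form error, or the proxy feature alone, triggers full denial:
print(decide(Application(has_form_error=True, second_nationality=False)))
print(decide(Application(has_form_error=False, second_nationality=True)))
```

The harm arises from the combination: a discriminatory proxy feature, a rule equating any mistake with fraud, and a hard automated cutoff without human review or appeal. This is precisely the systemic, non-“glitch” character that Broussard points to below.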

Meredith Broussard (2023) makes two important general points here. First, such grave issues are not “glitches,” that is, accidental, regrettable, but minor mistakes that can easily be corrected. They are inherent and systemic. Second, there is what she calls “technochauvinism,” the belief that human problems can be solved by technological solutions alone, for example, that computer systems are better, faster, and more neutral in decision-making than humans. This naïve belief in digital systems leads authorities to “outsource,” so to speak, decision-making to digital systems, thus abdicating their own responsibility for decisions. In addition, digital systems also functioned (as in the Dutch case) to stonewall citizens: they were used as a wall shielding authorities from citizens seeking appeal, explanation, or proof. In other words, digital systems were (and are) used to make authorities immune to citizens contesting their decisions.

This technochauvinist belief is furthermore used to sell all kinds of technopromises so as to capture billions in taxpayer money (even including space travel and other sci-fi scenarios), yet it is not based on relevant digital knowledge. There is a role here for socially responsible scientists and technologists: to investigate, and to educate the wider public on, how digital technologies really work in society.

Decent Work and Economic Growth

The United Nations’ SDG-8 focuses on decent work and economic growth. Terms such as “digital systems,” “computing,” and “automation” give the impression that it is computers and machines that do the work. However, Mary Gray and Siddharth Suri (2019), Sarah Roberts (2019), and Kate Crawford (2021) all point out that an enormous human workforce is needed to keep the digital world running (see also the chapters on Work in a New World in this volume). But it is work that is hardly visible, very fragmented, and hardly organized, often performed under bad conditions by contract workers with poor pay and little financial security. Roberts (2019) studied the monotonous, repetitive, stressful, and even psychologically harmful work of content moderators (an estimated 100,000 globally), hidden behind the screens of commercial social media, by interviewing many of them in different countries. Gray and Suri (2019) call this invisible digital work “ghost work” and speak of a growing “global underclass” without labor protection laws, without health and other employee benefits, and with often below-minimum pay. If we liken the world to a zoo, it is no exaggeration to say that humans are fed in large numbers to the digital platform machines. One wonders whether the big tech platforms achieve the SDG-8 goals on decent work in their own realm.

War and Peace: Data Secrecy Versus Openness

Perhaps surprisingly, an analysis of SDG-16, Peace, Justice, and Strong Institutions, unavoidably leads us into the consideration of (big) data management policies. In his Restricted Data, Alex Wellerstein (2021) analyzes in painstaking detail the data secrecy versus openness debates, arguments, and policies of the Atomic Age. All in all, scientists were generally in favor of (international) openness, before and during World War II as well as during the Cold War. Einstein is one of many examples here, and he wrote on it many times. As another example, Robert Oppenheimer also strongly leaned toward openness, and this made him a political target in the anti-communist “un-American” frenzy of the 1950s (an aspect also portrayed at length in Christopher Nolan’s 2023 film Oppenheimer).

Wellerstein (2021) points out, in his concluding chapter, that data secrecy was and is not confined to the nuclear domain but is much more encompassing. He also discusses how it extends to the present day, referring (among others) to current developments in AI. In the Atomic Age, the route went from openness (see above, in the years just before World War II) to data secrecy in the Cold War period. We see a parallel here in the Digital Age. For example, the company that calls itself “OpenAI,” with its generative AI tools (ChatGPT), has recently gone the same route from openness to full data and algorithm opaqueness and secrecy, in a rather sudden conversion from ideologically open to practically closed, apparently (as suggested by press reports and interviews) for reasons of commercial competition as well as (military) government funding.

Concentration, weaponization, and militarization of AI and other digital research currently seem to be going on much more generally, with increasing data and algorithm secrecy, in analogy to the nuclear secrecy of the 1950s. We see only snippets of this now, but they surface fragmentarily yet consistently in news and investigative journalism reports about, for example, AI-supported drones in the Ukraine war. Such developments are certainly not going uncontested, as shown in recent years by the strong Google employee protests against military contracts and by worldwide AI researchers’ petitions against autonomous AI weapons (initiated by the well-known Australia-based AI researcher Toby Walsh). Whatever your personal position is, proper data management policies come very close to, and are often a normal part of, the professional and academic duties of digital researchers in industry, academia, and government. So, in line with Bertrand Russell’s arguments outlined above, there is a clear and present unavoidability of the societal responsibility of digital scientists, researchers, and technologists.

Natural Resources, Energy, and Climate Action

SDG-12 is about responsible consumption and production, and SDG-13 is on climate action. In her Atlas of AI, Kate Crawford (2021) argues that “AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications.” She then documents at length what she calls the “planetary costs” of digital technologies, in terms of the mining and extraction of many rare natural resources, such as rare-earth elements, the ensuing environmental damage, and the enormous energy hunger of digital computing.

As Crawford says: “Minerals are the backbone of AI, but its lifeblood is still electrical energy. Advanced computation is rarely considered in terms of carbon footprints, fossil fuels, and pollution; metaphors like ‘the cloud’ imply something floating and delicate within a natural, green industry. (…)” Crawford quotes Tung-Hui Hu’s A Prehistory of the Cloud: “The cloud is a resource-intensive, extractive technology that converts water and electricity into computational power, leaving a sizable amount of environmental damage that it then displaces from sight.” Addressing this energy-intensive infrastructure has become a major concern. Crawford also refers to studies concluding that training a single large language model produces a carbon footprint equivalent to 125 round-trip flights between New York and Beijing (Crawford, 2021, pp. 41–42). With current developments in AI such as ChatGPT, this can only increase (cf. also Bender et al., 2021). One might summarize this as big data, bigger data, biggest footprint!
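The arithmetic behind such footprint estimates is straightforward, in the spirit of published machine-learning carbon calculators. Below is a minimal sketch; every constant in it is an illustrative assumption, not a measured value for any particular model:

```python
# Back-of-the-envelope CO2 estimate for a hypothetical training run.
# All constants are illustrative assumptions, not measurements.

def training_co2_kg(gpus: int, hours: float,
                    watts_per_gpu: float = 300.0,  # assumed average power draw
                    pue: float = 1.5,              # data-center overhead (PUE)
                    kg_co2_per_kwh: float = 0.4) -> float:  # assumed grid intensity
    """Energy drawn by the accelerators, scaled by data-center overhead,
    converted to kilograms of CO2 via the grid's carbon intensity."""
    energy_kwh = gpus * hours * (watts_per_gpu / 1000.0) * pue
    return energy_kwh * kg_co2_per_kwh

# Hypothetical large run: 1,000 GPUs training for 30 days.
co2_kg = training_co2_kg(gpus=1000, hours=30 * 24)
print(f"{co2_kg / 1000:.0f} tonnes of CO2")  # ~130 tonnes under these assumptions
```

Even with these deliberately round assumptions, a single large run lands at a hundred tonnes of CO2 or more, the order of magnitude that Crawford’s flight comparison evokes.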

In sum, the digital society is not at all limited to the digital realm. It has a very big spillover into “normal” society, with which it is integrating more and more. This process has disruptive and ungoverned societal effects that are not, or not necessarily, beneficial to humankind at large. There is a clear societal responsibility of digital scientists and technologists here concerning how we deal with this.

4 Governance, Public Values, and Fairness in Digital Ecosystems

We have seen above, in an admittedly very sketchy and incomplete way, that digital technologies have societal impacts on many different aspects and sectors of society. But there is a further impact that systemically affects society as a whole, and it may be viewed under the rubric of SDG-16, Peace, Justice, and Strong Institutions, and also SDG-9, Industry, Innovation, and Infrastructure.

From the many public discussions, studies, and writings on the digital society, we can see two big society-systemic trends that are a cause for concern. One is more economic, the other more political, but the two are heavily intertwined:

1. The digitalization of society is accompanied by an enormous economic concentration of capital, wealth, and power in the hands of a few (see, e.g., Zuboff (2019) and many more).

2. The digitalization of society leads to a structural transformation of the public and democratic-political sphere (Habermas, 2022; Van Dijck et al., 2018; Vaidhyanathan, 2021; Nemitz & Pfeffer, 2020; and many more) in ways that distort and skew public debate and deliberation and that make equal access to and participation in democratic decision-making in fact more difficult and less equal.

These two developments are not independent; witness the mere fact that the whims of a single billionaire may decide who is or is not to speak on public affairs in digital social media, or what content is allowed. Digital technologies and the associated concentration of big resources in a few hands are “a clear and present danger” to democracy and, more generally, to human values and freedoms in a shared public world (DIGHUM, 2023). This constitutes a form of weaponization of digital technologies that has the effect of a cluster bomb tearing a shared public world apart.

So a key question on the table is the democratic governance of technology (Feenberg, 2017; Siddarth et al., 2021; this volume, chapter by George Metakides and chapter by Marc Rotenberg). Jairam et al. (2021) usefully make a distinction between a technology and how it is governed or controlled, pointing out that the technology level and the governance level can have very different characteristics. For example, the big tech platforms rely on decentralized network technologies, but their governance level is, in contrast, strongly centralized and even monopolistic (Wieringa & Gordijn, 2023; this volume, chapter by Allison Stanger).

Innovation in digital technologies is today usually described in terms of digital innovation ecosystems in which a variety of parties and stakeholders participate (for a brief overview, see Akkermans et al., 2022) in both competitive and collaborative relationships. A natural question arising from the metaphor of ecosystems is how such multiparty innovation ecosystems can be kept sustainable, fair, and equitable with respect to all parties involved.

This is a governance question, but a very complex one. It is not easy to give a positive definition of what is “fair” or “just.” On the other hand, there are many situations where it is clear that something is unfair or unjust, and a wide consensus exists about that (see, e.g., the SDG-related discussion of digital impacts in Sect. 3.2). Moreover, a number of basic ideas, principles, and desiderata regarding the governance of digital ecosystems recur over and over again in the literature, although their formulations and arrangements vary widely (see, e.g., Siddarth et al., 2021; Jairam et al., 2021; this volume, chapter by Julian Nida-Rümelin, chapter by Guglielmo Tamburrini, chapter by Erich Prem, and chapter by Anna Bon); below we follow the paraphrasing by Akkermans et al. (2022).

(a) Participation. Fair governance ensures the active involvement in the decision-making process of all who are affected and of other parties with an interest at stake. It includes all participants interacting through direct or representative democracy. Participants should be able to do so in an unconstrained and truthful manner, and they should be well informed and organized so as to participate fruitfully and constructively.

(b) Rule of law and equity. All participants have legitimate opportunities to improve or maintain their well-being. Agreed-upon legal rules and frameworks (this volume, chapter by Matthias Kettemann), with underlying democratic principles, are enforced impartially while guaranteeing the rights of people; no participant is above the rule of law.

(c) Effectiveness and efficiency. Fair governance fulfills societal needs by being effective while utilizing the available resources efficiently. Effective governance ensures that the different governance actors meet societal needs; fully utilizing resources, without waste or underutilization, ensures efficient governance.

(d) Transparency. Information on matters that affect participants must be freely available and accessible. The decision-making process is performed in a manner that is clear to all, following rules and regulations. Transparency also includes providing enough relevant information, presented in easy-to-understand forms or media.

(e) Responsiveness. A responsive fair governance structure reacts appropriately and within a reasonable timeframe toward its participants. This responsiveness stimulates participants to take part in the governance process.

(f) Consensus orientation. Fair governance considers the different participants’ viewpoints and interests before decisions are made and implemented. Such governance is called consensus-oriented because it aims to achieve a broad community consensus. In order to reach this wide consensus, a firm mediation structure, without any bias toward particular participants, should be in place.

(g) Accountability. Accountability is defined as responsibility or answerability for one’s actions. Decision-makers, whether internal or external, are responsible to those who are affected by their actions or decisions. These decision-makers are morally or legally bound to clarify and be answerable for the implications of the actions taken on behalf of the community.

Such basic ideas and principles provide some of the groundwork for policymaking and design for fairness of digital ecosystems and their societal and democratic governance (see also in this volume chapter by Clara Neppel).

5 Conclusions

Scientists and technologists do have a social responsibility. This derives from two general factors. The first is (global) citizenship. The second is their position of knowledge and expertise in matters of science and technology and the ensuing position and influence in the public debate.

That a social responsibility exists does not tell us much about how it is to be exercised. This varies greatly, as the stories regarding both the Atomic Age and the Digital Age show. Many avenues are possible and important in what we have called above scholarship-with-citizenship, including education (also with respect to the general public and policymakers), research (into impacts and also unintended consequences), application (pushing for beneficial applications and counteracting harmful ones), and knowledge-based contributions to policymaking.

Even the very cursory analysis within the framework of the United Nations’ Sustainable Development Goals that we have carried out in this chapter reveals that the societal impacts of digital technologies are wide and affect many aspects of society. An important observation here is that many of these impacts are of a society-systemic nature and thus go beyond the individual level of ethics and ethical behavior.

The related social responsibilities are likewise wide and multifaceted and may even seem overwhelming. As an antidote of sorts, there is this call-to-action quote, widely circulating on the Internet and often attributed to Albert Einstein: “The world will not be destroyed by those who do evil, but by those who watch them without doing anything.”

Even if this is true, we do well to keep in mind the wise words of Otto Neurath, a founder of the Vienna Circle. Written in 1921, they became known as Neurath’s boat (Neurath, 1921):

We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom. Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as support. In this way, by using the old beams and driftwood, the ship can be shaped entirely anew, but only by gradual reconstruction.

Discussion Questions for Students and Their Teachers

1. “The world will not be destroyed by those who do evil, but by those who watch them without doing anything.” This is a quote widely circulated on the Web and commonly attributed to Albert Einstein. Investigate what the source of this quote is and whether it is correctly attributed to Einstein. Reflect on how “stories based on fact” come into being in the digital world.

2. This quote has the charm of an aphorism, but is it, in your view, actually valid, and does it make sense? Why (not)?

3. What, in your view, would be the value or usefulness of some sort of modernized Hippocratic Oath (as Popper suggested) for digital scientists and engineers?

4. Are digital technologies value-neutral? See the general discussion of the neutrality issue in Feenberg (2017), and then analyze an example case such as ChatGPT, Uber, or Facebook.

5. Can a (legal, law-compliant) business model be unethical? See the digital platform business model studies in Wieringa and Gordijn (2023), take one of the cases, and relate it to the discussion of fairness in innovation ecosystems in the present chapter.

6. Lise Meitner never received the Nobel Prize, although she was the internationally recognized key person in the discovery of the nuclear fission of uranium, arguably one of the most important discoveries of the twentieth century. Investigate the reasons why.

7. Research project: take one of the United Nations’ Sustainable Development Goals (SDGs) as a thematic focus. Study the societal impacts—positive, negative, mixed—of the digital transformation as related to this SDG, focusing on a specific geographical region (e.g., your own country), political-legal jurisdiction, or societal sector.

Learning Resources for Students

1. From the References below, see in particular (Broussard, 2023; Crawford, 2021; Eubanks, 2017; Feenberg, 2017; Habermas, 2022; Noble, 2018; Siddarth et al., 2021; Vaidhyanathan, 2021; Van Dijck et al., 2018; Wellerstein, 2021; Zuboff, 2019).

2. Grewal, D. S. (2008) Network Power: The Social Dynamics of Globalization. New Haven, CT, USA: Yale University Press.
A deep interdisciplinary study of how different forms of power emerge from social networks, how this shapes a complex globalization in an (also digitally) globally connected world, and the challenges this poses for democratic politics.

3. Rogers, E. M. (2003) Diffusion of Innovations. 5th edn. New York, NY, USA: Simon and Schuster.
The classic book about the social network mechanisms that enable, accelerate, or impede the spread of technological innovations through society; many concepts and terms in common usage in innovation policies today (e.g., early adopter) come from this work.

4. Stanley, J. (2015) How Propaganda Works. Princeton, NJ, USA: Princeton University Press.
A wide-ranging political philosophy work on what propaganda is, how it operates historically as well as today, and how it shapes ideologies that damage liberal democracy and justice.

5. Woolley, S. (2023) Manufacturing Consensus: Understanding Propaganda in the Era of Automation and Anonymity. New Haven, CT, USA: Yale University Press.
A recent study, based on extensive international ethnographic research, of online and computational propaganda, its deceptive effects, and its manipulative workings, in and through today’s digital social media.