Abstract
Digital humanism is an ethics for the digital age that interprets and shapes the process of digital transformation in accordance with the core concepts of humanist philosophy and practice. The core idea of humanist philosophy is human authorship, which is closely linked to the practice of attributing responsibility and, therefore, also to the concepts of reason and freedom. Digital humanism has several implications: From a theoretical point of view, it means rejecting both the mechanistic paradigm (“humans are machines”) and the animistic paradigm (“machines are (like) humans”); from a practical point of view, it especially requires us not to attribute responsibility to AI and not to let AI make ethical decisions.
1 Introduction
Digital humanism offers a new ethics for the age of artificial intelligence. It opposes what can somewhat simplistically be called “Silicon Valley ideology.”Footnote 1 This ideology is related to the original American, Puritan hope of salvation, of creating a world of the pure and righteous who have left filth and sin behind; in times of digital transformation, it is characterized by the dream of a perfectly constructed digital counterpart whose construction excludes any error, leading us into a technological utopia. The key concept here is that of artificial intelligence, charged with implicit metaphysics and theology: a self-improving, hyper-rational, increasingly ensouled system whose creator, however, is not God but software engineers who see themselves not merely as part of an industry but as part of an overarching movement realizing a digital paradise on earth based on transparency, all-connectedness, and non-ambiguity.
Like all technologies of the past, digital technologies are ambivalent. Digital transformation will not automatically humanize our living conditions—it depends on how we use and develop this technology. Digital humanism argues for an instrumental attitude toward digitalization: what can be economically, socially, and culturally beneficial, and where do potential dangers lurk? It considers the process of digital transformation as something to be interpreted and actively shaped by us in accordance with the core concepts of humanism. But what are the core concepts of humanism?
Humanism is understood to mean many different things: from the cultivation of ancient languages to the biblical mandate to mankind to “subdue the earth.”Footnote 2 When we speak of humanism here, it is not in the sense of a historical epoch, such as that of Italian early humanism (Petrarch), German humanism in the fifteenth and sixteenth centuries (Erasmus), and finally New humanism in the nineteenth century (Humboldt). Nor is it a specifically Western or European cultural phenomenon, for humanistic thought and practice exist in other cultures as well. We understand by humanism a certain idea of what actually constitutes being human, combined with a practice that corresponds to this humanistic ideal as much as possible. One does not need an elaborated humanistic philosophy to realize a humanistic practice.
At the heart of humanist philosophy and practice is the idea of human authorship. Human beings are authors of their lives; as such, they bear responsibility and are free. Freedom and responsibility are two mutually dependent aspects of human authorship. Authorship, in turn, is linked to the ability to reason. The criminal law criteria for culpability converge with the lifeworld practice of moral attributions. Persons are morally responsible as authors of their lives, as accountable agents and judges.Footnote 3 This triad of reason, freedom, and responsibility spans a cluster of normative concepts that determines the humanistic understanding of the human condition and, in a protracted cultural process, has shaped both lifeworld morality and the legal order over centuries. This normative conceptuality is grouped around the phenomenon of being affected by reasons.
The core idea of humanist philosophy, human authorship, thus, can be characterized by the way we attribute responsibility to each other and thereby treat each other as rational and free beings. In order to better understand this humanist practice, we will now take a closer look at the conceptual connection between responsibility, freedom, and reason.Footnote 4
2 The Humanist Practice of Attributing Responsibility and the Conceptual Connection Between Responsibility, Freedom, and Reason
The concept of responsibilityFootnote 5 is not a concept to be considered in isolation, but it is closely related to the concepts of freedom and reason and, as we will see, also to the concept of action.Footnote 6 In order to clarify which conditions have to be fulfilled in order to attribute responsibility, these terms shall first be explained in more detail.
There is much to suggest that an action is reasonable/rational if and only if there are, all things considered, good reasons to perform that action;Footnote 7 for sentences like “It is reasonable/rational to perform the action h, but, all things considered, there are good reasons against doing h” or “It is unreasonable/irrational to perform action h, but, all things considered, there are good reasons for doing h,” respectively, already sound odd from a purely linguistic point of view. Reason/rationality can be characterized as the ability to appropriately weigh the reasons that guide our actions, beliefs, and attitudes.Footnote 8 Freedom is then the possibility of following just those reasons that are found to be better in such a deliberation process; thus, if I am free, it is my reasons, determined by deliberation, that guide me to judge and act one way or another.Footnote 9
But what does it mean to be a reason for doing something? What are examples of reasons?Footnote 10
If an accident victim is lying on the side of the road, seriously injured and without help, then you have a reason to help her (e.g., by giving first aid or calling an ambulance). Or if Peter promises John that he will help him move next weekend, then Peter has a reason to do so. There may be circumstances that speak against it; but these circumstances, too, are reasons, only weightier ones, such as the fact that Peter’s mother needs his help on the weekend because she is seriously ill. But having made a promise is—at least as a rule—a reason to act in accordance with the promise.Footnote 11 The two examples clearly show two essential characteristics of reasons. Firstly, reasons are normative; for if there is a reason for an action, then one should perform this action, unless weightier reasons speak against it.Footnote 12 And secondly, they are objective; by this is meant here that the statement that something is a good reason cannot be translated into statements about mental states. For example, Peter still has a reason to help John with the promised move even if he no longer feels like doing so; and the reason to help the victim of the accident does not disappear just because one has other preferences or because, for example, one holds the crude conviction that the accident victim does not deserve help. There are just as few “subjective reasons” as there are “subjective facts”!Footnote 13
How is this understanding of reason and freedom relevant to the way we attribute responsibility? Responsibility presupposes, both at the level of action and at the level of will or decision, at least the freedom to refrain from the action in question and from the decision on which it is based.Footnote 14 So-called semi-compatibilism disputes this, arguing that responsibility is possible even without freedom. This position can be traced back to two essays by the American philosopher Harry G. Frankfurt, published in the late 1960s and early 1970s, which continue to shape the debate today.Footnote 15 The Frankfurt-type examples, developed from the scenarios Frankfurt describes there, are intended to show that a person is morally responsible for her decision even if she in fact had no option other than to decide as she did. In these thought experiments, another person, the experimenter—e.g., a neurosurgeon who, by means of a special computer device, can track and influence the development of the subject’s intentions as reflected in corresponding readiness potentials—ensures that the decision can only be made and implemented in favor of an alternative (to do or not to do) that she, the experimenter, has determined in advance. If the subject then decides in favor of this alternative, then she is responsible for this decision, although, because of the other person’s power to intervene, no other alternative was actually open to her; since the subject would have decided in exactly the same way had she had freedom of choice (i.e., without the possibility of outside intervention), the lack of the possibility to decide differently is, from a semi-compatibilist perspective, irrelevant to the question of responsibility. This shows, according to this view, that responsibility requires neither freedom of action nor freedom of will.
However, this argumentation overlooks the fact that in the scenario just described, we attribute responsibility to the subject only because she chose one of two alternatives, both of which were open to her (to do or not to do something), and thus had freedom of choice. What is decisive for the question of responsibility is obviously the point at which the neurosurgeon intervenes: If the intervention takes place only after the subject has made a decision, then she had freedom of choice between two alternatives and is therefore responsible. If, in contrast, it takes place while the subject is still deliberating, and thus before she has made a decision, then she is not responsible, because the final decision was not made by her but rests on a manipulation by the neurosurgeon.Footnote 16 Thus, the Frankfurt-type examples do not disprove that freedom is a prerequisite for responsibility.
But are our decisions and actions really free? Actions differ from mere behavior in several ways. If, during a bus ride, passenger P1 loses her balance as a result of emergency braking and falls on passenger P2, injuring her, this is described and evaluated differently than if P1 deliberately lets herself fall on P2 and P2 suffers the same kind of injury. Only in the second case do we attribute intentions to P1 and call her role in the incident an action. In the first case, by contrast, we would say it was unintentional, involuntary behavior not guided by her intentions at all. Actions obviously have, besides a purely spatio-temporal behavioral component, the characteristic of intentionality.Footnote 17 Another property of actions is that they are reason-guided, i.e., the acting person always has a reason or reasons for her action;Footnote 18 actions are constituted by reasons, not necessarily by good reasons, but they are never performed without any reason at all. And it is because they are constituted by reasons that actions always have an element of rationality, at least in the sense that one can always judge—unlike in the case of mere behavior, where this question does not arise at all—whether an action is rational or not; actions are, one could say, “capable of rationality”; for, as we have seen, an action is rational if and only if, all things considered, good reasons speak for it, and irrational if and only if, all things considered, good reasons speak against it. The reasons we are guided by are the result of a (sometimes very short) deliberation process, in which the different reasons are weighed against each other and which, when it is completed (and only then!), leads to a decision that is then realized by an action.
In short, therefore, we can say: “No action without decision.”Footnote 19 The respective decision is necessarily free in the sense that it is conceptually impossible that it is already fixed before the conclusion of the decision process, because it is simply part of the nature of decisions that before the decision was made, there was actually something to decide. A decision whose content is already determined before it is made is just not a decision!Footnote 20
It is due to this ability to weigh reasons, i.e., the ability to deliberate, that we are rational beings and that we are responsible for what we do.Footnote 21 This becomes obvious once one realizes that one can be reproached for an action but not for mere behavior: If damage is caused by a person’s mere behavior, she is not reproached for it, and we are satisfied with a purely causal description (in the above example: “Due to the forces acting on her as a result of the emergency braking, P1 fell on P2, causing injury to P2”); if, however, the damage was brought about by an action, we expect an explanation and, if possible, a justification, and that means reasons that justify this action. But one can and must justify oneself only for something for which one can also be held responsible. This leads us to the more general formulation and the central claim of the concept of responsibility presented here: To be responsible for something is connected to the fact that I am (or can be), in principle, affected by reasons;Footnote 22 this suggests a connection between ascribing responsibility and the ability to be affected by reasons, which in turn extends the concept of responsibility beyond the realm of action to that of judgment and emotive attitudes.Footnote 23 Against this background, the conceptual connection between responsibility, freedom, and reason can be formulated as follows: Because, or insofar as, we are rational, i.e., have the capacity for deliberation, we are free in exercising this capacity; and only because, and to the extent that, we are free can we be responsible.
From the finding that our practice of attributing responsibility presupposes a certain understanding of freedom, it does not, of course, yet follow that we actually have this kind of freedom. It should be noted, however, that at least the argument that the assumption of human freedom has been refuted by the theory of physical determinism and a universally valid causal principle is not tenable. The concept of comprehensive causal explanation, according to which everything that happens has a cause and can be described as a cause-effect connection determined by laws of nature, has long been abandoned in modern physics; and even classical Newtonian physics is by no means deterministic, because of the singularities occurring in it. This holds all the more for modern, irreducibly probabilistic physics, and even more so for biology and neurophysiology, which deal with still more complex systems.Footnote 24
In the introduction, we characterized digital humanism as an ethics for the digital age that interprets and shapes the process of digital transformation in accordance with the core concepts of humanist philosophy and practice. Having identified these core concepts, we can now consider the theoretical and practical implications of digital humanism.
3 Conclusions
3.1 Theoretical Implications of Digital Humanism
3.1.1 Rejection of Mechanistic Paradigm: Humans Are Not Machines
Perhaps the greatest current challenge to the humanistic view of man is the digitally renewed machine paradigm of man. Man as a machine is an old metaphor whose origins go back to the early modern era. The mechanism and materialism of the rationalist age make the world appear as clockwork and man as a cog in the wheel. The great watchmaker is then the creator who has ensured that nothing is left to chance and that one cog meshes with the next. There is no room for human freedom, responsibility, and reason in this image.
Software systems have two levels of description: that of the hardware, which requires only physical and technical terms, and that of the software, which can in turn be divided into a syntactic and a semantic level. The description and explanation of software systems in terms of hardware properties is closed: Every operation (event, process, state) can be uniquely described as causally determined by the preceding state of the hardware. For this characterization, posterior uniqueness of hardware states would suffice; Turing added prior uniqueness, so that what is called a “Turing machine” describes a process uniquely determined in both temporal directions. Transferred to humans as a model, this means that the physical-physiological “hardware” generates mental characteristics like an algorithmic system, with a temporal sequence of states uniquely determined by genetics, epigenetics, and sensory stimuli, and thus enables meaningful speech and action. The humanistic conception of man, and thus the normative foundations of morality and law, would prove to be pure illusion or a collective human self-deception.Footnote 25
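The two-way determinism just described can be illustrated with a minimal sketch; the state names and transition table below are invented for illustration only. Under a deterministic transition function, every state has exactly one successor (posterior uniqueness), and if the function is also injective, exactly one predecessor (prior uniqueness), so a run is fixed in both temporal directions.

```python
# Minimal sketch of a deterministic state machine. The state names and the
# transition table are invented for illustration only.
TRANSITIONS = {"s0": "s1", "s1": "s2", "s2": "s0"}

def run(state, steps):
    """Return the sequence of states visited, starting from `state`."""
    trajectory = [state]
    for _ in range(steps):
        state = TRANSITIONS[state]  # each state has exactly one successor
        trajectory.append(state)
    return trajectory

# Posterior uniqueness: the whole future is fixed by the initial state.
assert run("s0", 4) == ["s0", "s1", "s2", "s0", "s1"]

# Prior uniqueness: since this transition function is injective, each state
# also has exactly one predecessor, so the past is fixed as well.
PREDECESSOR = {successor: state for state, successor in TRANSITIONS.items()}
assert PREDECESSOR["s1"] == "s0"
```

The mechanistic paradigm treats the human “hardware” as precisely such a system, only with an astronomically larger state space.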
In a humanistic worldview, however, a human being is not a mechanism, but a free (autonomous) and responsible agent in interaction with other human beings and a shared social and natural world. For it is undeniable for us humans that we have mental properties, that we have certain mental states, that we have beliefs, desires, intentions, fears, expectations, etc.
3.1.2 Rejection of the Animistic Paradigm: Machines Are Not (Like) Humans
Even in the first wave of digitalization after the Second World War, interestingly enough, it was not the materialistic paradigm just described but the animistic paradigm that proved more influential. In 1950, Alan Turing contributed to this in his essay “Computing Machinery and Intelligence,”Footnote 26 which is still much discussed today. The paradigm we call “animistic” takes, so to speak, the opposite direction of interpretation: Instead of interpreting the human mind (mental states) as an epiphenomenon of material processes in a physically closed world and describing it mechanistically, the algorithmic system is now endowed with mental properties, provided its external (output) behavior sufficiently (i.e., indistinguishably) resembles that of humans. One finds this animistic view in an especially radical conception of “strong AI,” according to which there is no categorical difference between computer processes and human thought processes, such that software systems have consciousness, make decisions, and pursue goals, and their performances are not merely simulations of human abilities but realizations of them.Footnote 27 From this perspective, “strong AI” is a program of disillusionment: What appears to us to be a characteristically human property is nothing but that which can be realized as a computer program. The concept of “weak AI,” on the other hand, does not deny that there are categorical differences between human and artificial intelligence, but it assumes that in principle all human thinking, perception, and decision-making processes can be simulated by suitable software systems. The difference between “strong AI” and “weak AI” is thus the difference between identification and simulation.
If the radical concept of “strong AI” were about to be realized, we should immediately stop its realization! For if this kind of “strong AI” already existed, we would have to radically change our attitude toward artificial intelligence: we would have to treat strong AI machines not as machines but as persons, that is, as beings who have human rights and human dignity. To switch off a strong AI machine would then be as bad as manslaughter.
It is a plausible assumption that computers as technical systems can be described completely in a terminology that contains only physical terms (including their technical implementation). There is then no remainder. A computer consists of a very large number of complex interconnections, and even if doing so would exceed all capacities available to humans, it is in principle possible to describe all these interconnections completely in their physical and technical aspects. If we set aside the new product line of quantum computers, classical physics extended by electrostatics and electrodynamics suffices to completely describe and explain every event, every procedure, every process, and every state of a computer or a networked software system.
Perhaps the most fundamental argument against physicalism is the so-called qualia argument. This argument speaks against the identity of neurophysiological and mental statesFootnote 28 and, since, as we have just seen, every state of a computer or a networked software system can be completely described in physical terms, also against the identity of digital and mental states. The Australian philosopher Frank Cameron Jackson put forward one version of the qualia argument in his essay “What Mary Didn’t Know” (1986), in which he describes a thought experiment that can be summarized as follows:
Mary is a scientist, and her specialist subject is color. She knows everything there is to know about it: the wavelengths, the neurological effects, every possible property color can have. But she lives in a black-and-white room. She was born and raised there, and she can observe the outside world on a black-and-white monitor. One day, someone opens the door, and Mary walks out. And she sees a blue sky. And at that moment, she learns something that all her studies couldn’t tell her. She learns what it feels like to see color.
Now imagine an AI that not only has, like Mary, all available information about colors but also all available information about the world as well as about people and their feelings. Even if there were an AI that had all this information, it would not mean that it understands what it means to experience the world and to have feelings.
Software systems do not feel, think, or decide; humans, by contrast, do, as they are not determined by mechanical processes. Thanks to their capacity for insight as well as their ability to have feelings, they can determine their actions themselves, and they do this by deciding to act in one way and not in another. Humans have reasons for what they do and can, as rational beings, distinguish good from bad reasons. By engaging in theoretical and practical reasoning, we influence our mental states, our thinking, feeling, and acting, thereby exerting a causal effect on the biological and physical world. If the world were to be understood reductionistically, all higher phenomena, from biology to psychology to logic and ethics, would be determined by physical laws: Human decisions and beliefs would be causally irrelevant in such a world.Footnote 29
3.2 Practical Implications of Digital Humanism
The finding that even complex AI systems cannot be regarded as persons for the foreseeable future gives rise to two interrelated practical demands in particular.
First, we should not attribute responsibility to them. As we have already seen, it is quite plausible that AI systems are not rational and free in the way that is necessary for attributing responsibility to them. The reason why they lack this kind of rationality and freedom is that they lack the relevant autonomy, which consists in the ability of the agent to set her own goals and to direct her actions with regard to these goals. These goals do not simply correspond to desires or inclinations, but are the result of a decision-making process. We can distinguish this concept of Strong Autonomy from the concept of Weak Autonomy,Footnote 30 in which concrete behavior is not determined by the intervention of an external agent, but an external agent determines the overriding goal to be pursued. Since Weak Autonomy does not manifest itself in the choice of self-imposed (overriding) goals, but at best in the choice of the appropriate means by which externally set goals can be achieved, one could also speak of “heteronomous autonomy.” To the extent that an AI has the ability to select the most suitable behavioral alternative for achieving a given goal, this could be interpreted as Weak Autonomy.
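The distinction between Strong and Weak Autonomy can be illustrated with a toy sketch; the goals, options, and effectiveness scores below are invented for illustration. A weakly autonomous system only selects the means that best serves a goal imposed from outside; it never sets an overriding goal of its own, which is what Strong Autonomy would require.

```python
# Toy illustration of Weak Autonomy ("heteronomous autonomy"): the goal is
# imposed externally; the system merely picks the most effective means.
# Goals, options, and effectiveness scores are invented for illustration.

def choose_means(external_goal, options, effectiveness):
    """Select the option that best serves a goal the system did not set."""
    return max(options, key=lambda option: effectiveness[(option, external_goal)])

OPTIONS = ["route_a", "route_b"]
EFFECTIVENESS = {
    ("route_a", "minimize_time"): 0.9,
    ("route_b", "minimize_time"): 0.4,
    ("route_a", "minimize_energy"): 0.2,
    ("route_b", "minimize_energy"): 0.8,
}

# The choice varies only because the externally imposed goal varies; at no
# point does the system deliberate about which overriding goal to pursue.
assert choose_means("minimize_time", OPTIONS, EFFECTIVENESS) == "route_a"
assert choose_means("minimize_energy", OPTIONS, EFFECTIVENESS) == "route_b"
```

On the account given above, this kind of means-selection, however sophisticated, falls short of the goal-setting deliberation that grounds attributions of responsibility.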
The second demand is that ethical decisions must never be made by algorithmically functioning AI systems. For apart from the fact that algorithms do not “decide” anything,Footnote 31 the consequentialist optimization function inherent in algorithms is not compatible with human dignity and, more generally, with the deontological framework of liberal constitutions.Footnote 32 Furthermore, the approach of anticipating, when programming an algorithm, all facts relevant to each case cannot in principle do justice to the complexity and context sensitivity of ethical decision-making situations.Footnote 33 AI systems have no feelings, no moral sense, and no intentions, and they cannot attribute these to other persons. Without these abilities, however, genuine moral practice is not possible.
Discussion Questions for Students and Their Teachers
1. How is digital humanism characterized in this chapter?
2. What are the core concepts of humanist philosophy and practice?
3. In what way do actions differ from mere behavior?
4. What conditions must be met for us to hold someone personally responsible for something?
5. What are the main theoretical and practical implications of digital humanism?
Learning Resources for Students
1. Nida-Rümelin, J. and Weidenfeld, N. (2022) Digital Humanism. Cham: Springer International Publishing (https://springerlink.fh-diploma.de/book/10.1007/978-3-031-12482-2).
This book describes the philosophical and cultural aspects of digital humanism and can be understood as its groundwork.
2. Nida-Rümelin, J. (2022) “Digital Humanism and the Limits of Artificial Intelligence” in Perspectives on Digital Humanism. Cham: Springer International Publishing, pp. 71–75 (https://springerlink.fh-diploma.de/book/10.1007/978-3-030-86144-5).
This article presents two important arguments against the animistic paradigm: the “Chinese Room” argument against the conception of “strong AI” and, based on the meta-mathematical incompleteness and undecidability results of Kurt Gödel and other logicians, an argument against the concept of “weak AI.”
3. Bertolini, A. (2014) “Robots and Liability – Justifying a Change in Perspective” in Battaglia, F. et al. (eds.), Rethinking Responsibility in Science and Technology. Pisa: Pisa University Press srl, pp. 203–214.
This article presents good arguments against the liability of robots.
4. Nida-Rümelin, J. (2014) “On the Concept of Responsibility” in Battaglia, F. et al. (eds.), Rethinking Responsibility in Science and Technology. Pisa: Pisa University Press srl, pp. 13–24.
This article, in the same anthology, focuses on our responsibility for our actions, convictions, and emotions and the reasons we have for all of them. The whole anthology is worth reading!
5. Bringsjord, S. and Govindarajulu, N. S., “Artificial Intelligence,” The Stanford Encyclopedia of Philosophy (Fall 2022 Edition), Edward N. Zalta and Uri Nodelman (eds.), URL = <https://plato.stanford.edu/archives/fall2022/entries/artificial-intelligence/>.
A very instructive article about what AI is as well as about its history and its different philosophical concepts.
Notes
- 1.
- 2.
For an overview of the genesis and the different meanings of the term “humanism,” see the chapter by Nida-Rümelin and Winter.
- 3.
Cf. Nida-Rümelin (2011).
- 4.
Although the starting point of our argumentation is the human practice of attributing responsibility—and, thus, the question of which conditions must be met for us to hold other people (or ourselves) responsible for something—our considerations are not based on speciesism. That is, we do not exclude the possibility that at some point in the distant future, there may be AI systems that have reason, freedom, and autonomy to the extent necessary for attributing responsibility. But, as we will see, such AI systems would have to be quite different from the machines existing now.
- 5.
The following considerations relate exclusively to personal responsibility. Political responsibility, on the other hand, can be attributed even in the absence of personal misconduct. In order to ensure effective public control, a minister is ultimately responsible for all decisions made by the ministry she heads. This type of accountability is largely based on a fiction, because in view of the large number of individual transactions to be recorded daily within a ministry, a genuine case-by-case review by the minister is practically impossible. For this point in detail, see Nida-Rümelin (2011), pp. 147 ff.
- 6.
Cf. Nida-Rümelin (2011), pp. 19–33 and 53.
- 7.
Nida-Rümelin (2023), pp. 2–4 and p. 173
- 8.
The account of reasons presented here does not discriminate between “rational” and “reasonable,” or “rationality” and “reason,” and is to be distinguished from a purely instrumental understanding of reason in the sense of “purpose rationality,” according to which an action is rational if and only if it is suitable to achieve the goals pursued by the action. For there are numerous actions that optimally realize the goals of the acting persons, although the best reasons speak against performing them, which is why we call them irrational/unreasonable. For example, the crimes committed by the Nazis are no less bad even if these deeds optimally fulfilled the Nazis’ preferences; and there can be no doubt that the best reasons speak against doing what the Nazis did. The conceivable objection that this argumentation inadmissibly equates rationality with morality (since the deeds of the Nazis were clearly morally wrong, but possibly rational because they fulfilled their perpetrators’ preferences) is not convincing. Not only moral but also rational actions ought to be done; immoral and irrational ones ought not to be done (a statement like “Your action is completely irrational” is clearly formulated as a reproach). The ought-character of (un)reasonable/(ir)rational actions speaks against a separation between reason-guided rationality/reason and morality and, therefore, also against a purely instrumental understanding of rationality. Cf. also Nida-Rümelin (2023), pp. 2 ff., 15–22 and 173 ff.
- 9.
Cf. Nida-Rümelin (2023), p. 225. The connection with the reason-guided deliberation process clearly shows that freedom does not merely mean freedom of action here. The latter is already given if the agent is not prevented by external obstacles from doing what he wants and can also exist in the case of compulsive acts of the mentally ill or severely addicted persons, which can clearly be qualified as unfree.
- 10.
Due to limited space, we only focus on practical reasons in the following. However, the characterizations made here can be transferred to theoretical reasons (i.e., reasons for beliefs). Cf. Nida-Rümelin (2023), pp. 179 ff. and 187–190.
- 11.
The only exceptions are promises whose fulfillment is morally questionable or even forbidden (such as the promise to cruelly kill another person). Here again, however, it is a reason that speaks against keeping such promises, and this reason is just their morally questionable or forbidden content.
- 12.
Closely related to the normativity of reasons is their inferentiality, which allows us to deduce from empirical facts normative obligations/normative facts: The empirical fact that a severely injured victim is helplessly lying on the side of the road argues in favor of helping the person (normative fact), because otherwise she will suffer permanent physical damage or even die (inference). For further explanations of the inferentiality of reasons, see Nida-Rümelin (2023), pp. 182 f.
- 13.
Cf. Nida-Rümelin (2023), pp. 187–190. This does not mean that subjective elements such as desires, preferences, or decisions are irrelevant for judging whether a reason is a good reason. And, of course, what is a good reason to do for one person in a particular situation is not necessarily a good reason for another person who is in the same situation and has different preferences. However, it does not follow from the mere fact that a person wishes or decides to do something that she has a good reason to implement the wish or decision. For whether there is a good reason to do so depends on the content of this wish or decision, and the assessment of this content is not made according to subjective criteria.
14. On this point in detail, see Nida-Rümelin (2005), pp. 79 ff.
15.
16. On this objection, see in detail Nida-Rümelin (2005), pp. 102 f.
17. When, in everyday life, the term “behavior” is used to describe actions (e.g., in formulations such as “Explain your strange behavior from last night!”), it—correctly—refers to intentional behavior.
18. As a rule, the acting person can also state the reason when asked. Even if the reason(s) should have slipped her mind—e.g., due to a loss of memory as a result of an accident—she had this reason or these reasons at the time of the act.
19. Cf. in detail Nida-Rümelin (2005), pp. 45–60. This deliberative conception of action is accompanied by a rejection of the so-called belief-desire model, which can also be called the standard theory of action motivation. According to this model, it is only desires that motivate us to act, whereas beliefs play a purely instrumental role, i.e., they concern only the choice of the appropriate means for fulfilling the respective desire. The desires themselves are set and given to us (i.e., we just have the desires that we have) or are at most based on other, more fundamental desires and therefore elude any criticism. Apart from its strict orientation toward instrumental rationality (cf. the criticism in fn. 7 above), the main argument against this model (which is at least inspired by D. Hume) is that it fails to recognize the role that normative beliefs play in the process of action motivation. In particular, the belief-desire model cannot explain why we sometimes do not follow our momentary inclinations in favor of longer-term interests that have not manifested themselves in the form of a desire. On this “argument of intertemporal coordination” and the other objections raised here, cf. Nida-Rümelin (2023), pp. 88–102 and 203 f.; id. (2001), pp. 32–38.
20. Cf. Nida-Rümelin (2005), pp. 49–51.
21. Cf. Nida-Rümelin (2011), p. 53.
22.
23. Nida-Rümelin (2023), p. 58; id. (2011), pp. 33–52. That we bear responsibility for our emotive attitudes may at first be surprising. But they, too, sometimes have to be justified, for it disconcerts us if a person cannot give any understandable reasons for the negative feelings (e.g., hatred) she has toward another person.
24.
25.
26. Cf. Turing (1950). Turing there describes an “imitation game” (later known as the “Turing test”), in which an interrogator puts questions to a human and a machine, both hidden in another room, in order to determine which of the two is the human. Turing believed “that in about fifty years’ time,” it would be “possible to programme computers […], to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. […] I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted” (442). Apart from the fact that Turing’s prediction was too optimistic in terms of time, one can question whether this game is really an appropriate method for attributing thinking abilities to machines. One may wonder, for example, whether the Turing test measures human credulity rather than genuine machine intelligence.
27. Cf. the characterization of “strong AI” in the Stanford Encyclopedia of Philosophy: “‘Strong’ AI seeks to create artificial persons: machines that have all the mental powers we have, including phenomenal consciousness” (https://plato.stanford.edu/entries/artificial-intelligence/#StroVersWeakAI, section 8.1). For an overview of the use of the terms strong and weak AI in different disciplines, see Nida-Rümelin (2022b).
28. Of course, one can also reject the identity of the mental and the neurophysiological, but still argue that the mental can only occur in connection with the material. Indeed, there is much to suggest that human consciousness is only possible due to the corresponding brain functions. But even those who hold that human consciousness is based essentially on neurophysiological processes need not subscribe to the identity theory of the mental and the physical. That mental states of humans are realized by brain states (i.e., neurophysiological processes and states) does not mean that they are identical to them or caused by them.
29. A theory T2 can be reduced to a theory T1 if T2 can be completely derived from T1, which presupposes that the terms of T2 can also be defined with the help of terms of T1. A weaker form of reducibility exists if all empirical predictions of T2 can already be derived from T1 (empirical reduction). Physicalism is the most prominent form of reductionism, according to which all science can be traced back to physics. So far, this has only been successful for parts of inorganic chemistry and has otherwise remained science fiction. Even the reducibility of biology to physics is highly implausible; the reducibility of the social sciences or even literary studies to physics is completely out of the question. This is due, among other things, to the fact that even in the social sciences, but especially in cultural studies and the humanities, terms such as “meaning,” “intention,” “belief,” or “emotion” occur that cannot be translated into physical terms: Intentions or even reasons are not a possible object of physics.
30. On these concepts and their meaning for attributing responsibility, see Bertolini (2014), pp. 150 f., following Gutmann, M., Rathgeber, B., & Syed, T. (2012). Action and autonomy: A hidden dilemma in artificial autonomous systems. In M. Decker & M. Gutmann (Eds.), Robo- and Informationethics: Some fundamentals (pp. 245 ff.). Zürich/Berlin.
31. We have already seen in Sect. 2 that a decision is necessarily free in the sense that it is conceptually impossible for it to be fixed before the conclusion of the decision process. But the decision about the rules according to which an algorithm operates has already been made, and not by the algorithm itself but by the programmer. And even if a complex AI system develops algorithms of its own, it does so only in order to achieve a goal that is given to it from outside. There is, so to speak, always an “overarching algorithm” given from outside that guides it.
32. According to consequentialism, the ethical quality of an action (or practice) depends only on the ethical quality of its consequences, and an act (or practice) is right if and only if it brings about the best possible outcomes. From a deontological perspective, on the other hand, the rightness of an action (or practice) depends not (only) on its consequences but on its conformity with a moral norm. One of the most important objections against consequentialist ethics is that, unlike deontological ethics, they cannot adequately justify the obligation not to violate individual rights. On the objections against consequentialism, see in detail Nida-Rümelin (1995); see also id. (2023), Chapter 6.
33. Both points are extremely relevant to the question of the ethical and legal permissibility of autonomous driving.
References
Bennett, M., et al. (2007). Neuroscience and philosophy: Brain, mind, and language. Columbia University Press.
Bertolini, A. (2014). Robots and liability – justifying a change in perspective. In F. Battaglia et al. (Eds.), Rethinking responsibility in science and technology (pp. 143–166). Pisa University Press srl.
Bringsjord, S., & Govindarajulu, N. S. (2022). Artificial intelligence. In E. N. Zalta & U. Nodelman (Eds.), The Stanford encyclopedia of philosophy (Fall edn). https://plato.stanford.edu/archives/fall2022/entries/artificial-intelligence/
Daub, A. (2021). What tech calls thinking. An inquiry into the intellectual bedrock of Silicon Valley. Farrar, Straus & Giroux.
Frankfurt, H. G. (1969). Alternate possibilities and moral responsibility. The Journal of Philosophy, 66, 829–839.
Frankfurt, H. G. (1971). Freedom of the will and the concept of a person. The Journal of Philosophy, 68, 5–20.
Jackson, F. (1986). What Mary didn’t know. The Journal of Philosophy, 83, 291–295.
Nida-Rümelin, J. (1995). Kritik des Konsequentialismus. Oldenbourg Verlag.
Nida-Rümelin, J. (2001). Strukturelle Rationalität. Reclam.
Nida-Rümelin, J. (2005). Über menschliche Freiheit. Reclam.
Nida-Rümelin, J. (2011). Verantwortung. Reclam.
Nida-Rümelin, J. (2022a). Digital humanism and the limits of artificial intelligence. In H. Werthner et al. (Eds.), Perspectives on digital humanism (pp. 71–75). Springer International Publishing. https://doi.org/10.1007/978-3-030-86144-5
Nida-Rümelin, J. (2022b). Über die Verwendung der Begriffe starke & schwache Intelligenz. In K. Chibanguza et al. (Eds.), Künstliche Intelligenz. Recht und Praxis automatisierter und autonomer Systeme (pp. 75–90). Nomos Verlagsgesellschaft.
Nida-Rümelin, J. (2023). A theory of practical reason. Springer.
Nida-Rümelin, J., & Weidenfeld, N. (2022). Digital humanism. Springer International Publishing. https://doi.org/10.1007/978-3-031-12482-2
Singer, W. (2002). Der Beobachter im Gehirn: Essays zur Hirnforschung. Suhrkamp.
Tivnan, T. (1996). The moral imagination: Confronting the ethical issues of our day. Touchstone.
Turing, A. (1950). Computing machinery and intelligence. Mind, LIX(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
Copyright information
© 2024 The Author(s)
Cite this chapter
Nida-Rümelin, J., & Staudacher, K. (2024). Philosophical foundations of digital humanism. In H. Werthner et al. (Eds.), Introduction to digital humanism. Springer, Cham. https://doi.org/10.1007/978-3-031-45304-5_2
Print ISBN: 978-3-031-45303-8
Online ISBN: 978-3-031-45304-5