Ethical concerns about technology are nothing new. They go back at least as far as Plato, who worried that the invention of writing would weaken our capacity to remember things on our own. Some of these concerns may have been overblown, while others have proven to be real problems.[1] The rapid deployment of new technologies in the last few decades has led to a veritable explosion of such worries. Discussion of these issues now involves a dizzying array of topics—everything from the use of biased algorithms in the criminal justice system to the rise of superintelligent AI and killer robots.[2]

Our focus in this book—the negative effects that smartphones have on our autonomy—may seem quite modest by comparison. The issues that we will address are not ones that are depicted in the media as existential threats to humanity, nor are they frequently discussed by policymakers. We are worried about something that most people would never dream of subjecting to moral scrutiny. But, in many cases, the dangers we fail to appreciate are more pernicious than those that make headlines. By the time the water is boiling, it is too late to jump out of the pot.

The technologies that concern us in this book are ones that we have already invited into our lives. They have become such an ingrained part of our daily activities that it is hard to imagine leaving the house without them. Indeed, nomophobia (no mobile phone phobia, i.e., fear of being without a mobile phone) is an increasingly common phenomenon, with researchers estimating that “approximately 100%” of university students have it (Tuco et al. 2023). Many people have a smartphone on their person every minute of the day.[3] We routinely gaze into our screens and find ourselves compulsively tethered to the endless drip of vibrations, dings, and notifications. According to one estimate, Americans collectively check their phones eight billion times per day.[4]

Yet many of us do not, in fact, want to look at screens as much as we do. Nearly 60% of adults say that they use their phones “too much,” and that sentiment is even more prevalent among Millennials and Zoomers.[5] Mobile phone use has also been shown to get in the way of our closest relationships.[6] Over one-third of parents say that they use their own phones too much, and over 50% of teens report that their parents or caregivers are often or sometimes distracted by their phones when the teens are trying to have a conversation with them.[7]

In 2018, a group of economists offered social media users various amounts of money in order to determine how much it would take for them to quit Facebook for just four weeks. The average amount ended up being around $180, higher than most prior estimates.[8] The great irony of the study is that the majority of users reported feeling happier without it. They spent more time with family and friends, and they used Facebook much less after the experiment was over. Perhaps most shockingly, the researchers found that those who quit for a month reported an increase in well-being equal to the jump one would get from earning about $30,000 more in annual income (Allcott et al. 2020, 654). Faced with data of this kind, it starts to look like people are no longer in charge of their own lives when it comes to their phones and their use of social media. People have, as Thoreau once said, “become the tools of their tools” (1991, 32).

It is this relationship with technology that worries us.

More specifically, we will argue that a variety of “smart” technologies have captured our attention in such a way that we have forfeited some of our autonomy to our devices. This leads us to believe that there are compelling moral reasons to restructure our relationship with technology. To better understand the nature of this concern, consider the following cases:

  • Esther. “I wish I could read. I really do. I try to read. I buy books. I open books. And then I black out and I’m on Instagram, and I don’t know what happened” (Povitsky 2020).

  • Monica. “When I hear the Slack ping that announces a new message, I feel a Pavlovian pull to read it, right then, right away. There’s a red circle noting the number of new messages that nudges me to drop whatever I’m doing and click. That’s surely by design. Tristan Harris, a former Google employee turned industry critic, notes that red is a known trigger color. These psychological pulls are not great for my productivity and peace of mind” (Torres 2019).

And finally,

  • Damon. “Every time I log onto Facebook, I brace myself. My newsfeed—like everyone else’s I know—is filled with friends, relatives and acquaintances arguing about COVID-19, masks and Trump. Facebook has become a battleground among partisan ‘echo chambers’” (Centola 2020).

To many of us, these experiences are familiar. All too often we set out to complete a task but are interrupted and subsequently derailed by some social media service we have subscribed to for personal enjoyment or by some communication platform that our employer requires us to monitor. And few, if any, of us are lucky enough to be innocent of the horror of the contemporary newsfeed.

At a glance, it might appear that these cases lack a serious moral dimension or any connection to autonomy. Perhaps distraction is the price one pays to keep up with friends and family. Being interrupted by co-workers is part of having a job. Off-putting comments simply come with the territory of interacting with a large, diverse group of people. Some might say that these situations are little more than old wine in new bottles.

But there is more to these cases. Many of the most popular apps and platforms are meticulously designed behavior modification machines, leveraging behavioral psychology and well-known human vulnerabilities to draw us in and habituate us to their use (see, e.g., Eyal 2014). Monica notes this sharply, referencing both Pavlov’s dog and trigger colors.

Thus, when we succumb to certain technological distractions, we are not only failing to do what is in our own self-interest (e.g., read more) but our behavior exemplifies what Kant calls “heteronomy”—we are allowing ourselves to be driven by external forces. Put another way, we are failing to self-govern; we are failing to act autonomously. In the course of this book, we will tap into the rich Kantian tradition of thinking about autonomy and its moral importance to argue that we have moral duties—individually and collectively—to shun technological heteronomy and protect autonomy and its attendant capacities, in ourselves and in each other.

If our arguments succeed, then these are not merely cases of weakness of will, annoying work conditions, or grating media content; rather, they involve genuine moral problems. What is more, we hope to show that they are instances of different, but related, moral failures.

Now, our goal here is not to pick on Esther, Monica’s employer, or Damon’s friends. Neither of us has transcended technological distraction. Instead, our aim is to make salient the moral obligations that we need to be increasingly mindful of as mobile devices pervade our private lives, our workplaces, and the public sphere. We will make clear in these contexts—and others, such as parenting and education—that we as individuals have obligations to ourselves and to others, and we will show how these obligations stem from the value of autonomy. In addition to these individual obligations, we hope to show that various collectives we belong to—such as the body politic—have similar obligations, grounded in our need to act collectively in the face of problems such as the COVID-19 pandemic or climate change.

Our book is organized to track these differences. We develop a set of concepts and moral principles that will help us analyze the key feature of Esther’s case: the duty we have to ourselves to be “digital minimalists.” We define a digital minimalist as one whose interactions with digital technology are intentional, such that they do not conflict with the agent’s ability to set and pursue her own ends. We ground this obligation in terms of a Kantian duty to oneself. Kant famously argues that we are required to respect rational agency even in our own person. If it is true that our relationship with technology threatens to undermine our capacities as rational agents, then this would mean that we have a moral duty to protect ourselves from this threat.

This helps to explain why our analysis of Monica’s case will be very different, since it involves duties to others. While Monica’s employer has a duty to respect her autonomy—an instance of what we call the duty to be an “attention ecologist” (i.e., one who promotes digital minimalism in others)—that duty is conditioned by Monica’s sovereignty over herself. Finally, Damon’s case will receive separate treatment, since it involves a different sort of autonomy altogether: he witnesses a breakdown of our collective ability to solve problems that require action at the group level.

We begin in Chap. 2 by developing an account of personal autonomy. The main argument of the book is that we have moral reasons to cultivate our autonomy and to protect it from the threats posed by our unhealthy relationship with mobile devices and the attention economy. So our first task is to explain what autonomy is and why it matters morally. Our view is generally Kantian. But when we use the word “autonomy,” we are referring to what Kant called “humanity” (the rational capacity to set and pursue your own ends). As we will explain later, this means that we are dealing with personal autonomy rather than moral autonomy. We then break autonomy down into two separate components: capacity and authenticity. To do this, we draw on several contemporary accounts of autonomy. We then present Kantian arguments about the moral weight of autonomy, and we explain why other ethical theories are committed to similar claims.

In Chap. 3, we turn to the empirical literature on mobile devices and their deleterious effects. Though the technology is fairly new, psychologists, neuroscientists, and social scientists have already written a great deal on this topic.[9] Mobile devices and the attention economy have been linked to negative effects on attention, working memory, executive function, sleep, depression, anxiety, and more.[10] We take the next step of our argument by connecting this empirical research to our discussion of autonomy in Chap. 2. These are the longest chapters of the book, as they provide the foundations for the moral arguments that follow.

Once the groundwork has been laid, we begin defending the existence of various moral duties. Each chapter deals with a different obligation, and the topics are broken down in ways that reflect Kant’s taxonomy of duties in the Metaphysics of Morals. In Chap. 4, we begin, as Kant does, with duties to oneself. We argue that you have a moral duty to be a digital minimalist. We explain what we mean by “digital minimalism,” and we show how this duty fits within the broader framework of Kantian ethics.

In Chap. 5, we turn to duties to others, and we discuss various instances of these obligations. We argue that the duty to promote the autonomy of others is especially demanding for parents and teachers, who have special obligations to cultivate the autonomy of their children and students. We derive such imperfect duties (duties of love) from the Kantian requirement to respect rational agents as ends in themselves. We also discuss perfect duties (duties of respect) to refrain from using others as a mere means. We explore applications of this duty for employers and software developers in particular.

One thing that unifies all of the above obligations is that they are instances of what Kant calls “duties of virtue”: they concern the moral duties of individuals. We shift this focus in Chap. 6, where we discuss the implications for policymakers, namely legally enforceable obligations, which Kant refers to as “duties of right.” Although we refrain from making many specific policy recommendations, we outline the kind of Kantian reasoning that could be used to justify such regulations. Finally, we extend duties of digital minimalism to group agents in Chap. 7 as we defend the existence of collective moral obligations. If we were to restrict our focus to the ways that technology can harm us as individuals, we would overlook some morally significant harms to groups qua groups. Thus, in Chap. 7, we argue that addictive technology weakens our capacity to act autonomously as a group. We defend this claim by arguing that certain features of the attention economy (e.g., its contribution to polarization) threaten to erode the legitimacy of political institutions.

We conclude in Chap. 8 by revisiting the three vignettes from this introduction and showing how the concepts and principles of the book make it possible for us to understand exactly what is going wrong in those cases.