Abstract
Poor self-regulation has been linked to various behaviors that contribute to pressing societal issues, including rising household debt, inefficient use of sustainable resources, and increasing healthcare demands. In light of this observation, the prospect of individuals receiving automated, tailored support by “e-coaching systems” to scaffold and improve their self-regulation is thought to hold promise for making society-wide progress in addressing such issues. Though there may be legitimate reasons for promoting the use of such systems, and individuals might welcome the support, our aim in the present article is to contribute to the ethics of e-coaching by showing how societal pressures towards the widespread adoption of automated e-coaching systems raise concerns in relation to three distinct aspects of social justice. We argue that societal inequalities may be introduced or exacerbated by (1) unequal access to the technologies, (2) unequally distributed restrictions to liberty and subjection to coercion, and (3) the potentially disparate impact of the use of e-coaching technologies on (self-)stigmatizing perceptions of competence. The article offers a research agenda for studying and addressing these concerns.
1 Introduction
The prospect of individuals receiving automated, tailored support to improve their self-regulation is thought to hold promise not just for empowering a select group of technologically inclined “lifehack enthusiasts” but for making society-wide progress in addressing a number of pressing issues. For example, it has been suggested that scaffolding aspects of people’s self-regulation processes may help meet sustainability targets (e.g., by helping people monitor and change their energy expenditures [4, 34]), curb increasing household debt (e.g., by providing financial insights and support with purchasing decisions [35, 64]), and, perhaps most saliently, ease current and future healthcare burden (e.g., by supporting people in their efforts to adopt and maintain healthy lifestyles (cf. [5, 48, 56])). In light of the ongoing developments in the field of Artificial Intelligence (AI), it is increasingly difficult to ignore the possibility of a future in which individuals can be supported in these various domains by interactive, personalized “e-coaching systems”.
Despite their projected individual and societal benefits, emerging e-coaching systems, like many AI-driven technologies, raise various ethical concerns (cf. [4, 20, 78]), including prominent concerns about the risks that large-scale data collection and (hyper)nudging pose to informational and decisional privacy (e.g., [41, 73, 76]). While these concerns certainly warrant recognition, we want to draw attention in this paper to a different set of concerns, namely those having to do with social justice. We are not, of course, the only authors to emphasize the need for more discussion of issues around digital technologies in relation to what is owed to people as free and equal members of society. In relation to digital health applications, for example, Paldan, Sauer & Wagner [55] have looked at ways in which self-monitoring applications may lead to health inequalities. Likewise, Brall, Schröder-Bäck and Maeckelberghe [8] have identified potential issues of justice stemming from digital transformations in healthcare, and Figueroa et al. [18] have offered a guide with topics and questions for social justice digital health research. In relation to AI more generally, Buccella [10] has recently argued that access to AI (in its many forms) should be considered necessary for social justice.
Our aim is to contribute to the ethics of e-coaching by identifying how societal pressures towards the widespread adoption of automated e-coaching raise concerns in relation to social justice. In so doing, we foreground normative issues that to date have received insufficient attention in the e-coaching literature. In what follows, we will identify and elaborate three sets of social justice concerns related to e-coaching systems, concerning unequal access to e-coaching technologies, the potential for unequally distributed liberty restrictions, and the potentially disparate impact of the use of e-coaching systems on (self-)stigmatizing perceptions of competence. Before concluding, we will propose a research agenda for studying and addressing these concerns. First, however, we will further specify the kinds of technologies we will be considering.
2 E-coaching systems
The term “e-coaching” by itself does not disambiguate between the process of coaching as performed by a human coach through an online platform and coaching by an automated, digital entity. By extension, the term “e-coaching system” can also be understood differently, depending on whether one takes the perspective of computer-mediated communication or human–computer interaction. On the former perspective, any technology used as an intermediate communication medium in a digital coaching practice (e.g., monitoring a coachee’s behavior on Facebook or providing feedback via email) can be considered an e-coaching system. On the latter perspective, however, the term “e-coaching system” will denote something more specific, namely technologies that engage in the actual coaching.
Though there are different perspectives on what it means to engage in coaching, coaching is typically characterized as a collaborative enterprise between coach and coachee in which the coach assists the coachee in the identification and pursuit of personal goals. As Ives puts it, “[t]he primary method is assisting the client to identify and form well-crafted goals and develop an effective action plan” [33, p. 102]. Coaching thus differs from various forms of (mere) decision support, at least to the extent that such decision support is understood in the narrow sense of suggesting, for a given decision, which option is preferable given some metric of efficiency (e.g., the minimization of economic costs). Crucial for e-coaching systems is their ability to engage in an ongoing dialogue with users to aid both planning (identifying means to an end) and the follow-through of one’s plans in pursuit of one’s goals (e.g., by offering support in overcoming intention-behavior gaps).
To further clarify what we mean by e-coaching systems, we adopt the following definition from Kamphorst [36]:
E-Coaching System. An e-coaching system is a set of computerized components that constitutes an artificial entity that can observe, reason about, learn from and predict a user’s behaviors, in context and over time, and that engages proactively in an ongoing collaborative conversation with the user in order to aid planning and promote effective goal striving through the use of persuasive techniques.
Viewing e-coaching systems through the lens of this definition has three key implications. First, it suggests that e-coaching systems should be distinguished from more basic self-regulation support tools such as calendar-driven reminder systems or sensor-based notification apps. For where those kinds of systems, barring malfunctioning, essentially do as they are instructed (e.g., sound an alarm at a certain time or event), e-coaching systems are designed to utilize AI to learn from user input and observed behavior, adapt to preferences, and support individuals at the different stages of self-regulation by (proactively) suggesting potential plans for action and offering persuasive mechanisms to stay on track.
Two examples will help illuminate the difference. First, in a health and lifestyle context, consider the difference between, on the one hand, a scheduling app that allows individuals to program daily reminders for themselves to exercise, take supplements, eat healthily, etc., and, on the other hand, a system that unobtrusively monitors behavior and, through data analysis and predictive modeling techniques, estimates the most opportune moments to engage in a supportive dialogue. Whereas the former system simply restates the users’ own input, the latter may offer various kinds of support, for example by prompting users to reflect on their overall goals in moments of weakness, helping to strike a balance between personal values, or training users to craft more effective, viable plans. Likewise, in an employment context, consider the difference between a calendar app that prompts individuals about their upcoming meetings and lists tasks, and a system that engages in a back-and-forth to help organize and prioritize one’s tasks and meetings, suggests ad hoc breaks when concentration is lagging, and helps create a distraction-free environment for certain periods of time. It is these more advanced types of systems that enhance people’s capacities through their continuous engagement and feedback that we consider e-coaching systems.
A second, further implication of conceptualizing e-coaching systems in this narrower way is that people ought to assess the content they receive from e-coaching systems more critically than content presented by less advanced self-regulation support tools. This is because e-coaching systems form their own “perspective” regarding a user—that is, they create representations of a user’s behavior and preferences that are not directly given by or even approved by the user—and from that perspective derive approaches for tailor-made persuasive interactions. Since individuals are not guaranteed interactions that they have endorsed in the past, they therefore have a responsibility to retain a certain level of vigilance and screen a system’s suggestions, at least superficially, for appropriateness. In this respect, e-coaching systems really do bear a closer resemblance to human coaches than they do to automated reminder systems.
Finally, the adopted definition implies that e-coaching systems work primarily on a psychological level, in dialogue with the coachee. Certainly, it can be imagined that certain systems, in addition to giving advice and feedback, could also control or affect certain bodily functionings more directly, for example using brain implants to directly affect the brain’s dopamine pathways. Such systems would raise different ethical concerns to those we will be taking up here, but as interventions on this level are better likened to doping than to coaching, we take these concerns (and these types of systems) to be outside the scope of this paper.
With the relevant types of systems now in view, let us turn to the subject of social justice and consider how specific aspects of social justice may be affected by the widespread adoption of e-coaching systems.
3 E-coaching systems and social justice
The term social justice is not easily defined [23, 60], but it is typically accepted that the concept concerns normative questions about the fair distribution of wealth, welfare, opportunities, and privileges in society. So understood, social justice is tightly connected to the negative and positive duties that governments, social institutions, and individuals have in light of established principles of human rights (cf. [59]). In addition, although actual instances of social injustice can also be evaluated in terms of violations of individuals’ rights or the illegitimacy of governance, the discourse of “social justice” is centrally concerned with what we owe each other from the perspective of being free and equal members of society.
The widespread adoption of e-coaching systems potentially affects a wide variety of social justice considerations, given how they require access to certain (costly) technologies and how they have the potential to generate a culture of competitive self-management if the coaching they provide gives users a competitive advantage over those who are not coached or not coached to the same level of excellence. In this section, where we will examine three sets of social justice concerns, we are tacitly limiting our discussion to e-coaching systems that provide a significant benefit to their users.
3.1 Concerns about unequal access to e-coaching technologies
In the literature on e-coaching systems, advocates tend to assume that the introduction of e-coaching systems will make coaching more readily available to all (cf. [74, 79]). Indeed, cheap or even free e-coaching systems could flood the market, offering support to a substantially larger population than is currently the case with human-to-human coaching. However, it is important not to draw the further, faulty conclusion that improved access to coaching will guarantee equal access to all benefits offered by all e-coaching systems. For even if coaching in general becomes more accessible to a larger audience through e-coaching technologies, there will almost always be costs involved (e.g., for supporting hardware such as sensor systems) that those in underprivileged positions may not be able to afford. Moreover, there may be more expensive, advanced models placed on the market that offer additional functionalities and associated benefits that will remain reserved for the more affluent.
The gap between entry-level products and services “for the masses” and more expensive high-end products and services, a gap that can already be observed in relation to hardware products (cf. [26, 44]), raises two distinct sorts of concern. First, the familiar risk that the affordable products and services will be inaccurate or unreliable raises special concerns in the case of e-coaching systems, given how intimately they can be connected to a person’s sense of self (cf. [38]). Second, there is a concern that the more expensive systems will also provide significant relative advantages, further exacerbating inequalities by giving the rich a way of further expanding the socioeconomic advantage that they already have. This is especially worrisome because the enhancements provided by e-coaching systems pertain to factors such as capacities for self-control and for complex decision making that have an enormous influence not only on one’s ability to handle the challenges of life in complex societies but also on the comparative advantage one has in competitive environments. For example, if people with access to high-quality e-coaching systems are considered more attractive employees in light of their superior self-regulation capacities, they will be more likely to be hired into high-salary positions (cf. [67, 71]).
The inequalities related to the differences in quality of e-coaching systems are often compounded by differences in the quality of the hardware on which these e-coaching systems run, insofar as they require the use of high-end devices with the processing power or battery technology required to run state-of-the-art machine learning models on the local device (cf. [77]). The same point holds for the costs of auxiliary components such as “smart” lighting, (wearable) sensor systems, or “Internet of Things” (IoT) appliances that allow users to take full advantage of all the e-coaching system’s capabilities. In addition, the costs of regularly upgrading software and hardware have the tendency to further widen the gap in the quality of devices available to the affluent and the poor.
Relatedly, people from lower socioeconomic groups or in lower-income countries of the Global South may not be able to maintain access as well as others and may experience cycles of what Gonzales has called dependable instability [25]. Broken devices, interrupted connectivity, or expired subscriptions to e-coaching content may all affect the continuity of the e-coaching process. Importantly, the costs of upkeep and the experience of access limitations may also affect people’s perceptions of the (usefulness of the) technologies themselves (cf. [13, 26]), which may again deepen existing inequalities.
Finally, access to e-coaching may also be hindered by a user’s limited digital skills. To the extent that installing, configuring, and maintaining e-coaching systems requires technological know-how, people lacking such knowledge will be disadvantaged. As research has already shown that socioeconomic status is linked to differences in digital skills in relation to internet use [15, 28, 72, 80], there is a real risk that existing inequalities with respect to digital skills will also hinder uptake and effective use of e-coaching systems.
Clearly, many open questions remain in relation to these concerns, both empirical and ethical. For example, what would be the magnitude of the competitive advantage one could gain? What would be the projected magnitude of the impact on society at large if e-coaching systems indeed had such an effect? What technological or regulatory mitigation strategies could or should be employed? Questions such as these deserve careful consideration and in Sect. 4 of this paper, we invite scholars to address them. For our purposes here, it is sufficient to have established the outline of this first category of concerns, in which we implicitly assumed that individuals would find e-coaching valuable and worthy of pursuit. But what if the use of e-coaching systems was not a choice but something that was imposed? It is to the potential unfairness of liberty restrictions we turn next.
3.2 Concerns about coercion and the unequal distribution of liberty restrictions
In terms of personal experience and the phenomenology of technology, users of e-coaching systems may have concerns about the ways in which these technologies restrict their subjective sense of freedom. In terms of social justice—our focus here—there is a concern with the degree to which the adoption of e-coaching systems is free and voluntary rather than coerced or manipulated. Indeed, as automated e-coaching becomes more effective and beneficial, the pressure to adopt increases. In highlighting these concerns, we can distinguish between mandatory programs and incentive schemes. Each raises important liberty-related concerns of social justice.
In the most straightforward case, the use of e-coaching systems can be mandated, for example, as part of an employment contract or a government benefits program. Such cases wear their compulsory character on their sleeve and thus explicitly call for legitimating endorsement, within the constraints of legal rights. Social justice concerns here relate more generally to the potential overreach by employers or governmental agencies. But one concern related specifically to social justice—where concerns about inequality and coercion intersect—has been insightfully analyzed by Virginia Eubanks [16]. She documents a tendency to test and develop behavioral monitoring technologies in “low-rights” environments, in which there is relatively little resistance to the imposition of risky or unethical practices. A similar point holds for “low-rights” environments in which mandatory e-coaching is proposed but where the appearance of consent is illusory (with the further implication that, once these mandates have been established (illegitimately) in low-rights contexts, it will become easier to push them through elsewhere).
A somewhat more indirect restriction of freedom relates to the use of incentives to motivate the adoption of e-coaching systems, and here a familiar set of concerns arises about the boundary with coercion. One context where e-coaching incentives are regularly employed is in disease prevention and healthy lifestyle programs. In this context, various incentive programs have already been implemented to encourage specific “desirable” behaviors by offering individuals (monetary) rewards (cf. [75]). Several insurance companies in Switzerland, for example, are offering lower insurance rates for individuals who can show that they are promoting healthy habits (e.g., taking yoga classes), and at least five of them are offering monetary incentives to directly share health-related data with the insurer through a smartphone app [45].
Currently, these “opt-in” incentive programs are limited in their scope, in part because insurance companies at present do not have the means to accurately monitor users’ compliance. With the widespread adoption of e-coaching systems, however, and the associated improvements to measurement instruments, infrastructure, and data analysis techniques, this will likely change in the near future [65]. The technologies to extensively monitor people’s behavior and compliance with their agreements with the insurance companies (e.g., not to smoke, or to exercise twice a week) are becoming increasingly advanced. The aforementioned insurance companies in Switzerland already let their apps connect to other health apps (e.g., Google Fit) or fitness trackers (e.g., Fitbit or Garmin smartwatches) to obtain all sorts of health-related data. This development makes it attractive for insurance companies to provide ever more fine-grained options for individuals to limit their insurance costs in exchange for information, and to nudge individuals to take up e-coaching in exchange for significant discounts on their insurance premiums.
The social justice concern here begins by pointing out that the voluntariness of participation in incentive schemes is diminished to the extent that the development and adoption of such schemes leads to situations in which (groups of) people, in practice, will be unable to opt out, even if participation is formally considered voluntary (cf. [9, 39]). But not everyone’s voluntariness will be affected to the same degree. As healthcare costs (and insurance premiums) continue to rise, less affluent individuals in countries with insurance-based healthcare systems may find themselves under increased pressure to choose one of these “restrictive-conditions” insurance policies, simply because they cannot afford to do otherwise.
Many corporate employers, especially larger ones, also have incentives to encourage healthy lifestyles among employees in order to reduce medical costs, absenteeism, and health-related productivity losses. For these purposes, many employers already offer corporate “wellness” programs [47, 52], which increasingly involve the use of wearable self-tracking technologies [12, 42, 68]. If this development continues, e-coaching systems may well become part of the “wellness” package that employees, especially those with limited alternative prospects on the job market, have no real way of avoiding. Notice that the point here is not that people cannot, strictly speaking, refuse e-coaching, but that, realistically, certain (groups of) individuals will not be able to afford to do so.
A final set of concerns about freedom relates to the capacities for surveillance built into e-coaching technologies. Regardless of the degree of coercion involved in the adoption of automated e-coaching, the fact that these technologies involve extensive monitoring introduces a distinct kind of tension with liberty. For what recent research on so-called “neo-Republican” conceptions of freedom has brought out is that there is a sense in which individuals are less free when they are at the mercy of others, even if those others choose not to exercise their power [22, 57]. As those with increased knowledge of a person’s choices and behavior have increased power and opportunity to influence and intervene that they do not otherwise have (cf. [37]), it may be argued that individuals who have no real option but to employ e-coaching technologies are at the mercy of the e-coaching providers and therefore less free. And this may be especially problematic for those groups of people who can only afford cheap or free e-coaching products, where the collection of data for the purpose of resale is the business model financing the products.
As mentioned, our aim has not been to fully analyze or address these various concerns about liberty here, but rather to call attention to them and place them on the agenda for future discussions, together with a third set of concerns to which we now turn.
3.3 Concerns about stigmatization and its disparate impact on perceptions of competence
The third and final set of concerns pertains to associations between reliance on assistive e-coaching technologies and perceptions of competence. The central thought here is that, depending on individual differences, social norms, and environmental factors, the fact that someone uses these technologies may be construed differently, both by the users themselves and by others. In certain circumstances, reliance on an e-coaching system may be viewed in a positive light, as part of being an empowered individual who cleverly enhances his or her abilities. In other circumstances, the same degree of reliance may be viewed negatively, as indicating defects or an inability to perform adequately without support. And whereas the “power-tool-for-empowerment” construal is likely to lead to the attribution of competence to those employing e-coaching technologies, the “crutch-for-coping-with-deficiency” construal could be a source of stigma (e.g., being ridiculed, looked down upon, or discriminated against for relying on technology for successful self-regulation) or self-stigma (internalized feelings of embarrassment and shame; cf. [14]).
To an extent, how the use of e-coaching is construed in a given situation may depend on individual differences between users. For example, one plausible implication of research on “independence centrality” [46, 49] is that, with regard to self-attribution of competence, individuals who more highly value being functionally independent will be biased towards feeling a diminished sense of accomplishment for e-coach-supported self-regulation. Likewise, it could well be that people who are low in self-esteem are more likely than others to consider their reliance on e-coaching as confirming evidence of self-perceived deficiencies, especially when others are seen as not needing support.
Insofar as e-coaching technologies evoke such experiences of diminished competence, widespread deployment of these technologies raises concerns about direct setbacks to these people’s well-being, as well as about long-term harm to their agency, as self-efficacy—one’s belief in one’s ability to succeed [7]—has been shown to be pivotal for the initiative and persistence that significantly determine a person’s life-chances (see also [3]). But while these prospects would already offer grounds for caution about the extensive reliance on e-coaching technologies, we want to foreground another distinctive and neglected dimension of social injustice in this context.
The key concern about social justice that we would like to highlight in this connection stems from the possibility that stigmatizing construals of the use of e-coaching systems are co-determined by entrenched stereotypes and patterns of prejudice. It is known that structural inequalities and deeply ingrained societal biases often affect how members of marginalized groups are perceived, specifically, that members of high-status groups “tend to be stereotyped as competent, while low-status groups tend to be stereotyped as incompetent” [54, p. 1135] (see also [53]). This “status = competence stereotype,” we contend, plausibly operates as a lens that shapes how the use of e-coaching systems is perceived: the use of e-coaching systems by members of high-status groups will tend to be viewed as enhancing or improving oneself, whereas the use of e-coaching systems by people in low-status groups will be viewed as evidence of needing aid in overcoming structural cognitive, affective, or motivational deficiencies. To the extent that this hypothesis is confirmed, individuals from low-status groups could turn out to be systematically more vulnerable to stigmatizing construals of their use of e-coaching systems.
Importantly, stereotypes do not only affect how people are perceived by others, but also how people perceive themselves. The phenomenon of “stereotype threat” [50, p. 368; 66] suggests that anxiety about confirming negative stereotypes about one’s social group can hamper performance and create self-fulfilling prophecies of failure. Given the stereotypes that associate low competence with membership in low-status groups, these members are at a heightened risk of reduced self-efficacy and performance, stemming from the tendency to perceive their own use of e-coaching as a stigmatizing confirmation of these negative stereotypes about their group’s competence. Moreover, beyond undermining self-efficacy, such construals compound existing inequalities to the extent that people in these positions subsequently do not reap the same benefits from e-coaching systems as other, more affluent individuals might.
In short, the worry is that people will interpret inconclusive evidence regarding abilities and accomplishments through the lens of existing societal prejudices, regularly resulting in biased, stigmatizing interpretations that disproportionately disempower members of lower status groups. The full force of the potential for social injustice here can be seen in conjunction with the concerns raised in Sects. 3.1 and 3.2. Given that privileged individuals will have better access to and more opportunity for the seamless and fluid integration of high-quality e-coaching technologies into their activities, the use of these technologies by social elites is likely to appear more natural or optimizing and hence less like a prominent and stigmatizing “crutch.” Moreover, given the difficulties of opting out of e-coaching programs that are mandated by employers or strongly incentivized by insurance companies (see Sect. 3.2), to the extent that individuals from lower socioeconomic groups are practically unable to avoid the use of e-coaching systems, the concern is that those who need and would benefit most from e-coaching might get a poor reputation for using such systems. Given that people from low-status groups are already vulnerable to (health-related) stigmatization [29], this additional source of stigma would stand to worsen their position in society even further.
We readily acknowledge that these concerns are based on assumptions that need to be confirmed by empirical research. Our point here is to highlight the need for research in this domain and to sound a note of caution, until these potential difficulties can be ruled out. For this reason, it is important to ask hard questions, both at the level of design and at the level of policies regarding widespread adoption of the e-coaching technologies, about the unintended side effects on individual well-being of adopting these technologies on a large scale.
4 An agenda for future research
In the preceding section, we identified concerns regarding three aspects of social justice—concerns that arise with the widespread adoption of e-coaching systems. The issues we discussed within each category show that these technologies risk exacerbating existing inequalities or creating new instances of unfairness, as a consequence of what they cost, how they are funded and marketed, and what kinds of competitive advantages or social disadvantages they introduce. If these concerns about social justice are to be adequately addressed, important conceptual, empirical, and regulatory work remains to be done to ensure that the introduction of e-coaching systems into society happens in a responsible and equitable way. In this final section, we identify four areas where efforts are needed to ensure compliance with the demands of a commitment to social justice: (1) further clarification of distinguishing characteristics of e-coaching systems, (2) elucidation of their disruptive scope, (3) implementation of justice-sensitive principles in the context of the design and implementation of the relevant technologies, and (4) development of approaches to the regulatory and governance challenges arising from the widespread adoption of e-coaching systems in society.
First, there is a need for a better understanding of the distinctive features of e-coaching systems and of related core concepts (cf. [36]). Currently, understandings of the capabilities of e-coaching systems vary so widely (and are often so imprecise) that it is difficult to accurately characterize the ways in which e-coaching systems constitute a “social disruption” [32, 63]. For example, to fully appreciate how deeply automated e-coaching may transform our understanding of the accomplishment of action, it is essential to recognize how e-coaching systems—unlike certain other self-regulation support systems and (hyper)nudging technologies—support users in their practical reasoning about what goals to set and how to realize them (see again Sect. 3.3). Likewise, to accurately assess the risk that existing digital divides will exacerbate unequal opportunities to benefit from automated e-coaching, a realistic grasp is needed of the level of technological skill required to effectively interact with specific e-coaching systems (see Sect. 3.1). Here, we thus see a role for theorists and engineers to work together towards specifying a shared conceptual apparatus and corresponding vocabulary.
Second, more work is needed to detail the projected impact of e-coaching systems on individuals and their cultural, material, and social surroundings [32]. This requires sustained reflection on the extent to which the use of e-coaching technologies challenges “deeply held beliefs, values, social norms, and basic human capacities” [32, p. 6]. Paramount in this regard will be a comprehensive analysis of the interplay between e-coaching systems and human agency. In particular, research is needed into how sustained, 24/7 reliance on e-coaching systems may (i) change how we think about distributed willpower and environment-scaffolded self-regulation efforts (e.g., see [30]), (ii) erode certain self-regulation skills and promote others (cf. [4]), (iii) affect self-understanding and identity (cf. [40]), (iv) undermine or strengthen personal autonomy (cf. [2, 27, 38]), and (v) alter the social norms regarding mutual expectations of self-regulation success ([1], see also again Sect. 3.3). Insight into these areas should help in anticipating more accurately the magnitude, range, and pace of the societal impact of e-coaching systems, particularly regarding social justice. Here, we see a role for philosophers, as well as for economists and sociologists, in carefully mapping which parties and processes may be affected and to which degree, studying the relevant market dynamics that will influence the (un)equal uptake of e-coaching technologies, surveying the various domains that may be disrupted, and establishing an inventory of concepts (such as enhancement) and values (such as liberty) that may be challenged by the widespread adoption of e-coaching systems in society. Mapping, forecasting, and analyzing e-coaching systems’ impact will be critical for making realistic and timely assessments of the risks to social justice and for developing appropriate mitigation strategies (for several promising recommendations in this area, see [61]).
Third, against this background of an improved understanding of the nature and impact of e-coaching systems, practical steps will need to be taken for responsibly developing and implementing e-coaching systems. Alongside guidelines for public policy and social ethics, educational and design strategies must be explored to help mitigate the risks. For example, ethicists and social scientists could work with those involved in marketing these systems to develop revenue models or product placements that avoid compounding disadvantage and exclusion of vulnerable populations. Likewise, with an eye to increasing inclusivity, it will be important to increase awareness among designers and engineers as to the (potential) interplay between their design and implementation choices and the (dis)advantaging effects on society—for example with respect to hardware requirements or the presupposed level of digital skills. In addition, there may be ways of having the e-coaching systems themselves positively contribute to the way in which people experience their technology-supported self-regulation efforts. Recall that e-coaching, properly considered, involves establishing an ongoing, collaborative conversation between coach and coachee. Within this conversation, there are opportunities for tailoring the tone and content of the communication to better relate the coaching to an individual’s intrinsic motivation and identity (what is theorized as “self-concordance,” see [6, 62]). Here, it will be instructive to review and build on the literature on the value sensitive design (VSD) framework, which aims to facilitate the integration of ethical values into the design of new technologies—including those pertaining to artificial intelligence [19, 21, 69, 70].
Fourth, and finally, as the concerns with social justice come more clearly into view, policies and regulations will need to be developed that can guide the responsible introduction of e-coaching technologies into society (cf. [58]). In relation to the broader notion of Artificial Intelligence, several initiatives for regulation have already been put forward, the latest of which is the European Commission’s Artificial Intelligence Act.Footnote 6 This legislation, which aims to present a “balanced and proportionate horizontal regulatory approach to AI [in the EU]” [17, p. 3], posits a number of key regulatory regimes that will be pertinent to automated e-coaching, including prohibitions on manipulative systems (Title II, art. 5(1)) and essential requirements and obligations for providers of “high-risk AI systems” (Title III, art. 9–23), such as the mandated implementation and maintenance of a quality management system, a risk management system, and technical documentation. Whether all e-coaching systems will be regarded as “high risk” remains to be seen, but the provisions in art. 7(1) suggest at least that e-coaching systems operating in the respective spaces of health and employment, where they pose risk “of harm to health and safety, or an adverse impact on fundamental rights,” would be categorized in this way. Part of our aim in the present article is to encourage an understanding of “high risk” that is sufficiently sensitive to social justice concerns. Regulatory and legal efforts should also not be blind to the quasi-coercive character of employers’ incentivization of e-coaching systems for the health promotion of their workforce. Finally, insofar as e-coaching systems have disparate stigmatizing effects—be it for not having access to quality e-coaching systems (Sect. 3.1) or for having one’s self-regulation be supported by e-coaching technologies in the first place (Sect. 3.3)—improvements may be needed in the domain of anti-discrimination law to address these social injustices appropriately and effectively.
5 Conclusion
In this article, we have highlighted distinct social justice concerns that can arise with the widespread adoption of personalized, AI-driven support systems that can give users a competitive advantage in a wide variety of domains by aiding planning and promoting effective goal striving through the use of persuasive techniques. The concerns we identified with these e-coaching systems relate to unequal access to the technologies, the potential for unequally distributed liberty restrictions, and the potentially disparate impact of the use of e-coaching technologies on (self-)stigmatizing perceptions of competence. Each of these concerns, we have argued, can create or exacerbate societal inequalities.
As we have acknowledged throughout, our concerns are based, in part, on assumptions that need to be confirmed by empirical research. Beyond the empirical questions we have identified, we have also outlined four additional areas of research that we believe need to be prioritized in order to mitigate the identified social justice concerns. As will be evident from our discussion in the preceding section, much work remains to be done in each of these four areas. As e-coaching systems are beginning to get a foothold in society, and the technological developments of e-coaching systems are accelerating (including recent developments in the area of natural language processing), our central objective here has been to highlight the importance of addressing these wider, social justice concerns about inequality, coercion, and stigmatization.
Availability of data and material
Not applicable.
Code availability
Not applicable.
Notes
For both kinds of systems, principles of digital coaching ethics are relevant. On this subject, see, for example, Buergi et al. [11].
For a discussion on the plurality of social justice conceptions, see Gewirtz & Cribb [24].
Subscription models for e-coaching may also hinder equality of access in another way, namely if e-coaching in different areas (e.g., lifestyle, finances, social interactions) would require multiple subscriptions (comparable to the current landscape of video content providers). The affluent would be able to afford more comprehensive coaching across life domains.
There are also deep worries here about the ways in which these technologies (aim to) shape employees according to the exploitative assumption that the labor force of workers should be optimized for productivity. For discussion, see, e.g., Moore & Robinson [51].
The relationship between surveillance and freedom is complex and fraught. For, although there is a vast literature that convincingly documents and analyzes the often subtle (and structural) ways in which the deployment of monitoring technologies diminishes freedom (e.g., [31]), there are still cases in which monitoring is permissible and even obligatory, provided that the consent of those being monitored is genuinely given [43] or that the restriction on freedom does not constitute domination because it is not an arbitrary threat or imposition of power [57].
See https://artificialintelligenceact.eu/. Other prominent documents in this space include the IEEE treatise “Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems” (https://standards.ieee.org/wp-content/uploads/import/documents/other/ead_v2.pdf) and the “Montréal Declaration for a Responsible Development of Artificial Intelligence” (https://www.montrealdeclaration-responsibleai.com/). All links were last accessed on December 14th 2023.
References
Anderson, J.: Vulnerability, autonomy gaps and social exclusion. In: Straehle, C. (ed.) Vulnerability, Autonomy, and Applied Ethics, pp. 49–68. Routledge, London (2017)
Anderson, J.: Scaffolding and Autonomy. In: Colburn, B. (ed.) The Routledge Handbook of Autonomy, pp. 158–166. Routledge, London (2022)
Anderson, J., Honneth, A.: Autonomy, vulnerability, recognition, and justice. In Autonomy and the challenges to liberalism: New essays, pp. 127–149. Cambridge University Press, Cambridge, UK (2005)
Anderson, J., Kamphorst, B.: Ethics of e-coaching: Implications of employing pervasive computing to promote healthy and sustainable lifestyles. In 2014 IEEE international conference on pervasive computing and communication workshops (PERCOM WORKSHOPS), pp. 351–356. IEEE (2014)
Angelini, L., El Kamali, M., Mugellini, E., Abou Khaled, O., Röcke, C., Porcelli, S., Mastropietro, A., Rizzo, G., Boqué, N., Del Bas, J.M., Palumbo, F.: The NESTORE e-coach: designing a multi-domain pathway to well-being in older age. Technologies 10(2), 50 (2022)
Bailis, D.S., Ashley Fleming, J., Segall, A.: Self-determination and functional persuasion to encourage physical activity. Psychol. Health 20(6), 691–708 (2005)
Bandura, A.: Self-efficacy: the exercise of control. Freeman, New York (1997)
Brall, C., Schröder-Bäck, P., Maeckelberghe, E.: Ethical aspects of digital health from a justice point of view. Eur. J. Pub. Health 29(Supplement 3), 18–22 (2019)
Brownsword, R.: Law, liberty and technology. In: Brownsword, R., Scotford, E., Yeung, K. (eds.) The Oxford handbook of law, regulation and technology, pp. 41–68. Oxford University Press, Oxford (2017)
Buccella, A.: “AI for all” is a matter of social justice. AI Ethics 3(4), 1143–1152 (2023)
Buergi, M., Ashok, M., & Clutterbuck, D.: Ethics and the digital environment in coaching. In: The Ethical Coaches’ Handbook, pp. 369–381, Routledge, London (2023)
Charitsis, V.: Survival of the (data) fit: Self-surveillance, corporate wellness, and the platformization of healthcare. Surveill. Soc. 17(1/2), 139–144 (2019)
Cinnamon, J.: Data inequalities and why they matter for development. Inf. Technol. Dev. 26(2), 214–233 (2020)
Corrigan, P.: How stigma interferes with mental health care. Am. Psychol. 59(7), 614–625 (2004)
DiMaggio, P., Hargittai, E., Celeste, C., Shafer, S.: Digital inequality: from unequal access to differentiated use. Social inequality, 355–400 (2004)
Eubanks, V.: Automating inequality: how high-tech tools profile, police, and punish the poor. St Martin’s Press, New York (2018)
European Commission. Proposal for a Regulation of the European Parliament and of the Council: Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (COM/2021/206 final) (2021)
Figueroa, C.A., Murayama, H., Amorim, P.C., White, A., Quiterio, A., Luo, T., Aguilera, A., Smith, A.D.R., Lyles, C.R., Robinson, V., von Vacano, C.: Applying the digital health social justice guide. Front. Digit. Health 4, 807886 (2022)
Flanagan, M., Howe, D. C., Nissenbaum, H.: Embodying values in technology: Theory and practice. In Information Technology and Moral Philosophy, pp. 322–353. Cambridge University Press, Cambridge (2008)
Fossa, F., Sucameli, I.: Gender bias and conversational agents: an ethical perspective on social robotics. Sci. Eng. Ethics 28(3), 23 (2022)
Friedman, B., Kahn, P.H.J., Borning, A.: Value sensitive design and information systems. In: Zhang, P., Galletta, D. (eds.) Human-Computer Interaction in Management Information Systems: Foundations (Advances in Management Information Systems, Vol. 5), pp. 348–372. M.E. Sharpe, Armonk (2006)
Gädeke, D.: Does a mugger dominate? Episodic power and the structural dimension of domination. J Polit Philos 28, 199–221 (2020)
Gewirtz, S.: Conceptualizing social justice in education: mapping the territory. J. Educ. Policy 13(4), 469–484 (1998)
Gewirtz, S., Cribb, A.: Plural conceptions of social justice: implications for policy sociology. J. Educ. Policy 17(5), 499–509 (2002)
Gonzales, A.L.: Health benefits and barriers to cell phone use in low-income urban US neighborhoods: indications of technology maintenance. Mobile Media Commun. 2(3), 233–248 (2014)
Gonzales, A.: The contemporary US digital divide: from initial access to technology maintenance. Inf. Commun. Soc. 19(2), 234–248 (2016)
Haltaufderheide, J., Lucht, A., Strünck, C., Vollmann, J.: Socially assistive devices in healthcare–a systematic review of empirical evidence from an ethical perspective. Sci. Eng. Ethics 29(1), 5 (2023)
Hargittai, E.: Second-level digital divide: differences in people’s online skills. First Monday (2002). https://doi.org/10.5210/fm.v7i4.942
Haverkamp, B., Verweij, M., Stronks, K.: Why socio-economic inequalities in health threaten relational justice. A proposal for an instrumental evaluation. Public Health Ethics 11(3), 311–324 (2018)
Heath, J., Anderson, J. H.: Procrastination and the extended will. In the thief of time: philosophical essays on procrastination, pp. 233–252. Oxford University Press (2010)
Hoeksema, B.: Digital domination and the promise of radical republicanism. Philos. Technol. 36(1), 17 (2023)
Hopster, J.: What are socially disruptive technologies? Technol. Soc. 67, 101750 (2021)
Ives, Y.: What is ‘Coaching’? An exploration of conflicting paradigms. Int. J. Evid. Based Coach. Mentor. 6 (2008)
Jacquet, B., Bourlier, M., Caravona, L., Izquierdo, L.M., Ríos, F.J.J., Jamet, F., Engel, J., Martignon, L., Macchi, L., Baratgin, J.: Your personal chatbot coach to change your CO2 footprint. Proceedings of the 2023 International Conference on Human-Robot Interaction (HRI2023) (2023)
Jung, D., Dorner, V., Glaser, F., Morana, S.: Robo-advisory: digitalization and automation of financial advisory. Bus. Inf. Syst. Eng. 60, 81–86 (2018)
Kamphorst, B.A.: E-coaching systems: What they are and what they aren’t. Pers. Ubiquit. Comput. 21(4), 625–632 (2017)
Kamphorst, B.A., Henschke, A.: Public health measures and the rise of incidental surveillance: Considerations about private informational power and accountability. Ethics Inf. Technol. 25(4), 1–14 (2023)
Kamphorst, B.A., Kalis, A.: Why option generation matters for the design of autonomous e-coaching systems. AI Soc. 30(1), 77–88 (2014)
Kamphorst, B.A., Verweij, M.F., van Zeben, J.A.W.: On the voluntariness of public health apps: a European case study on digital contact tracing. Law Innov. Technol. 15(1), 107–123 (2023)
Kristensen, D.B., Ruckenstein, M.: Co-evolving with self-tracking technologies. New Media Soc. 20(10), 3624–3640 (2018)
Lanzing, M.: “Strongly recommended” revisiting decisional privacy to judge hypernudging in self-tracking technologies. Philos. Technol. 32(3), 549–568 (2019)
Lee, G., Lee, S.H.: Do wearable activity trackers improve employees’ health and increase re-participation in wellness programs? Health Policy Technol. 10(4), 100582 (2021)
Macnish, K.: The Ethics of Surveillance: An Introduction. Routledge, New York (2017)
Marler, W.: Mobile phones and inequality: Findings, trends, and future directions. New Media Soc. 20(9), 3498–3520 (2018)
Martani, A., Shaw, D., Elger, B.S.: Stay fit or get bit-ethical issues in sharing health data with insurers’ apps. Swiss Med. Wkly. (2019). https://doi.org/10.4414/smw.2019.20089
Martire, L.M., Stephens, M.A.P., Druley, J.A., Wojno, W.C.: Negative reactions to received spousal care: predictors and consequences of miscarried support. Health Psychol. 21(2), 167 (2002)
Mattke, S., Liu, H., Caloyeras, J., Huang, C. Y., Van Busum, K. R., Khodyakov, D., & Shier, V.: Workplace wellness programs study. Rand Health Quarterly 3(2) (2013)
McGreevey, J.D., Hanson, C.W., Koppel, R.: Clinical, legal, and ethical aspects of artificial intelligence–assisted conversational agents in health care. JAMA 324(6), 552–553 (2020)
Monin, J.K., Schulz, R., Martire, L.M., Connelly, D., Czaja, S.J.: The personal importance of being independent: associations with changes in disability and depressive symptoms. Rehabil. Psychol. 59(1), 35–41 (2014)
Monypenny, A.: Between vulnerability and resilience: a contextualist picture of protective epistemic character traits. J. Philos. Educ. 55, 358–370 (2021)
Moore, P., Robinson, A.: The quantified self: what counts in the neoliberal workplace. New Media Soc. 18(11), 2774–2792 (2016)
Mujtaba, B.G., Cavico, F.J.: Corporate wellness programs: Implementation challenges in the modern American workplace. Int. J. Health Policy Manag. 1(3), 193 (2013)
Nier, J.A., Bajaj, P., McLean, M.C., Schwartz, E.: Group status, perceptions of agency, and the correspondence bias: attributional processes in the formation of stereotypes about high and low status groups. Group Process. Intergroup Relat. 16(4), 476–487 (2013)
Oldmeadow, J., Fiske, S.T.: System-justifying ideologies moderate status = competence stereotypes: roles for belief in a just world and social dominance orientation. Eur. J. Soc. Psychol. 37(6), 1135–1148 (2007)
Paldan, K., Sauer, H., Wagner, N. F.: Promoting inequality? Self-monitoring applications and the problem of social justice. AI Soc 1–11 (2018)
Parviainen, J., Rantala, J.: Chatbot breakthrough in the 2020s? An ethical reflection on the trend of automated consultations in health care. Med. Health Care Philos. 1–11 (2021)
Pettit, P.: Republicanism: A Theory of Freedom and Government. Clarendon Press, Oxford (1997)
Pflanzer, M., Dubljević, V., Bauer, W. A., Orcutt, D., List, G., Singh, M. P.: Embedding AI in society: Ethics, policy, governance, and impacts. AI Soc 1–5 (2023)
Rawls, J.: A theory of justice. Clarendon Press, Oxford (1972)
Reisch, M.: Defining social justice in a socially unjust world. Fam. Soc. 83(4), 343–354 (2002)
Rubeis, G., Fang, M.L., Sixsmith, A.: Equity in AgeTech for ageing well in technology-driven places: the role of social determinants in designing AI-based assistive technologies. Sci. Eng. Ethics 28(6), 49 (2022)
Sheldon, K.M., Elliot, A.J.: Goal striving, need satisfaction, and longitudinal well-being: the self-concordance model. J. Pers. Soc. Psychol. 76(3), 482 (1999)
Schuelke-Leech, B.A.: A model for understanding the orders of magnitude of disruptive technologies. Technol. Forecast. Soc. Chang. 129, 261–274 (2018)
Sironi, P.: FinTech innovation: from robo-advisors to goal based investing and gamification. John Wiley & Sons (2016)
Spender, A., Bullen, C., Altmann-Richer, L., Cripps, J., Duffy, R., Falkous, C., Farrell, M., Horn, T., Wigzell, J., Yeap, W.: Wearables and the internet of things: Considerations for the life and health insurance industry. British Actuar. J. 24, e22 (2019)
Steele, C.M., Aronson, J.: Stereotype threat and the intellectual test performance of African Americans. J. Pers. Soc. Psychol. 69, 797–811 (1995)
Tiemeijer, W.L.: Self-control: individual differences and what they mean for personal responsibility and public policy. Cambridge University Press, Cambridge (2022)
Torres, E.N., Zhang, T.: The impact of wearable devices on employee wellness programs: a study of hotel industry workers. Int. J. Hosp. Manag. 93, 102769 (2021)
Umbrello, S., Van de Poel, I.: Mapping value sensitive design onto AI for social good principles. AI Ethics 1(3), 283–296 (2021)
Van den Hoven, J.: Moral methodology and information technology. In The handbook of information and computer ethics, 49 (2008)
Van Deursen, A. J., Helsper, E. J.: The third-level digital divide: Who benefits most from being online? In Communication and information technologies annual. Emerald Group Publishing Limited (2015)
Van Deursen, A.J., Van Dijk, J.A.: The first-level digital divide shifts from inequalities in physical access to inequalities in material access. New Media Soc. 21(2), 354–375 (2019)
Van Zeben, J.A.W., Kamphorst, B.A.: Tracking and nudging through smartphone apps: public health and decisional privacy in a European Health Union. Eur. J. Risk Regul. 11(4), 831–840 (2020)
Vemuri, A., Decker, K., Saponaro, M., Dominick, G.: Multi agent architecture for automated health coaching. J. Med. Syst. 45, 1–7 (2021)
Vlaev, I., King, D., Darzi, A., Dolan, P.: Changing health behaviors using financial incentives: a review from behavioral economics. BMC Public Health 19(1), 1–9 (2019)
Yeung, K.: ‘Hypernudge’: Big Data as a mode of regulation by design. Inf. Commun. Soc. 20(1), 118–136 (2017)
Xu, Z., Li, L., Zou, W.: Exploring federated learning on battery-powered devices. In Proceedings of the ACM Turing Celebration Conference-China, pp. 1–6, (2019)
Zarif, A.: The ethical challenges facing the widespread adoption of digital healthcare technology. Heal. Technol. 12(1), 175–179 (2022)
Zhang, C., Van Gorp, P., Derksen, M., Nuijten, R., IJsselsteijn, W.A., Zanutto, A., Melillo, F., Pratola, R.: Promoting occupational health through Gamification and E-coaching: a 5-Month user engagement study. Int. J. Environ. Res. Public Health 18(6), 2823 (2021)
Zillien, N., Hargittai, E.: Digital distinction: Status-specific types of internet usage. Soc. Sci. Q. 90(2), 274–291 (2009)
Acknowledgements
This work is part of the research programme Ethics of Socially Disruptive Technologies, which is funded through the Gravitation programme of the Dutch Ministry of Education, Culture, and Science and the Netherlands Organization for Scientific Research (NWO grant number 024.004.031).
Funding
The authors did not receive any specific funding for this project.
Ethics declarations
Conflict of interest
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Kamphorst, B.A., Anderson, J.H. E-coaching systems and social justice: ethical concerns about inequality, coercion, and stigmatization. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00424-7