Abstract
Franke (Philosophy & Technology, 35(4), 1–7, 2022) makes the interesting claim that, from the fact that algorithmic transparency is manipulation, it does not follow that it is good or bad: different people can have good reasons to adopt different evaluative attitudes towards this manipulation. While agreeing with some of his observations, this short reply examines three crucial misconceptions in his arguments. In doing so, it defends the view that we are morally obliged to care about the manipulative potential of algorithmic transparency. It suggests that we as a society have a moral duty to incorporate the value of transparency into algorithmic systems while keeping algorithmic transparency itself sensitive to power relations.
1 Introduction
I thank Ulrik Franke (2022) for his thoughtful comments on my paper. His piece agrees that algorithmic transparency can be utilized to manipulate people’s behavior. But Franke poses a further interesting question: how much should we care about this manipulative potential of algorithmic transparency? He suggests that people may have good reasons not to worry much about this manipulation.
Franke starts by interpreting my paper as “a Foucauldian power account of algorithmic transparency,” which he takes to be a typically constructionist one (Franke, 2022, 1). He then argues that constructionist accounts often leave a gap between factual and evaluative claims. This gap leaves room for people to accept the fact that algorithmic transparency is manipulation while declining to evaluate this manipulation as either good or bad. In this light, he seems to imply that algorithmic transparency as manipulation is not in itself wrong but more a matter of personal choice: different individuals may adopt “different evaluative attitudes towards” the manipulative risks of algorithmic transparency (Franke, 2022, 1).
In this short reply, while agreeing with some of Franke’s main points, I will point out three crucial misconceptions in his arguments. Accordingly, I spell out some reasons why we need to worry about algorithmic transparency as manipulation. I end by re-emphasizing that we should care about this manipulation because we as a society have a moral duty to do so.
2 Manipulation Is Not Necessarily a Foucauldian One
Franke claims that I offer “a Foucauldian analysis of algorithmic transparency as part of a disciplinary power” (Franke, 2022, 2). This claim is partly true. In my original article, I indeed showed how algorithmic transparency works as a disciplinary technique, and the title of the paper speaks of “uncovering the disciplinary power of algorithmic transparency” (Wang, 2022b, 1). However, I focus on disciplinary power because I use the credit scoring system, a particular disciplinary system, as a case study to illustrate how the operation of asymmetrical power in general can manipulate people’s behavior via algorithmic transparency.
This asymmetrical power is deeply rooted in the algorithmic society, where powerful entities manage to “turn individuals into ranked and rated objects” (Citron & Pasquale, 2014, 3). As Shoshana Zuboff worries, the power gap between users and surveillance capitalists is large:
(Surveillance capitalism) represents an unprecedented concentration of knowledge and the power that accrues to such knowledge. They know everything about us, but we know little about them. They predict our futures, but for the sake of others’ gain (Zuboff & Laidler, 2019).
Under this asymmetrical power structure, as discussed in my original paper, there is often room for commercial entities to manipulate consumers’ behavior. But this manipulation is not necessarily a Foucauldian one. Companies can directly manipulate people’s behavior by changing the choice architecture, which does not necessarily require norms or penalties (Susser et al., 2019; Wang, 2022a; Yeung, 2017). Moreover, in the context of algorithmic transparency, commercial entities can use a strategic transparency of their algorithms “as a psychological tool to soothe” the public and regulators (Weller, 2017, 57). For example, some big tech firms, such as Google and Facebook, have built their own “transparent” AI projects to make their complex algorithms more explicable (Tsamados et al., 2022, 219). This so-called algorithmic transparency, however, does not fundamentally mitigate the problem of AI manipulation (consider, e.g., the Cambridge Analytica scandal; Hu, 2020, 1). Such algorithmic transparency can be criticized as a kind of “ethics washing” designed to escape more extensive regulation (Yeung et al., 2019; Wagner, 2018; Bietti, 2021).
3 The Power Account Is Not a Constructionist One
According to Franke’s interpretation, the power account I propose is “a constructionist account of algorithmic transparency,” which fits a general constructionist pattern (Franke, 2022, 2, emphasis in original). On that pattern, algorithmic transparency is seen as an objective and natural thing that “is taken for granted and appears inevitable,” when in fact it is constructed by power and interests (Franke, 2022, 2).
While this account captures some critical features of my understanding of algorithmic transparency, there is a subtle but key difference: the constructionist claim assumes that objectivism and constructivism are generally inconsistent with each other. In other words, algorithmic transparency can be understood either as an objective thing or as a social fact shaped by power relations, but not as both. This inconsistency is not the point of my paper. As highlighted in my original article, the power account of algorithmic transparency should not replace the informational one; rather, the two complement and enrich each other, enabling a comprehensive form of algorithmic transparency that neither can achieve on its own (Wang, 2022b, 6):
Notedly, such a power analysis of algorithmic transparency does not mean that it is superior to the informational account or it can fully replace the latter. Rather, these two accounts are complementary, and both can be useful in illustrating different issues. The upshot is that when analyzing algorithmic transparency, we should take both accounts into consideration. We should not only disclose the information about how algorithms work, but also be alert to the hidden power structures and the way in which the disclosure happens can have profound and far-reaching effects that are often overlooked.
4 The Evaluation of Manipulation Is a Political Issue
Franke shows that people may have good reasons not to care that much about algorithmic transparency as manipulation. After all, we can imagine how people can be cognitively and psychologically overloaded by reflecting on every belief and action in their daily lives.
Nevertheless, my argument is that evaluating algorithmic transparency as manipulation is a political matter that extends beyond the individual level. To be sure, individuals are not required to reflect on every detail of their lives. However, it should at least be possible for people to reflect when they want to. Some individuals may not be concerned with using credit cards or cash, or with gaining or losing economic benefits, but sometimes they may, for example, worry about how algorithmic systems can manipulate their political views. Different people may care about manipulation to different degrees, but the reflexive capacity, which is crucial for a robust democratic society, should be preserved in societies where algorithms shape so much of our behavior. This reflexive capacity is not simply an individual’s choice but a significant value for democracy. Many critical studies have shown how artificial intelligence (AI) not only restrains people’s willingness to engage in deliberation but also undermines critical thinking (Zuboff, 2019; Susser et al., 2019; Wang, 2022a). Therefore, we as a society have a duty to build algorithmic systems that ensure the healthy development of humans’ deliberative capacity.
A further and related point is that there is a moral obligation to improve an inherently immoral and unjust system, even if people who live in the system may not care or may even feel “happy.” For example, enslaved people were sometimes portrayed as joyful, singing songs, and well treated in nineteenth-century America (Kolchin, 1993).Footnote 1 Even if some of them did feel happy, the inherent immorality of the system indicates that it needed to be abolished at the political level. This “happy slave” analysis helps clarify the political meaning of manipulation. Manipulation is “morally objectionable because it exploits individuals’ vulnerabilities, and directs their behavior in ways that are likely to be to the benefit of the manipulator” (Wang, 2022b, 18). In this sense, manipulation is inherently immoral. Individuals may not care about the risks of manipulation, but we as a society have the moral obligation to ensure that such systems are managed in a responsible and non-manipulative fashion.
5 Conclusion: Design for the Value of Transparency
My main proposal is to design algorithmic systems by incorporating the value of transparency. This means not only that we need to make algorithms as transparent as possible by disclosing information about them, but also that we should be more sensitive to issues of power. This consideration of power will be a significant design challenge, but we have a second-order duty to meet it. According to Ruth Marcus, “One ought to act in such a way that, if one ought to do X and one ought to do Y, then one can do both X and Y” (Marcus, 1980, 135). This regulative principle suggests that if we ought to make algorithms more transparent and we ought to make them more sensitive to power relations, then we have a moral duty to realize both simultaneously. This second-order duty “entails a collective responsibility to create the circumstances in which we as society can live by our moral obligations and our moral values” (Van den Hoven et al., 2012, 149).
Data Availability
Not applicable.
Notes
These narratives have been debunked as a way of romanticizing the dark history of slavery in the USA (Douglass, 2009).
Abbreviations
- AI: Artificial intelligence
References
Bietti, E. (2021). From ethics washing to ethics bashing: A moral philosophy view on tech ethics. Journal of Social Computing, 2(3), 266–283.
Citron, D. K., & Pasquale, F. A. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89(1), 1–34.
Douglass, F. (2009). Narrative of the life of Frederick Douglass: An American slave, written by himself. The Belknap Press of Harvard University Press.
Franke, U. (2022). How much should you care about algorithmic transparency as manipulation? Philosophy & Technology, 35(4), 1–7.
Hu, M. (2020). Cambridge Analytica’s black box. Big Data & Society, 7(2), 1–6. https://doi.org/10.1177/2053951720938091
Kolchin, P. (1993). American slavery, 1619–1877. Hill & Wang.
Marcus, R. B. (1980). Moral dilemmas and consistency. Journal of Philosophy, 77, 121–136.
Susser, D., Rössler, B., & Nissenbaum, H. (2019). Online manipulation: Hidden influences in a digital world. Georgetown Law Technology Review, 4(1), 1–45. https://doi.org/10.2139/ssrn.3306006
Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2022). The ethics of algorithms: Key problems and solutions. AI & SOCIETY, 37(1), 215–230.
Van den Hoven, J., Lokhorst, G. J., & Van de Poel, I. (2012). Engineering and the problem of moral overload. Science and Engineering Ethics, 18(1), 143–155.
Wagner, B. (2018). Ethics as an escape from regulation: From ethics-washing to ethics-shopping. In E. Bayamlioglu et al. (Eds.), Being profiled: Cogitas ergo sum: 10 years of profiling the European citizen (pp. 84–89). Amsterdam University Press.
Wang, H. (2022b). Transparency as manipulation? Uncovering the disciplinary power of algorithmic transparency. Philosophy & Technology, 35(69), 1–25. https://doi.org/10.1007/s13347-022-00564-w
Wang, H. (2022a). Algorithmic colonization: Automating love and trust in the age of big data. UvA-DARE (Digital Academic Repository). https://hdl.handle.net/11245.1/8ff2fdb8-90b1-445c-9afe-cda0dbd39dd8. Available at SSRN: https://doi.org/10.2139/ssrn.4311017
Weller, A. (2017). Challenges for transparency. ICML Workshop on Human Interpretability. https://doi.org/10.48550/arXiv.1708.01870
Yeung, K. (2017). ‘Hypernudge’: Big data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136.
Yeung, K., Howes, A., & Pogrebna, G. (2019). AI governance by human rights-centred design, deliberation and oversight: An end to ethics washing. Oxford University Press.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for the future at the new frontier of power. Profile Books.
Zuboff, S. & Laidler, J. (2019). High tech is watching you. Retrieved from https://news.harvard.edu/gazette/story/2019/03/harvard-professor-says-surveillance-capitalism-is-undermining-democracy/. Accessed 14 Feb 2023.
Acknowledgements
I would like to extend my sincere thanks to Liu and Lexi for their continuous support. I am also grateful for Gerrit Schaafsma’s helpful comments.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by a seed grant from Human(e) AI and the Civic AI Lab at the University of Amsterdam.
Author information
Contributions
The author confirms sole responsibility for the manuscript.
Ethics declarations
Ethics Approval and Consent to Participate
Not applicable.
Consent for Publication
Not applicable.
Competing Interests
The author declares no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Wang, H. Why Should We Care About the Manipulative Power of Algorithmic Transparency? Philos. Technol. 36, 9 (2023). https://doi.org/10.1007/s13347-023-00610-1